Centennial History of the Carnegie Institution of Washington
Volume II: The Department of Terrestrial Magnetism

In 1902 Andrew Carnegie founded the Carnegie Institution of Washington to support innovative scientific research. Since its creation two years later, the Department of Terrestrial Magnetism has undertaken a broad range of research, from terrestrial magnetism, ionospheric physics and geochemistry to biophysics, radio astronomy and planetary science. This second volume in a series of five histories of the Carnegie Institution describes the people and events, the challenges and successes that the Department has witnessed over the last century. Contemporary photographs illustrate some of the remarkable expeditions and instruments developed in pursuit of scientific understanding, from sailing ships to nuclear particle accelerators and radio telescopes to mass spectrometers. These photographs show an evolution of scientific progress through the century, often done under trying, even exciting circumstances.

Louis Brown’s first scientific position was at the University of Basel, where he helped construct the first source of polarized ions. He arrived at the Department of Terrestrial Magnetism (DTM) in 1961 to begin a fifteen-year research program in nuclear physics. However, his interests have always been broad, and he has also pursued research in optical and X-ray spectroscopy and designed numerous instruments used in astronomy and mass spectrometry. He served as Acting Director of DTM from 1991 to 1992, and continued to work in mass spectrometry as an Emeritus Staff Member until 2004.
CENTENNIAL HISTORY OF THE CARNEGIE INSTITUTION OF WASHINGTON
Volume II
THE DEPARTMENT OF TERRESTRIAL MAGNETISM

LOUIS BROWN
Carnegie Institution of Washington
Cambridge University Press
Cambridge, New York, Melbourne, Madrid, Cape Town, Singapore, São Paulo
The Edinburgh Building, Cambridge, UK
Published in the United States of America by Cambridge University Press, New York
www.cambridge.org
Information on this title: www.cambridge.org/9780521830799
© The Carnegie Institution of Washington 2004
This book is in copyright. Subject to statutory exception and to the provision of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press.
First published in print format
ISBN    eBook (EBL)
ISBN 978-0-521-83079-9    hardback
Cambridge University Press has no responsibility for the persistence or accuracy of URLs for external or third-party internet websites referred to in this book, and does not guarantee that any content on such websites is, or will remain, accurate or appropriate.
CONTENTS

Foreword by Richard A. Meserve    page vii
Preface    ix
1  Establishment    1
2  Cruises and war    9
3  Expeditions    23
4  Measurements: magnetic and electric    33
5  The Fleming transition    41
6  The last cruise    47
7  The magnetic observatories and final land observations    51
8  The ionosphere    55
9  Collaboration and evaluation    65
10  The Tesla coil    73
11  The Van de Graaff accelerator    81
12  The nuclear force    89
13  Fission    97
14  Cosmic rays    103
15  The proximity fuze and the war effort    109
16  The Tuve transition    117
17  Postwar nuclear physics    125
18  The cyclotron    133
19  Biophysics    139
20  Explosion seismology    149
21  Isotope geology    157
22  Radio astronomy    163
23  Image tubes    175
24  Computers    183
25  Earthquake seismology    187
26  Strainmeters    195
27  The Bolton and Wetherill years    203
28  Astronomy    209
29  The solar system    221
30  Geochemistry    227
31  Island-arc volcanoes    233
32  Seismology revisited    239
33  Geochemistry and cosmochemistry    245
34  The Solomon transition    253
35  The support staff    257
36  Epilogue    267
Notes    275
Index    282
FOREWORD
In 1902 Andrew Carnegie, a steel magnate turned philanthropist, had a brilliant idea. Carnegie was prescient in recognizing the important role that science could play in the advancement of humankind. He also believed that the best science came by providing “exceptional” individuals with the resources they need in an environment that is free of needless constraints. He created the Carnegie Institution as a means to realize these understandings, directing the Institution to undertake “projects of broad scope that may lead to the discovery and utilization of new forces for the benefit of man.” Carnegie was confident that this unusual formula would succeed. And he was right.

For over a century, the Carnegie Institution has sponsored creative and often high-risk science. Some of the luminaries who were supported by the Institution over the years are well known. For example, Edwin Hubble, who made the astonishing discoveries that the universe is larger than just our galaxy and that it is expanding, was a Carnegie astronomer. Barbara McClintock, who discovered the existence of transposable genes, and Alfred Hershey, who proved that DNA holds the genetic code, both won Nobel Prizes for their work as Carnegie scientists. But many other innovative Carnegie researchers who are perhaps not so well known outside their fields of work have made significant advances.

Thus, as part of its centennial celebration, the Institution enlisted the help of many individuals who have contributed to the Institution’s history to chronicle the achievements of the Institution’s five major departments. (Our newest department, the Department of Global Ecology, was started in 2002 and its contributions will largely lie ahead.) The result is five illustrated volumes, which describe the people and events, and the challenges and controversies behind some of the Institution’s significant accomplishments.
The result is a rich and fascinating history not only of the Institution, but also of the progress of science through a remarkable period of scientific discovery. Andrew Carnegie could not have imagined what his Institution would accomplish in the century after its founding. But I believe that he would be very proud. His idea has been validated by the scientific excellence of the
exceptional men and women who have carried out his mission. Their work has placed the Institution in a unique position in the world of science, which is just what Andrew Carnegie set out to do.

Richard A. Meserve
President, Carnegie Institution of Washington
PREFACE
When human activities are evaluated two things have permanence: science and history. Science is the knowledge of how the physical and biological worlds function and of their histories. It is the consequence of our learning to use instruments to read the book nature presents to us and to use our minds to form explanations out of what would otherwise be a discordant multitude of facts. It does not address why things are what we find. History is the story of how we obtained science and must tell the whole story of civilization. When the Carnegie Institution of Washington has passed from the scene, it will have left the history of a small but solid contribution. The Institution’s history can, as with other scientific organizations, be extracted from its publications, but this material lacks historical focus, and even Carnegie staff members would have difficulty obtaining a balanced picture without a significant amount of study, regardless of their familiarity with the literature of the various fields. It is the purpose of this book to perform this (very enjoyable) labor for the Department of Terrestrial Magnetism, known far and wide as DTM. It is a history intended for scientists and the scientifically literate public. The goal of the Department has been the production of science, and I construe its history to be descriptions of the advances in the sciences studied. These advances were made by individuals or small groups, and their stories are intimately bound to their work, indeed form the glue that holds the story together. In the background lie administrative matters that have influenced the course of research over the years. It is the glory of the Institution that those decisions have been enlightened, allowing one to write a history that is decidedly upbeat. 
Finally, the story includes bits and pieces of the effects of conditions in the “real world,” and it has been a strength of the Institution to have been able to shield its staff from many of these, at times, highly disturbing influences.

In reading a number of books and articles by historians of science I can see that this book differs from their norm. As a rule they do not follow the science closely, sometimes not at all, preferring to concentrate on how decisions to proceed in certain endeavors came about and the consequences of them. Their scholarship frequently tends toward sociology. They make much of the examination of the correspondence between senior participants and of the records of meetings. Such sources are not excluded from this study
but do not carry much weight. Here the emphasis is on what was the science, how was it done and who did it. Anyone connected with the Department during the past century will be appalled at what is missing: many experiments and observations not even alluded to, many important individuals not mentioned. This is a serious flaw, but one that I justify by noting how large the activities of DTM have been during its century and how many, many persons have contributed – and few of those employed or associated have had trivial parts. The purpose of this history is to gather the important facts in a readable form, which places severe constraints on the length and the manner of telling the story. To the persons slighted, either from the absence of their names or of the experiments or observations they held dear or the valuable support they supplied, my sincere apologies. This is especially true of the many whom I knew and whose work I valued. In discussing this problem with my colleagues I contrived a plan to list all the scientific and support staff members, associates, fellows, visitors, postdocs and the like. What seemed on the surface to be a useful and fair way of handling this great omission proved to be quite unworkable, owing to the magnitude and heterogeneity of the records. The overwhelming sources for this study lie in Carnegie publications, especially the Year Books. Information from the Year Books is provided in citation only very rarely because to do so in the usual scholarly manner would burden the reader with a mass of references that he does not require; it is not difficult to make the connection between the text and the Year Books. Other Carnegie publications are cited, as is material gathered from other locations. Discreet use of the Department’s personnel files has allowed the characters of the story to take on more human form. For the first 50 years the Year Books are very useful for an historian. 
They had the format of the Director describing the activities of his unit to the President and the Trustees as well as any others who might care to learn what the Department had been doing. The scientific accomplishments were described briefly, with non-specialists in mind; the significance of the work was explained and credit for its accomplishment given; publications were listed; administrative matters reported; and in some years lists of seminar speakers provided. Unfortunately, personnel were listed sporadically during the first years, with little about the support staff. Not until Year Book 21 was there a complete listing, and this was followed by a hiatus until Year Book 26, after which such listings became standard except during the war years, when they were omitted, evidently to hide secret activities. It is highly concentrated and magnificent material. In the 1950s a change began that in some cases made the historian’s work significantly harder. The Year Book began to be composed of a series of individually or group-authored papers, generally, although not always, written for other scientists in the field and quite obviously not for many
of the Trustees. The number of pages grew from 54 in 1946 to 122 in 1966 for a staff of comparable size, and reading the annual reports changed from pleasure to tedious work. This style spread throughout the Institution, and Year Book 73 reached 1197 pages. With Year Book 82 there was a critical evaluation of its value and of the cost and the effort expended by the staff in preparing it, which was judged greater than the benefit achieved and resulted in a radically new and much shorter volume. The new version had the virtue of addressing non-specialist readers; however, until Year Book 87 the material was not organized by department. The shortened format included only those matters that had come to some kind of fruition and were ready for public display, so chronology became confused and the steps preceding important science were omitted. With time the reports became unduly abridged, and it was necessary in preparing this history to cite papers from archival journals. The present format copies much from reports to the stockholders of corporations, and one suspects the intended readers have changed from those desiring to learn what the Institution has done to prospective donors.
The library holds an unparalleled collection of scientific literature on geomagnetism and related fields, such as auroras, cosmic rays, terrestrial and atmospheric electricity, including more than 10 000 catalogued books, observatory and expedition reports, conference proceedings, theses and offprints in addition to long runs of the leading geophysical journals. This substantial base was initiated and fashioned by Harry Durward Harradon, who served as librarian, archivist, editor and translator from 1912 until 1949. Unless otherwise indicated, all images in this book come from the DTM archives. One source for this history is conspicuously absent: Bauer’s personal papers. He had an extensive correspondence, which has been frequently sought by historians of science but which has disappeared. It was known to have been in the custody of his daughter, but his granddaughter, Lucy Pirtle, did not find it on settling her mother’s household. Bauer was an avid collector of books, all donated to the Department library. Fleming’s papers are also missing with no clue as to their fate. In telling the story of the Department one must record the principal science accomplished in sufficient detail for its significance to be appreciated. Unfortunately, the story requires some degree of understanding of scientific detail by the reader, and because of the wide variety of the subjects studied
at the Broad Branch Road location many well-informed readers will find their backgrounds inadequate for smooth reading, making some chapters difficult. The reader should apply the skills in selective skipping that he/she has acquired for just such occasions and wait until the troublesome paragraphs are past. As the Contents shows, there is variety ahead. The preparation of this history has drawn on the generously provided help of many persons. Foremost has been Shaun Hardy, librarian and archivist of the combined DTM and Geophysical Laboratories collections. His knowledge of the material available and of the Department’s history has proved invaluable, and his careful reading of the manuscript caused the removal of factual errors, flawed interpretations and an array of typos and clumsy sentences. The staff has always shown a serious interest in DTM’s history and many have read and commented on portions of the manuscript with improved copy the result. Current staff members who have performed this function are Alan Boss, Rick Carlson, John Graham, David James, Alan Linde, Timothy Mock, Vera Rubin, Selwyn Sacks, Sean Solomon, Fouad Tera and George Wetherill. Retired staff who have done the same are Philip Abelson, Thomas Aldrich, Roy Britten, Kent Ford and Paul Johnson. To them all my sincere thanks. For a careful reading of the manuscript by an outsider I am indebted to F. A. Kingsley.
Figure 1 Louis A. Bauer. Director 1904–31.
Figure 2 John Fleming. Assistant Director 1922–29, Acting Director 1930–34, Director 1935–46.
Figure 3 Merle A. Tuve. Director 1946–66. Circa 1963.
Figure 4 Ellis T. Bolton. Director 1966–74.
Figure 5 George W. Wetherill. Director 1974–91. Circa 1990.
Figure 6 Sean C. Solomon. Director 1992–present.
1 ESTABLISHMENT
The Department of Terrestrial Magnetism, invariably called DTM and referred to briefly in the beginning as the Department of International Research in Terrestrial Magnetism, was the creation of Louis Agricola Bauer, and for most of the time of his active participation it was the creature of his will. Bauer was born of German–American parents in Cincinnati in 1865. He completed a doctoral dissertation at Berlin in 1895 on the mathematical analysis of the secular variation of the Earth’s magnetic field, and this thesis work so stimulated his interest that on returning to the United States he established a scientific journal in 1896, Terrestrial Magnetism: An International Quarterly, the only periodical devoted specifically to geomagnetism, atmospheric electricity and related subjects.

Perhaps the greatest accomplishment of nineteenth-century physics was the creation of a theory that described accurately all of the observed phenomena of electricity and magnetism, a theory that predicted as its crowning achievement the existence of electromagnetic waves. By the end of the century this had given rise to a practical method for maritime communication over distances of hundreds of kilometers, the last of the century’s practical applications that were rapidly transforming civilization: worldwide telegraph systems, cities linked by telephone, electric traction for railways and the replacement of dim oil lamps by brilliant electric lights. For all its triumphs, however, the theory left one particularly vexing question unexplained. There was nothing that provided even a clue to the origin of the Earth’s magnetic field.

Geomagnetism had been the subject of continual study since the invention of the compass needle around 1300. It was a subject that always presented a new layer of confusion when an older one had been removed. It was quickly learned that the compass did not generally point toward the geographic north pole but varied according to the location of the observer.
This was followed by the knowledge that the “variation” itself varied, compelling it to be renamed “declination.” When properly balanced, needles were found to lie parallel to the Earth’s surface only in the vicinity of the equator with the north-seeking end pointing down as latitude increased to the north, and the south-pointing end pointing down as latitude increased to the south. Further observations disclosed that the model suggested by this, ascribing
the field to a single dipole set at an angle with the Earth’s rotation axis, was far too simple to approximate the data. Temporal variation of declination was observed for periods of hours as well as years. Even more disconcerting, the magnitude of the field was found to be decreasing. All these results depended on measurements made primarily in Europe and America. By the nineteenth century the geomagnetic field was being examined in whatever locations were accessible, and instruments were being developed to allow more accurate measurements to be made. Carl Friedrich Gauss and Wilhelm Eduard Weber provided the discipline with a sound observational and analytical underpinning during the 1830s, but neither an understanding nor a global mapping were at hand. When Bauer proposed his approach to this seemingly endless problem there had been no reliable observations at sea for fifty years, and extensive international data – in one case, the International Polar Year of 1882–83, where their acquisition was an expressed goal – were not reduced to usable form, as no international bureau existed for this rudimentary task. The Pacific Ocean had regions with large declination errors, a serious navigational hazard. In 1899 Bauer became the first chief of the Division of Terrestrial Magnetism at the US Coast and Geodetic Survey. He seized the possibility offered by the founding of the Carnegie Institution to accomplish what was fixed in his mind: to map accurately the geomagnetic field and study its temporal variation as a first step toward understanding its causes and influences. He saw such an organization as the key to worldwide cooperation. Although national groups were studying geomagnetism in manners similar to that of the Coast and Geodetic Survey, their governmental nature hindered the needed global cooperation. Furthermore, such a survey had to be made during a time short enough to allow the corrections for temporal drift to be small. 
Observational programs that extended over decades would yield data difficult to normalize and the required accuracy would be lost. Things had to be pushed. Bauer correctly believed that a non-governmental agency could marshal the efforts of various national groups and provide observers for the many parts of the Earth’s surface that would not be covered by the others. The rapidity with which the Department was formed following Bauer’s proposal was in striking contrast to the thoughtful organization of the remainder of the Institution. The Trustees established 18 committees in the first year for advice as to what subjects of scholarship were to be considered, and terrestrial magnetism was not represented. In spite of this the Department received its initial funding without significant discussion. This success in persuading the founders of the Institution to support the plan may have lain in a greater public awareness of magnetism as an accessible and interesting science than would be the case today. Most people were familiar with the compass that accompanied them on the Sunday walks, which were then common. They
knew of its deviation from true north and appreciated it as the basis for marine navigation and knew it as a scientific mystery. This is illustrated in the 1911 edition of Encyclopaedia Britannica, which devoted 34 pages to terrestrial magnetism and magnetometers. The 1993 edition had only 13, most being about the effects attributable to the solar wind. Bauer’s success in this venture owed much to his inspiring international outlook. Given no middle name at birth, he provided himself with “Agricola,” the Latin word for farmer, to match his German “Bauer”; he also insisted that his first name be given the French pronunciation.1 This outlook was obvious in his journal, Terrestrial Magnetism, declared open to papers in “all languages that can be printed with Roman characters.” His proposal was backed by letters from the heads of the US Coast and Geodetic Survey, the German Naval Observatory, the Bureau Central Météorologique, the Bureau des Longitudes, the University of Manchester and the Prussian Meteorological Institute among others. Of special significance, however, was the selection of Robert Simpson Woodward as Institution President at the same time as Bauer’s appointment as the Department Director. Woodward was the Dean of the School of Pure Sciences of Columbia University, but more to the point, he had been Chief Geographer at the Geological Survey, a discipline that would have naturally been supportive of an international effort to determine finally the nature of the Earth’s magnetic field. The Coast and Geodetic Survey had given Bauer’s proposal strong support, and two years after the Department’s establishment Henry S. Pritchett, Superintendent of the Survey, was elected as a Carnegie Trustee where he soon served on the important three-member Finance Committee.
From the time of its establishment in 1904 the Department occupied quarters rented in the Ontario Apartments located to the north of Columbia Road on a promontory overlooking the Zoological Park in northwest Washington (Fig. 1.1). The impressive building was constructed in 1904 by a company of which Charles D. Walcott, Secretary of the Institution, was president.2 It remains there today as a cooperative apartment house and retains all its early stateliness. Initial requirements were only to provide space for the administrative and logistic support of the far-flung operations and for the personnel needed to reduce to usable form the huge amount of magnetic data that began to arrive and that had accumulated from varied sources over the previous years. Bauer maintained a residence there for himself and his family as well as another apartment for visitors. In January 1908 a small machine shop was set up, certainly an anomalous fixture for a fashionable residence, at least according to the more dainty modern zoning practices. By the time the Department moved to the Broad Branch Road location, a total of 16 rooms were required in the Ontario. Observational research was carried out in two non-magnetic huts overlooking the Zoo about 300 feet from the building.
Figure 1.1 The west wing of the Ontario Apartments. Completed in 1904 when DTM was one of the first tenants. An additional wing was built the following year. (Archives of the Ontario)
Figure 1.2 Buildings and site of the Department, 3 December 1919. Photograph taken by Dr. W. F. Meggers of the Bureau of Standards from an airplane at 4000 feet. The Main Building is prominent; the wooden Standardizing Magnetic Observatory is located at the right rear; the new Experiment Building is to the left, partially hidden by trees.
Figure 1.3 The renovated campus of the co-located Department of Terrestrial Magnetism and Geophysical Laboratory in fall 1991. The Main Building, later named the Abelson Building, is readily identified at the right front. The Geophysical Laboratory moved from their previous home on Upton Street, NW and occupied the left two thirds of the large building, called the Research Building. DTM occupied the right third of the Research Building and the Cyclotron Building, partly obscured by the trees and to the left of the prominent Van de Graaff. The Experiment Building and its Annex are at the left, partially obscured by trees. The Standardizing Magnetic Observatory was demolished.
Two staff members joined Bauer in the first months of the Department’s operation: John Adam Fleming and James Percy Ault, both magnetic observers. Fleming immediately became Bauer’s chief assistant with responsibility for testing instruments and with administrative abilities that became increasingly important in the functioning of the Department. Ault was temporarily assigned to the Coast and Geodetic Survey to secure training for magnetic observations at sea and demonstrated the characteristics of a mariner that soon earned him a license as master mariner. The expanded needs of the Department for experimental work and for fireproof storage of the accumulating records led the Trustees to allot $127 200 for acquiring 7.4 acres of land and building permanent quarters at the present location. The large size of the tract and its location were chosen to reduce magnetic disturbances to a minimum. No industrial activity was anywhere near and the Connecticut Avenue electric streetcar line was 2100 feet to the west. Most of the employees found the new location out in the country, and for their convenience a shelter was built on the corner of Connecticut and 36th Street. Later the commuters wanted a parking lot, strikingly absent in the first ground layout. Design of the main building, rendered in the style of the Italian renaissance, and a non-magnetic wooden building for instrument standardization came from the prominent Washington architect, Waddy B. Wood (Fig. 1.2). A contract was awarded to the Davis Construction Company on 29 April 1913 and the work completed on 14 February of the following year. Other buildings came later (Fig. 1.3).
2 CRUISES AND WAR
Bauer submitted a plan to the Institution on 3 October 1904 to undertake a magnetic survey of the Pacific Ocean. While the general state of accurate knowledge of the terrestrial field worldwide was poor at the time, the knowledge of the Pacific was particularly bad for the obvious reason that it depended on a few occasional expeditions undertaken many years before and on observations at various islands, many of which had strong local effects. Bauer’s ultimate goal was explaining at some level the origins of the terrestrial field, but he had nothing against helping mariners, who had to sail the Pacific with the worst compass corrections of the globe and with the gyrocompass still in the future. Such a project required a vessel whose construction had a minimum of magnetic materials that might interfere with the measurements. Although the construction of a special ship with that in mind was discussed immediately, it was thought prudent to gain experience in a chartered wooden vessel from which as much of the iron had been removed as possible. A few cruises would provide important guidance for the design of a special non-magnetic research vessel as well as gather experience in the difficult task of making the measurements at sea.

The brigantine Galilee proved to be a suitable choice. Built in Benicia, California, in 1891, it had been engaged originally in carrying passengers between San Francisco and Tahiti but was then carrying freight to various South Pacific islands. Iron furnishings were removed, the steel standing rigging replaced by hemp, and a bridge running fore and aft between the masts was constructed 12 feet above the deck for mounting the instruments. The removal of iron did not convert the Galilee into a perfect research vessel but it did produce “magnetic constants” substantially lower than those of any ship previously used for this purpose. The charter called for the Department to pay $1400 per month for the vessel, master and crew (Fig. 2.1).
On 5 August 1905 the Galilee sailed from San Francisco with J. F. Pratt, an officer of the Coast and Geodetic Survey, in command of the expedition but with Captain J. T. Hayes as sailing master. There were three others in the scientific party, including J. P. Ault, who would become both commander and sailing master on later cruises of the Carnegie. The first passage was an experimental trip to San Diego during which proficiencies in observational techniques were developed under the supervision of the Director. Especially
Figure 2.1 The Galilee in San Francisco harbor on 2 August 1905. This vessel was chartered from 1905 to 1908 for the first cruises. Magnetic fittings were removed to the extent structural integrity allowed. The bridge is seen above the deck and between the masts for mounting instruments as far as possible from magnetic disturbances.
Figure 2.2 The Galilee was caught by a typhoon in August 1906 while in Yokohama harbor. The water was pumped out and it was soon on its way.
important was learning to determine the corrections that had to be applied to the raw data as a result of the iron that remained in the vessel, a procedure called swinging ship because of the need to orient the vessel into several different directions. This was followed by a voyage to Hawaii and beyond, returning to San Diego. After this first cruise, alterations were made to the vessel as seemed advisable based on the early experience; the observers from the Coast and Geodetic Survey returned to their normal duties ashore, and a second more extensive Pacific cruise began on 2 March 1906, one enlivened by the ship sinking in Yokohama harbor as a consequence of a typhoon (Fig. 2.2). The Galilee’s final cruise, under the command of William John Peters with Hayes as sailing master, ended on 5 June 1908 in San Francisco, and the vessel was returned to the owners (Figs. 2.3 and 2.4). Peters had joined the Institution in 1906 on returning from the Ziegler Polar Expedition of 1903–05 in which he had served as second in command and chief of its scientific work. He had worked for the Geological Survey, carrying out topographical work in the western United States and Alaska. He remained with the Department until his retirement in 1928. On 8 December 1908 the Trustees of the Institution let a contract to the Tebo Yacht Basin Company of Brooklyn for the construction of a nonmagnetic vessel under the supervision of Captain Peters, who controlled all metals that went into the fabrication. It was launched on 12 June of the following year, christened Carnegie by Dr. Bauer’s daughter, Dorothea Louise,
The Department of Terrestrial Magnetism
Figure 2.3 Aboard the Galilee. A dinner in the cuddy attended by P. H. Dike, D. C. Sowers, W. J. Peters and G. Peterson. October 1907.
Figure 2.4 The Galilee. The crew in an informal social gathering on deck. “The name is Joe.” October 1907.
in the presence of about 3500 persons (Fig. 2.5). The first cruise of the new vessel began in September (Fig. 2.6). The Carnegie had lines very similar to the Galilee’s and was about the same size, with a displacement of 568 tons and a register tonnage of 246 (Fig. 2.7). The brigantine rig was retained because the square sails on
Figure 2.5 The launching of the Carnegie was greeted by a chorus of ships’ sirens and gunfire salutes. Dr. Bauer’s 17-year-old daughter, Dorothea, was the vessel’s sponsor and applied the traditional bottle of champagne expertly, ensuring that the timbers were well drenched. Press coverage was extensive. 12 June 1909.
Figure 2.6 The quarter deck of the Carnegie on the occasion of its acceptance. In the front row (left to right) are the builder, Wallace Downey and his son, in the second row are L. A. Bauer (3rd), W. J. Peters (5th), J. P. Ault (7th) and J. A. Fleming, proclaiming with a flat cap his non-nautical station. August 1909.
the foremast did not interfere with the instrument stations. Besides the extreme reduction in residual magnetism, other changes were incorporated as a consequence of the voyages with the chartered ship. The open observation deck was replaced by two observatories with glass revolving domes that allowed much better conditions for the observers (Fig. 2.8). The Galilee had no auxiliary power, and its absence had been sorely felt at various times. Swinging ship and entering and leaving port frequently required a tug, often not available at the islands visited. There had been times when a propulsion motor could have lessened danger, and periods of calm when it could have saved observation time. The requirement of minimum magnetic material was in severe conflict with the needs of an engine, resolved in compromise by the modification of a Craig four-cylinder internal combustion engine of 150 horsepower (Fig. 2.9). Its fuel was gas generated from anthracite coal, the fuel most readily obtainable in the diverse ports on the Carnegie’s cruises and prophetically declared safer than liquid fuels. The total magnetic material was 600 pounds. The presence of a coal-gas generator allowed a 6 hp motor for a refrigeration plant. The total cost of the vessel was $115 000. In 1919 it was decided to convert the Carnegie’s engine from producer gas to gasoline, as this fuel was by then widely obtainable and the gas engine had given its share of trouble. The original builder of the engine made the
Figure 2.7 The Carnegie at the time of sea trials in 1909.
Figure 2.8 The instrument cupolas on the Carnegie. These provided protection from the weather for the observers and their instruments, which the open bridge of the Galilee could not.
modifications. This, and other alterations, including electric lighting and routine maintenance, were undertaken at Baltimore, but unfortunately not in dry dock. The ship was hauled out on a marine railway, which failed when returning the vessel to the water, causing a three-month delay. The sea world of 1910 was vastly different from that of today. One tends to remember the Edwardian Age in terms of magnificent and beautiful liners, typified by the Mauretania, and although shipping was rapidly changing from sail to steam there were slightly more sailing ships than steamers registered under the American flag, although the steamers were on the average three times heavier.1 (The large US investment in sail came primarily from the trade between San Francisco and Europe and the US east coast before the Panama Canal, as steamers could not afford the cost of coal for trips around Cape Horn.) Being a seaman in sail was filled with risk, 1.3 chances in 100 of dying at sea each year,2 but sailors troubled themselves not with such statistics, and no one needed to instruct them that their calling was dangerous. It was still a time when boys ran away to sea, where questions were not asked, where men roamed the world without the impediment of identification papers (Fig. 2.10). That desertion of a merchant ship was still a criminal offense did not trouble many. The Galilee and the Carnegie were manned by these men.
Figure 2.9 Transport of the 150 hp engine of the Carnegie from its builder for installation in the vessel. The ship’s iron content was only 600 lbs. Originally it was fueled by coal gas produced from anthracite. A later conversion to gasoline resulted in the ship’s destruction.
Figure 2.10 It was an age when boys ran away to sea.
Martin Hedlund, a Swede, signed aboard in New York in March 1915 as an able seaman because the vessel was so beautiful. A sailor since age 15, he soon became second mate and stayed with the ship for 18 months. He later recorded some of his experiences, a couple of incidents giving us a view of that vanished world:3 We struck bad weather off Hatteras and had shortened sail, as the vessel was shipping a lot of water in a high choppy sea. It was early in the morning, four bells having just struck when the cook came on deck with his ash bucket. It was a big heavy one and when he leaned over the lee rail to empty it the sea caught it and before anybody could do anything he went overboard still hanging onto his bucket. Well, he never came up again so must have gone straight down with it. The ship was run up in the wind and a life buoy thrown over. The lee boat was cleared but when Captain Ault came on deck he decided not to lower it. He said that one man lost was enough, he would not risk a boat’s crew. As there was no sign of the cook, we watched the buoy light until out of sight, then squared the yards and proceeded on our journey.
He was not the first man lost overboard. A few hours after dropping the pilot on 10 June 1914, outbound from New York, the ship’s log has the following entry: At 6:35 PM man was discovered overboard by Boatswain Pedersen. Lifebuoy thrown toward him, as vessel immediately brought about. Boat lowered with two seamen and Boatswain Pedersen in charge. Two sent aloft to keep lookout, but nothing was seen of him after searching for one hour, going over the spot several times; the vessel passing within 50 ft of the lifebuoy on her first return to the spot. As it was growing dark, boat was hoisted on deck, and the vessel put on her course. All hands were called to quarters, and J. Bosanquet found missing.
In December 1915 the Carnegie began a circumnavigation of Antarctica, a voyage Hedlund remembered well. As she came around the southern continent and approached the Australian longitudes the weather became worse and we had almost continuous storms with mountain high seas. . . . Sometimes when rolling, nearly on her beam ends, all liquid used to disappear from the pots (on the stove), only the thick stuff stayed in. Many times we had to sit on the floor to eat, bracing the feet against something with our backs to the wall. The cook had the worst of the deal. One day he got thrown out of the galley and landed on the mess room table together with the coal from the galley bunker. . . . The ice was packed pretty tight at times and there was a danger of getting caught. Sometimes it took a lot of maneuvering to follow the narrow channels between the ice floes.
The Carnegie measured 133 icebergs during this circumnavigation, the largest being 5.0 miles long and 300 feet high (Fig. 2.11). Casual examination of the Carnegie’s log discloses numerous succinct entries that are masterpieces of understatement. On 6 September 1915, “Clewed up lower topsail and sailed before the wind under bare poles, course E by S.” Ten hours later: “Whole gale and hurricane squalls. Very heavy sea.” This cruise continued in the Pacific and then round Cape Horn to Buenos Aires, arriving 2 March 1917. While at that port the United States entered the world war against Germany, and uncertainties concerning naval warfare led to the vessel remaining there for nine months. Although not found in the Department’s records, the decision to keep the Carnegie at Buenos Aires was very likely made because the famous German surface raider Seeadler was known to be in the South Atlantic during March through April 1917, sinking 14 Allied ships. In any case, the Carnegie did not put to sea again until the raider had been destroyed.4 Captain Ault was advised by cable to transfer command of the Carnegie to Harry Marcus Edmonds and return to Washington by way of Valparaiso for conference and assignment to shore duty. By October plans were begun to bring the ship home by way of Cape Horn, the Panama Canal and the western Atlantic. This became Cruise V,
Figure 2.11 Map of the circumnavigation of Antarctica by the Carnegie from 6 December 1915 to 1 April 1916. Note how poorly the coast of this continent was known at that time. This was unquestionably the most difficult and hazardous voyage of the vessel.
which obtained data from the Pacific and from land stations in Chile, Peru and Panama. Home port was reached on 4 June 1918, after an absence of three and a half years. Ault undertook the development of navigation methods for aviators, needed by the rapidly expanding US air services being mobilized for war.5 The Department’s instrumental skills were helpful in aiding US manufacturers to fill the gap left by European manufacturers of chronometers, sextants and compasses. Bauer, Fleming, H. W. Fisk and W. F. G. Swann developed magnetic instruments for the detection of submarines, one of which was tested at the Navy’s submarine station in New London, Connecticut. These studies were broadened to include mapping and describing mathematically the magnetic fields of ships. Professor M. L. Nichols of Cornell
Figure 2.12 The mail barrel. Ships on long passages followed a practice of setting a barrel marked by a flag adrift at locations suitable for pickup by a vessel bound for the port where its contents were taken to the post office and dispatched in the next available mail ship. This method was used to send correspondence and data back home. Captain Ault wrote many letters to his wife that used this method of transfer. Here the barrel is being dropped near Mangareva, the principal island of the Gambier Islands of French Polynesia. December 1916.
University, who was working at DTM, left a description of a magnetic influence mine.6 Other aspects of the war were felt as well: some employees and observers went into military, naval and merchant marine service, and members of the Department were granted, at their request, plots in the grounds to cultivate gardens. Nothing marks the contrast with the world at the time of these cruises more than communication. For an age that takes instant worldwide telephone service for granted, the conditions aboard sailing ships present a startling contrast. Few steamers and no sailing ships had wireless in 1910, and it was still a common fate for a vessel simply to go missing, leaving mystery and pain for the crew’s families and loved ones. Months passed in which the observers had no news of their families, nor their families of them. Other ships were seldom encountered in the wide Pacific, and letters often had to be left for the slow transit out of remote islands (Fig. 2.12). Only when their vessel reached a port with cable telegraph could word be sent to the Director and then transmitted to the families.
Sightings of other ships were rare and carefully, possibly fondly recorded in the log, such as 23 October 1915: “Sighted steamer west bound, four masts, black funnel.” A sighting of 7 February 1916, during the circumnavigation of Antarctica, reminded them of the nature of their profession. “Passed a dead body 1/2 cable distant on starboard side. By appearance it had been in the sea for a long time.” Any thought of lowering a boat to identify the corpse was dispelled by the notation of wind force 10.
3 EXPEDITIONS
The opening years of the Department were marked by rapidly deployed magnetic observations on land and at sea. The cruises of the Galilee had relatively simple logistics. Keeping a vessel supplied at sea was one of the normal routines of the world, but the land expeditions required planning and support on a wide scale (Figs. 3.1 and 3.2). A residuum of that planning can be found today in the Department’s library, which has an unparalleled collection of travel books from 70 to 90 years ago that were used to familiarize personnel with the possible rewards and dangers. With few exceptions, the expeditions were to regions of the Earth for which the means of communication and transport had not achieved the level of which the industrialized nations were by then so proud. It was, however, at the peak of European colonization, and Bauer’s legions could draw on the services of officials with whom they could converse and with whom they shared common attitudes. The speed with which he organized land expeditions all over the Earth once the Department was formed and funded astonishes one today. Where had he been able to locate so quickly the leaders for those journeys, journeys that were often barely short of explorations? In addition to the data acquired by an expedition, the observers were required to submit field reports providing a narrative of its course. It is in the reading of these reports that one recognizes the remarkable nature of Bauer’s accomplishment and the ingenuity, even daring, of his observers. Each report contains a listing of the instruments carried and all too often of the firearms; it ends with a tabulation of the total distance traveled, broken down by the various modes of transport. For a modern reader or even a modern Carnegie field observer, accustomed as he is to reaching any place on the Earth’s surface in a few travel segments accomplished with motorized vehicles, the methods of travel used by the Department’s people 90 years ago are simply startling.
Until after 1920 travel by motor car was extremely rare, of very limited duration and as often as not terminated by failure. The point of origin for the field work could frequently be reached by ocean or river steamer followed by rail, sometimes on lines under construction. Stations, by which were meant well-defined locations where measurements were acquired, were sometimes occupied and data recorded along the rail and river routes, but after that travel was by animals (horse, mule, donkey and camel) used for traction, saddle or pack
Figure 3.1 Magnetic and electric field observations required protection of instruments and observers from the elements. Above is a minimum station requirement.
Figure 3.2 Elaborate protection on Baffin Island, Canadian Northwest Territories. March 1922.
and by men used as carriers of both goods and passengers, chairs and wheelbarrows being favorites for the latter in China (Fig. 3.3). On waterways every conceivable mode of craft was used: steam and motor launches, sail of all possible rig, canoes, bamboo rafts and barges. Although Bauer gave instructions by cable from Washington directing the overall operation, the observers had to organize their means of travel and provide for the logistic support. Transporting the observers and their instruments to the desired locations required no small amount of local understanding and skill, even where political conditions were calm. The end of the report gave an accounting of the cost and days expended per station together with a careful description with photographs that would allow an accurate reoccupation at some later time. The members of the observation parties that spread over the face of the Earth seemed to have gone forth with a demeanor that smoothed their interaction with the native people and left few tales of strife in the Observers’ Field Reports, although many expeditions had to work in regions that were in various degrees of turmoil, giving a distinctly romantic side to some expeditions. The leaders of the land expeditions were a varied lot. Their stays with the Department were generally only during the years of the expeditions. Darius Weller Berky led the remarkable expedition from Algiers over the Sahara to Timbuktu in 1913, one which calls up distinct images from “Beau Geste.” He joined the Department in March 1912 at age 32 from The Citadel Military College, where he had instructed in physics, and was a rarity among the observers in having had no previous experience as a magnetic observer.
Figure 3.3 Observer W. C. Parkinson using local transport in western Australia. 1914.
His report of the expedition in Morocco, Gambia, Sierra Leone and French Guinea, March to August 1912 had incidents seldom encountered by modern DTM people:1 “it was found necessary to abandon the caravan trip to Fez and Rabat because a revolt had occurred at Fez and about 300 Europeans had been massacred. . . . Mogador was reached on May 22; here, on account of the plague, I was held in quarantine until May 30.” This experience readied him for Algiers to Timbuktu (French West Africa) from October 1912 to July 1913.2 It was deemed advisable to carry sufficient provisions to last the expedition for 7 months. . . . Through a military contractor a caravan of 19 camels was finally engaged. . . . Each white member of the party was armed with a Winchester rifle and a revolver; for each weapon 500 rounds of ammunition were carried. The three Arab servants of the party were armed with shotguns. . . . he was reported to have captured 300 camels and killed about 80 of the marauding Berbers. It was assumed that this event had cleared our route, for some time at least, of these marauders.
Berky reports that they had occupied 72 stations. Two months of the time was spent making preparations in Algiers; danger from the Berbers cost another 13 days, resting the animals 15 days, and sand storms 4 days. One technical innovation intruded into this wild scene: chronometers were calibrated through a wireless time signal transmitted from the Eiffel Tower in Paris. Charles Keyser Edmunds had served as a magnetic observer for the Coast and Geodetic Survey in 1900, providing, one assumes, the link to Bauer, as he joined the Department’s field work when he was President of Canton Christian College in China. The extent of his travels in that country can be gauged from the title of a lecture he gave to the Royal Asiatic Society in 1918: “30,000 Miles in China.” He led an expedition along the South China Coast and into Yunnan, French Indo-China and Siam, October 1911 to March 1912.3 Because of the disturbed state of the Province as a result of the revolution, which had begun since my departure from Canton, and because the reports of trouble on their border between Yunnan and Burma were quite alarming, it was deemed best in conference with the British and French consuls-general at Yunnan, to abandon the plan to proceed westward; neither was it possible to proceed northward toward the Yangtse for a similar reason. . . . Progress was very slow on account of numerous rapids, rendering unloading and reloading the canoes necessary several times a day. With the exception of one night, all nights during this journey were spent in native huts, through the courtesy of the chiefs of the villages. . . . On April 8, I left Canton for America by way of the trans-Siberian Railway, and reached New York on May 25.4
Edmunds reported having occupied 43 stations, requiring 257 miles by ocean steamer, 1549 by river steamer, 150 by canoe, 75 by launch and 150 on foot. Frederick Brown joined the Department at age 19, the youngest appointed, with no college education; his qualifications for his new tasks were having been a magnetic observer at the Greenwich Observatory plus whatever recommendations found their way into Bauer’s correspondence. He undertook the most demanding assignments, which he carried out with a flair. He also proved an excellent photographer. He went right for the thick of things in Mongolia and northeastern China, March 1915 to July 19165 (Fig. 3.4). In a district of central Mongolia known as Derarangai, on September 30, a band of six outlaws stopped the wagon and demanded payment of 200 ounces of silver to allow the party to proceed. When this was not paid, the boxes and stores were searched and finally about 20 pounds in English money, a rifle, and various stores were taken by the chief, who said it would be safest to return, as the country ahead was being looted by a big band of robbers. It was then decided to travel only by night to the southern border of Outer Mongolia, which was reached on the third night, after two days of camping in gullies away from the road. Outer Mongolia proved to be quite peaceful, but there were very few caravans on the road, traffic between Kalgan and Urga having practically ceased.
Figure 3.4 An expedition led by F. Brown is seen crossing a pass between Kweichow and Kwangsi provinces in China. April 1915.
This was followed with southwestern China and Upper Burma, November 1916 to June 1917.6 The caravan, consisting of 9 chair-bearers, 12 load-carriers, and a head man to manage the coolies, started after tiffin, December 8. The local authorities insisted on sending an escort of 36 soldiers and an officer to protect the party. . . . a large cargo boat, laden with paper, dried fish, and bamboo rope, was boarded for the final stage of the journey to Yenpingfu, but in attempting the large rapid just below the town, she was dashed onto a partially submerged rock, and after being almost overturned, was swept on by the rush of waters, with the bottom boards stove in. The crew hurriedly ran her ashore where the cargo was transferred to a salvage boat sent down from the city. As extensive repairs were needed, the trip was not resumed until the following afternoon. . . . and reached Hong Kong on Sunday morning, December 9. The night boat to Canton was then boarded, and the party reported at the Canton Christian College the following morning, thus ending a very pleasant trip of about 3,000 miles, lasting almost four months. (See Fig. 3.5)
Brown switched continents in going to Cameroon and French Equatorial Africa, May 1919 to January 1920.7 On Sunday the sultan rode out in state, surrounded by his archers and spearmen. The streets were filled with yelling horsemen, galloping up and down in their colored and picturesque garments and brandishing their quaint weapons. . . . For the comparatively short distance of 250 miles to Lai, 24 days’ travel was necessary, a day often lasting from 6 a.m. to 10 p.m. The weather was hot, with stormy afternoons and nights, and the scores of tsetse and other biting flies by day, together with the swarms of bloodthirsty mosquitoes by night, made the trip very unpleasant. . . . Leaving Ngoila on November 30, Sembé was reached the next day, after a march through a thick forest. An escort was supplied in this district as a protection against the remarkably large and ferocious gorillas which kill a number of the natives on the main forest paths every month.
Figure 3.5 A Chinese crowd watching Observer Frederick Brown magnetize dip needles. It was said that no other European had ever visited the town, it being off the main roads in a lonely mountainous district. July 1917.
And then on Angola to Mozambique, January to October 1920.8 They [the natives] then form up around the traveler and escort him into the village, singing a chorus to the accompaniment of hand-clapping, some dancing along in front and calling out complimentary titles. Meanwhile, the chief and the men of the place assemble and sit down on the path awaiting the arrival. . . . A few days before our arrival at the former place, two lions had broken into a hut and eaten a native, and the villagers were consequently living in a state of terror. On two occasions lions were prowling around the camp at night. They did not attack, but, after roaring considerably, went away.
Allen Sterling was a 24-year-old B.A. graduate from the University of Kansas who undertook Chile, Bolivia and the Guianas, February 1917 to July 1918.9 The population of Rurrenabaque came to the river and bade us farewell, saluting with their 44-calibre Winchester rifles. We drifted a month on the river, stopping only to occupy stations en route, and occasionally for an hour’s hunt for monkeys, turkeys, or pigs, which added materially to the variety and quantity of our menu. . . . From Porto Velho I went by a good river steamer to Manaos, where I arrived October 29, 1917, and received my first mail since June. . . . The gasoline motor working full speed was helped by 12 husky natives, the owner of the boat, and the magnetic observer aboard, all pushing and pulling with poles arranged with various kinds of hooks, prongs, and points adapted for grappling branches, vines, and stones, or for poling. The work was very strenuous, and on a few occasions a stretch of perhaps 50 feet (?) was passed in no less than half a day, while clothes were literally torn from our backs. . . . The total distance traveled from the time I left Mr. Wise at Mollendo was about 14,700 miles, of which about 4,200 was by ocean steamer, 6,000 by river steamer, 2,500 miles by train, and the remaining 2,000 miles by raft, canoe, walking, and riding. This total gives an average of 253 miles per station. The average field expense was slightly over $65 per station.
Figure 3.6 While some expeditions faced tropic heat, others had to deal with the arctic cold. Observer G. D. Howell prepares his sledge for a trip to the north. Baffin Island, Canadian Northwest Territories. 21 March 1922.
William F. Wallis had been a magnetic observer with the Coast and Geodetic Survey and undertook at age 40 the Mediterranean, Red Sea Coasts and Abyssinia, October 1913 to December 1914.10 From Tobruk, the last station in Cyrenaica, it was necessary to sail directly to Alexandria, as a large band of hostile Arabs made it impossible to travel through by camel along the Egyptian coast. . . . I left Addis Abeba on the morning of July 27 with a caravan of 12 mules and 7 natives. The 3 weeks’ journey from Addis Abeba to Dessié was the hardest part of the trip. It rained incessantly. The mountain trails were steep and rocky, and many streams were difficult to ford. Thick mud and marshes often made traveling very slow. The weather was cold because of the great altitude. We were frequently above 11,000 feet and but once below 9,000. . . . I first called on Chief Kentiba, an elderly man, who later brought the King’s invitation to take breakfast with him and his chiefs the following Sunday morning. The invitation was accepted, and the experience was unique and interesting. . . . the scarcity of water now became serious. There had been a stream in every valley and ravine during the rainy season, but now all were dry and we were often compelled to make long marches to reach water and were then glad to camp beside any mud-puddle that contained enough water for ourselves and mules for a night. In 5 days more, we reached Adigrat, where I met the nephew of Prince Sayum, who is also a prince and a very important chief, although only 11 years old.
Present-day members of the staff are intimately familiar with the details of worldwide travel by airlines. In those days the staff knew the details of far more varied methods of transport. Listing the names of some of the ships used in going to, from and on expeditions and remembered (for good or evil) in the reports imparts a bit of the color of the time: Guatemala,
I. C. Ward, Hainam, Sado Maru, Liu Shing, Pingching, Rosa, Inca, Ena, El Emán, Lightning, Princess Royal, Premier, Keewatin, Leichhardt, Lone Hand, West Australian, Chung Hon, Hsin Peking, T. C. D., Haitan, L’Afrique, John Williams, Mindini, Morinda, Misima, Matunga, Goldfinch and Cauca. Staff members now have the advantage of what would then have seemed incredible speed, but the earlier staff possessed a few advantages of their own, though less tangible: accommodations aboard many ships could make travel a joy and a relaxation, and when arrangements were unsatisfactory there was an accountable human being to confront. When examining the world map showing the land stations occupied by the Department’s expeditions, bear in mind the circumstances that lay behind the acquisition of those data.
4 MEASUREMENTS: MAGNETIC AND ELECTRIC When Louis Bauer began his plan to map the magnetic field of the Earth accurately there were only two observational methods applicable to land or sea surveys, both clumsy and inaccurate compared with modern techniques. A measurement consisted of determining the direction and the magnitude of the field. In practice this meant obtaining the deviation of the horizontal field from true north, called the declination, and the angle the field makes with the horizontal, called the inclination. Declination was determined by comparing the direction indicated by a compass with true north determined by an astronomical observation. (The horizontal measurement in fact comprised two components: a north and an east–west component.) The inclination was measured by a dip circle, a magnet so balanced in the plane defined by the compass direction that the angle could be read. The compass and dip circle had various forms and were, of course, made to the highest standards of the time. For the first few years these instruments were all purchased from various manufacturers, often according to Bauer’s instructions. Later the Department made its own versions (Fig. 4.1). True north was determined using the well-developed techniques of celestial navigation, and for these data to have any meaning they had to be associated with simultaneous determinations of the station’s latitude and longitude. This required that the land survey parties or the ship know Greenwich Mean Time, and to accomplish this five chronometers or chronometer-grade watches were generally carried rather than the three that marked standard maritime navigational practice. Despite the great advances made in timekeeping, chronometers drifted or even failed, so accepting the time of two out of three was the rule, but the demands of the survey required backup against failure.
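In modern geomagnetic notation, the quantities just described — declination D, inclination I, horizontal intensity H, its north and east components X and Y, vertical component Z, and total intensity F — are related by standard identities, so that measuring D, I and any one intensity component fixes the entire field vector:

```latex
X = H\cos D, \qquad Y = H\sin D, \qquad Z = H\tan I,
\qquad F = \sqrt{H^{2} + Z^{2}} = \frac{H}{\cos I}.
```

This is why, as described below, the surveys could determine the horizontal component in mid latitudes and the inclined component near the poles and still recover the same information.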
When a ship or land party reached a point of accurately known longitude, the chronometers could then be set right again. Wireless propagation of time signals had no use at sea until the Carnegie’s last cruise, but land expeditions could sometimes obtain them by telegraph. At sea, determinations of position could seldom be made simultaneously with the magnetic measurements, requiring interpolations of positions between fixes – interpolations depending on the accuracy of the log (the device for determining distance traveled in the water), knowledge of currents, leeway
Figure 4.1 A theodolite magnetometer of DTM manufacture. The magnetometer is seen mounted on the tripod. To determine the orientation of the magnetometer and the station coordinates the magnetometer was replaced with the theodolite seen at the right on the table.
of the vessel (the lateral motion estimated by the watch officer) and accuracy of steering. The magnitude of the field could be obtained from the value of one of the measured components, the directions of the others being known. In mid latitudes the horizontal component was larger and hence usually determined. At high latitudes the inclined component was measured. In both cases a standard magnet was placed on a line at right angles to the one used for direction and its effect observed, allowing the Earth’s field to be compared with the one produced by the standard. Owing to the inherent difficulties of measuring at sea, Bauer devised an alternate method, which he called the sea deflector, that placed the standard magnet above the compass with its axis perpendicular to magnetic north. It was less precise but simpler to use.
These manipulations required considerable skill under good conditions, conditions generally available at land stations, but observations on a rolling, pitching, yawing vessel at sea made the observation of the positions of magnets delicately suspended by fibers or balanced in the dip circle significantly more difficult, even for instruments mounted on gimbals. The observers worked at two separate stations simultaneously, taking hundreds of separate measurements that were averaged and later compared for consistency. Another instrument was available to both land and sea observers, the Earth inductor. This was a straightforward use of Faraday’s law in which a coil was rotated at known constant speed in the Earth’s field. If the axis of its rotation were fixed perpendicular to the compass direction, the voltage generated was proportional to the magnitude of the field and could be read on a calibrated galvanometer. If the axis of rotation was adjusted parallel to the total field vector, no voltage would be induced, allowing the inclination and declination to be determined by obtaining a null voltage. Results from the Earth inductor were of inferior accuracy compared with the other methods, primarily because of deficiencies of the galvanometer, and were used mainly as a check. Both land and sea observations had potential sources of local magnetism that could cause the measured values to deviate from those characterizing the average field of the Earth at the observing station. On land these arose from ore formations or even unnoticed iron near the instruments. Such disturbances would be detected by having two observers a few hundred meters apart measure their compass bearings toward one another. If their readings were not exactly 180° apart, something was wrong and the station had to be moved to a clean location. Although the open sea carried no magnetic anomalies to falsify the data, a ship did.
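The Earth inductor reduces to Faraday's law for a rotating coil: with the rotation axis perpendicular to the field, the peak voltage is V = N·A·B·ω. The sketch below uses illustrative numbers, not the specifications of the Department's instrument:

```python
import math

def earth_inductor_peak_emf(turns, area_m2, rpm, b_perp_tesla):
    """Peak EMF of a coil with `turns` windings and area `area_m2` rotating
    at `rpm` in a uniform field component `b_perp_tesla` perpendicular to
    the rotation axis: Faraday's law gives V_peak = N * A * B * omega."""
    omega = rpm * 2.0 * math.pi / 60.0   # angular speed in rad/s
    return turns * area_m2 * b_perp_tesla * omega

# Assumed illustrative coil: 500 turns of 0.05 m^2 spun at 900 rpm in a
# 0.5 gauss (5e-5 T) field yields roughly a tenth of a volt -- small enough
# to explain the dependence on a sensitive, and imperfect, galvanometer.
v = earth_inductor_peak_emf(500, 0.05, 900, 5e-5)
```

Tilting the rotation axis until the induced voltage nulls out is exactly the procedure the text describes for finding the total field direction.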
The Galilee, being a wooden merchant ship stripped of all of the iron that could be removed, still had enough iron in the fastenings that held the timbers together to cause trouble, and the trouble was moderately complicated. Those bits of iron that were magnetized produced a field that was more or less the same and fixed in the vessel’s coordinate system. Iron not magnetized produced a field depending on the magnitude and direction of the field being measured. These yielded four corrections for each of the three components of the field being measured, nine of them field dependent. The corrections that had to be applied to the data were determined by swinging ship, i.e., observing the effects as the vessel was slowly worked around 360°, a procedure that had to be repeated at every opportunity, as these parameters changed with the terrestrial field and with the vessel’s recent history. These corrections were sufficiently involved that they had to be applied to the data by the computers (by which were meant humans equipped with mechanical calculators and mathematical tables) in Washington. When conditions
allowed, the ship was always swung near land where the measurements could be compared with those of a station set up on shore. When the Galilee was replaced by the Carnegie, which was designed to reduce iron to an absolute minimum (it was said that spinach was stored in the engine room), these corrections were small enough that the ship was swung much less often and were simple enough to be applied at sea, making the data immediately available. Bauer insisted that all data be sent to Washington at the earliest opportunity, so that they could be transmitted to all interested parties. This approach did much to secure the international cooperation that he greatly valued. Whenever a land party or the ship reached the location of a magnetic observatory – observatories were not all that rare in an age when navigation depended on the compass – elaborate comparisons were made of the standard magnets to learn to what degree they had retained their values. In this, of course, both parties could benefit. A source of error in these determinations was, in fact, of a fundamental nature: the terrestrial magnetic field is not constant, but varies with time. Except for moments when a magnetic storm is in progress, with effects as large as 5%, its value varies by at most about 0.2% during the time typical of one of these field or sea observations. The temporal variation of the field was indeed one of the most important aspects of the problem. The long-term drift, called secular variation, was a matter to be studied for its own sake, but it also had to be known accurately in order to present the global field as it was at a particular time. Land expeditions returned wherever possible to stations where data had previously been taken either by Carnegie or other observers. Because land stations cannot be entirely free of local effects, a very careful description of the station was recorded in addition to its latitude and longitude so that subsequent observations could occupy the exact same spot.
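The swinging-ship corrections described earlier in the chapter can be written compactly in modern notation. A minimal sketch using a Poisson-style coefficient model, assumed here for illustration rather than taken from the Department's actual reduction formulae:

```python
def shipboard_field(b_earth, permanent, susceptibility):
    """Field measured aboard ship, in ship coordinates: the Earth's field
    plus a permanent term (magnetized iron, fixed in the vessel) plus an
    induced term proportional to the ambient field (unmagnetized iron).
    Component i:  b[i] + p[i] + sum_j S[i][j] * b[j]
    -- three fixed coefficients and nine field-dependent ones, the twelve
    corrections the text describes."""
    return [
        b_earth[i] + permanent[i]
        + sum(susceptibility[i][j] * b_earth[j] for j in range(3))
        for i in range(3)
    ]
```

Swinging the vessel through a full circle in a known ambient field supplies enough independent equations to solve for the twelve coefficients, which is why the procedure had to be repeated as the hull's magnetic state changed.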
Sea observations did not suffer from such local disturbances, and previous stations were occupied from the coordinates. Routes for the Carnegie were chosen to cross earlier stations of its own or of the Galilee. Instrument development was continual and was aimed at improving the accuracy attainable in the field or at sea, which generally meant making the equipment easier to use. With time the Department’s efforts succeeded in yielding sea data as accurate as those taken by the land expeditions. Expedition sets had to be made as small as possible, and a lot of thought went into designing the carrying cases, which required strength and compactness. To a modern physicist these methods of determining the field, dependent as they were on the observer’s eye looking through microscope and telescope at mirrors and engraved scales, seem as extraordinarily tedious as do the corresponding methods of navigation. Especially notable by their absence are the present-day ubiquitous electronic methods of data acquisition, but during
the last voyage (1928–29) of the Carnegie, electronics (a term not then coined) was employed only for radio transmission and reception. Indeed amplifiers stabilized by negative feedback, the heart of all electronic measurement systems, were not invented until just prior to World War II, and it was after that conflict that electronic data recording burst on physics. As we shall see in later parts of this history, the Department left research in terrestrial magnetism after the war, as it did not seem to offer a substantial gain in scientific knowledge. What transpired has led some to say “we left terrestrial magnetism just before it got interesting.” The large increase in government postwar support for research resulted in new research vessels being put to sea, generally outfitted to make a large variety of measurements, which frequently included towing a fluxgate magnetometer behind the ship. (A fluxgate is an electronic instrument invented for the detection of submarines and is usable only for fields comparable in magnitude with the Earth’s field. It would have been Bauer’s dream instrument.) Magnetic records accumulated but were not examined with the single-mindedness that would have marked the approach of Bauer, Fleming, Peters and Ault. These data began to show positional variations that did not fit the understanding of the time and did not find a coherent explanation for almost twenty years, when they formed the strongest evidence for plate-tectonic motion, the most important geophysical discovery of the century. The data acquired by the Carnegie incorporated these variations, but they were effectively hidden by the spacing between stations and the small magnitude of the effects, which were comparable to the diurnal variation, c. 0.003 gauss. It was only when continuous measurements were examined that the regularities began to stand out above the noise.
Studies of the electrical properties of the atmosphere had their beginnings in the eighteenth century with Benjamin Franklin. At the time of the Department’s founding it was known that the air was a slight conductor and that the Earth’s surface was negative relative to higher elements of the atmosphere. In modern terms we know the electric field resulting from this potential difference varies in fair weather between 100 and 200 V/meter, causing a current flow of about 1000 A over the whole globe. The cause of this potential difference was a matter of dispute until recent times, but is now believed to be the negative charge transferred to the surface from thunderstorms. Atmospheric electricity in all its ramifications is a very complicated phenomenon, the details of which are imperfectly understood despite the unparalleled advances in instrumentation of the past century. At the beginning of the twentieth century such instrumentation was not much advanced over Franklin’s time. There had been, of course, enormous improvements in the measurement and application of electric currents based on dynamos and chemical cells, but electrostatics and the measurement of tiny currents such as those encountered in the air relied on electroscopes, technically
improved but not based on new principles. All measurements were restricted by erratic insulator behavior. The mystery of why the air conducts electricity at all had been solved just before the Department’s entrance into research: X-rays had been discovered in 1895 and radioactivity the following year as the product of investigations inspired by them. These two phenomena could increase the conductivity of the air markedly by ionizing the molecules, a phenomenon only known for a decade or so. The discovery of cosmic rays in 1911 completed knowledge of the sources of radiation that ionize air, thereby making it conducting. There was significant scientific interest in having the magnetic surveys being planned and undertaken by the Department augmented by studies of atmospheric electricity, especially at sea. Bauer was, however, very critical of the instrumental techniques available for this work and did not initiate any surveys until the final cruise of the Galilee in 1907–08, and these were quite preliminary. The measurements most desired were those of the potential gradient at sea, but these were by far the most difficult, owing to the great disturbance produced by the ship on what one assumed was an otherwise comparatively uniform field. When periods of almost absolute calm prevailed, a skiff was put overboard and measurements were attempted at a great distance. The results were highly variable, ranging from zero to beyond the range of the electroscope but averaging about 90 V/meter. Measurements of the conductivity of air yielded moderately consistent values. Air was drawn through a cylinder having an inner electrode charged to a high potential by a battery of chemical cells; its discharge was observed with an electroscope as a function of time, allowing the specific conductivity to be calculated. Collection of radioactivity on a charged wire demonstrated its presence, most of which was found to have been blown from the land, and it was later shown to be radon.
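The figures quoted above can be checked on the back of an envelope: multiplying an assumed near-surface air conductivity by the fair-weather field and the Earth's surface area reproduces the order of magnitude of the global current. The conductivity value is a typical modern estimate, not a number from the text:

```python
import math

# Rough consistency check of the fair-weather circuit numbers.
E_fair_weather = 130.0    # V/m, mid-range of the 100-200 V/m quoted above
sigma_air = 2.0e-14       # S/m, assumed representative total conductivity
earth_radius = 6.371e6    # m

surface_area = 4.0 * math.pi * earth_radius ** 2    # ~5.1e14 m^2
current_density = sigma_air * E_fair_weather        # Ohm's law: J = sigma*E
global_current = current_density * surface_area     # order 1000 A
```

A current density of a few picoamperes per square meter, summed over the whole globe, lands close to the ~1000 A figure in the text.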
When the Carnegie began Cruise IV in March 1915, sufficient confidence had been gained in the methods of measuring potential gradient, especially after the procedures worked out by W. F. G. Swann had been tested, that these measurements became a standard part of the cruise repertoire. An umbrella-shaped electrode was extended horizontally from the taffrail of the vessel on a bamboo pole and connected to one terminal of a bifilar electrometer, the other terminal being connected to the sea. The electrode slowly collected ions from the air, causing it to reach the potential of the atmosphere at that point, at which time the electrometer reading remained constant. The vessel provided an unavoidable disturbance to the field, so data were taken only for certain configurations of sails and rigging, and a correction factor, known to be inaccurate, had to be applied. Nevertheless, the cruises of the Carnegie provided the first global knowledge of the Earth’s negative electric field.
Sebastian J. Mauchly extracted from the enormous mass of worldwide data on the Earth’s potential gradient the simple but not too obvious knowledge that the gradient varied periodically by about 30% of its average value. The tediousness of this reduction may have pointed his son, John, who later worked for the Department as a computer, towards developing the first electronic computer, ENIAC.
5 THE FLEMING TRANSITION
When the Carnegie returned to its home port on 10 November 1921 after Cruise VI, the goal of a comprehensive global mapping of the geomagnetic field had been largely accomplished. It was by any measure a triumph for the Department and indeed for Bauer personally, as the organization, planning and conduct were to a high degree his alone. He had brought into the work a large group of remarkable individuals and had inspired them with his zeal. Every expedition, every cruise came about because he had willed it; letters and cables from Washington kept things moving as he wished; his occasional participation at sea and in the field provided personal inspiration and supplied him with knowledge of what needed altering. He was at the peak of his career. In 1920 the Trustees elected a new President, John C. Merriam, to replace Robert Woodward. Woodward and Bauer had come to the Institution together in 1904, and Woodward’s work as a geographer made him a consistent supporter of Bauer’s goals. Merriam, a paleontologist famous for his studies of the La Brea tar pits near Los Angeles, began to question the future of the Department, which to him seemed to offer little more than a continuation of admittedly successful studies. On becoming President he could hardly have missed noting the expense of the cruises and expeditions, and he set new allocations of funds, which led to the cessation of the cruises. The ship was decommissioned and tied up at the 7th Street wharves, to remain idle and deteriorating for six years.
Field work was restricted to selected measurements, especially the occupation of old stations to gain data for following secular variation, and there was also a steady stream of data from the worldwide network of observers who were associated with the Department, but there was talk of “the DTM problem.” Merriam had Bauer convene a one-day “Conference on Fundamental Problems of the Earth’s Magnetism and Electricity and Most Effective Methods of Research” on 22 January 1922, at which a broadening of the Department’s scope was recommended. There was even some discussion of changing the Department’s name to a less restrictive one, but the idea was dropped. Although by no means opposed to fundamental science, Bauer saw such efforts as detracting from the singular function of the Department as the international bureau responsible for tracking the geomagnetic field. He could show the alarming inaccuracies disclosed in earlier magnetic maps,
Figure 5.1 W. F. G. Swann experimenting at the new Broad Branch Road building with equipment for measuring atmospheric electricity. 1914.
inaccuracies which had led mariners to grief. There were also serious disruptions of telegraph communications by magnetic storms. These things were admittedly important, but it was quickly pointed out that since 1910 some ships had begun to steer by the gyrocompass and that the protection of communications against storms was primarily an engineering, not a scientific, problem. Merriam did not see in Bauer the leader who would provide the Department with new dreams, and relations between the two became distinctly cold. The committee recommended that Bauer hire a theorist to attack some of the problems lurking in the magnetic data. Bauer had hired a physicist in 1913: the Englishman William Francis Gray Swann (Fig. 5.1). Although most of his work lay in theory, including early papers on relativity, he had worked out the procedures for observing atmospheric electricity at sea and had designed various magnetic instruments. During the war Swann worked with the National Bureau of Standards on a project to devise methods of detecting submarines and did not return to the Department after the Armistice, having accepted a professorship at the University of Minnesota. Coincident with Swann’s departure, although possibly independent of it, Bauer initiated laboratory work directed at the fundamentals of magnetism. The physics community was then digesting Bohr’s theory of the atom, which suggested that ferromagnetism resulted from orbital electronic motion in molecules or atoms. The idea had been verified by two separate experiments: the Einstein–de Haas effect and the Barnett effect. Both were based on the assumption that such motion would have both angular momentum and magnetic moment coupled at a microscopic level. In the Einstein–de Haas effect the application of a magnetic field caused an interaction with the microscopic moments so as to transfer angular momentum to an iron cylinder suspended in the field. The effect was small but observable.
The Barnett effect was the inverse phenomenon; a spinning iron cylinder was shown to produce a magnetic field. This offered the prospect of explaining at least part of the geomagnetic field, although examination quickly indicated it was several orders of magnitude smaller than required to explain the terrestrial field. Samuel J. Barnett had published the idea that orbital electronic motion was responsible for ferromagnetism well before Bohr’s theory, even proposing that the structure of an atom was a positive core surrounded by electrons in bound, orbital movement, and had tentatively verified the effect experimentally;1 a more careful experiment confirmed the result.2 Barnett had also served as a magnetic observer for the Coast and Geodetic Survey during 1900–05 while an Assistant Professor of Physics at Stanford. Impressed with such original work, Bauer induced the 45-year-old Barnett to join the staff in July 1918 and began the construction of a non-magnetic Experiment Building just north of the Main Building, completed in 1920, that provided him with a laboratory and access to an instrument shop.
His new experiments demonstrated the effect, characterized now by the gyromagnetic ratio, g, in other metals, but after five years Barnett had proceeded no further than his original experiment, merely done with greater precision. This obsession had its origin in the general belief that g = 1, while his values clustered about g = 2 with enough uncertainty to make him believe his experiments were at fault; thus he had no interest in expanding his investigations to other topics. Einstein and de Haas had reported g = 1, a value that was generally accepted in the physics community, as it was not known that the electron spin is s = 1/2 and it was the product sg = 1 that they demanded. In an attempt to reduce interferences in the delicate measurements required, the Department was able to have late-night street cars on Connecticut Avenue replaced with busses.3 A dispute about his demands for shop time led to his leaving the Institution for a faculty position at Caltech.4 The atomic understanding of magnetism was following the completely different paths indicated by spectroscopy, molecular and atomic beams, and quantum mechanics. These were paths Barnett did not take, and his measurements, which time has shown were correct, had reached a dead end. They were also paths that would be of little use for geomagnetism. With Swann gone and Barnett not a theorist and probably leaving, Bauer found a candidate for the Merriam committee recommendation in Gregory Breit, then working on the theory of radio-frequency coils at the Bureau of Standards (now robbed of its old and illustrious name and called NIST).
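In modern terms, the quantity both experiments measured is the ratio of magnetic moment to angular momentum. A brief restatement of the standard relations, added here for clarity rather than taken from the original text:

```latex
% Magnetomechanical ratio measured by the Einstein--de Haas and Barnett
% experiments: magnetic moment over angular momentum.
\frac{\mu}{L} = g\,\frac{e}{2 m_e}, \qquad
g_{\mathrm{orbit}} = 1, \qquad g_{\mathrm{spin}} \approx 2 .
% Ferromagnetism is almost entirely due to electron spin, so Barnett's
% g \approx 2 was correct. With spin s = 1/2 the product
% s\,g_{\mathrm{spin}} \approx 1 matches the orbital expectation, which is
% why a reported g = 1 seemed reasonable before spin was known.
```

This is the sense of the remark above that "it was the product sg = 1 that they demanded": the orbital and spin pictures give the same product, and only g itself distinguishes them.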
Breit, however, became interested in another of the committee’s recommendations: “In view of the theoretical importance of magnetic and electric observations in the upper levels of the atmosphere, it was considered highly desirable to make arrangements for such observations as soon as facilities and conditions permit.” This was to prove fruitful and will be described later in detail (pp. 55–63). Bauer named John A. Fleming Assistant Director in 1922, a position needed because of the Director’s often lengthy absences. Fleming had come with him from the Coast and Geodetic Survey when the Department was established and was evidently untroubled by Bauer’s legendary autocratic manner. This demeanor proved little more than an irritant when the observers were at sea or on distant expeditions. The long service records of most of the employees at the Broad Branch Road indicate that they adapted to Bauer’s way, but scientists were less inclined to acquiesce to this kind of management. Breit had an abrasive personality that was to become equally celebrated, and the clash between the two was reported to have been noisy. Perhaps the spirit of the times was best expressed by Richard B. Roberts, who joined the Department in 1937. When he complained about Fleming’s managerial style he was told by a colleague: “If you think Fleming is bad, you should have been here when Bauer was Director!”5 Bauer’s selection of Ault to command the Carnegie during most of its days at sea was a wise one that did not reflect
a projection of his own leadership style, for Ault was famous for running a happy ship. Completely secure as a shipmaster and scientist, he was easily approachable by all. Both Bauer and Fleming were active in the administration of scientific organizations and in the attendant editing duties. At the conclusion of World War I, the International Union of Geodesy and Geophysics was formed, and a committee, including Bauer (represented at the first meeting by W. J. Peters) and Institution President Woodward, was appointed by the National Research Council to establish the US section. This became the American Geophysical Union in 1920. In 1925 Fleming was selected as its General Secretary, a position he held until 1947, a period in which he became known as “Mr. A. G. U.” He also assumed the editorship of the journal Terrestrial Magnetism and Atmospheric Electricity, which became the periodical of choice for geophysics, eventually renamed Journal of Geophysical Research in 1949, though continuing the volume enumeration. Both were run from the Director’s office at DTM. Bauer became incapacitated in 1927 from a mental collapse, and Fleming assumed control. He had worked in close accord with Bauer’s administration from the first days of the Department but did not have an advanced degree, having served as a magnetician, and thus did not appear to have the makings of the bold, new chief that Merriam wanted. He could, of course, serve to hold the Department on course until the condition of the Director’s health could be ascertained and a new director named, if the condition proved to be terminal. Bauer was never to guide the Department again and killed himself in 1932. Fleming served in this capacity as Assistant Director until January 1930, at which time he became Acting Director. He proved bolder than expected and was finally named Director in 1935, when the Department had undertaken completely new kinds of research. 
In 1929 the District of Columbia extended the grid system of streets to the region occupied by the Department and beyond. Theretofore the address had been “36th Street and Broad Branch Road” with 36th Street conspicuously following a path not laid down by a surveyor and making the connection to Connecticut Avenue; 32nd Street and Jocelyn Place formed new boundaries, and the creation of Nevada Avenue was attended by some confusion as to whether a natural extension would replace the Broad Branch Road name. The changes gave DTM its present street address of “5241 Broad Branch Road, N. W.” and allowed wags to say it was at the intersection of 32nd and 36th Streets. Within a decade the region to the immediate northwest would be residential.
6 THE LAST CRUISE
The vessel tied up at the 7th Street wharves presented the Institution with a problem. There was valid work to do in the magnetic surveys, but keeping a ship at sea was expensive; even keeping it decommissioned cost money. There was, however, a growing interest in oceanography, the subject founded, one might say invented, in Washington by the Virginian, Matthew Fontaine Maury. It was a subject for which data were scarce, data that had to be acquired with a research vessel, and the United States had only the Carnegie. There was wide interest and encouragement from many quarters for the Department to undertake this field of study, but little was offered except instruments and various intangibles. President Merriam finally agreed to another cruise that would undertake an extensive number of oceanographic observations in addition to magnetic ones. In the summer of 1927 the Carnegie was towed to dry dock in New York where new masts and yards were installed and much dry-rotted timber replaced (Fig. 6.1). The whale boats were moved from the quarterdeck to midship platforms in order to free deck space for the various wires that were to drop sampling bottles and thermometers. There was equipment for continuously measuring the ocean surface temperature, for measuring temperature at depth, for determining the plankton concentration, for bringing up bottom samples, as well as measuring atmospheric pressure and temperature, and the dust content of the air. There was equipment for launching and tracking weather balloons and a laboratory for analyzing the samples of water brought from various depths. Samples of marine life were to be acquired wherever possible. The Navy installed an echo fathometer, which taxed the staff’s skills in keeping it going, sometimes substituting shotgun blasts for the oft-failing 500 Hz oscillator. Additional observers – referred to as the “scientifics” by the crew – had to be taken aboard, and duplicate magnetic observations had to be given up. 
There was an increased demand for electric power, especially for the winches that pulled up as much as 5000 meters of sampling wire. Once at sea they soon found that the electric generator was using half of the gasoline. Another drain on electric power was the radio designed for them by the Naval Research Laboratory. Owing to the rationing of electricity, it did not use the long-wave bands normally assigned to maritime stations but the
Figure 6.1 The Carnegie under way toward New York in 1927 for repairs and outfitting for the last cruise. Note the hemp hawsers used in place of steel anchor chains. They were greatly disliked by the crew as was the hemp standing rigging.
short-wave bands of radio amateurs, who were enthusiastically recruited by the American Radio Relay League for opening communication channels. Short waves, generally of the range from 30 to 100 meters, make use of multiple reflections between the Earth’s surface and the ionosphere to attain great distances, but a given wavelength will have large skip regions in which signals cannot be transmitted or received, dependent on the time of day and the state of the ionosphere. The cooperation of “hams” all over the world, combined with the inherent strength of using Morse code, allowed messages to travel the long distances between the little ship and home port, family messages being a favorite. The rarity of amateurs in the southern hemisphere meant there were weeks when no communications got through.
Cruise VII sailed from Washington on 1 May 1928 bound for Plymouth, England and then on to Hamburg. The Atlantic passage was the shakedown for the new oceanographic work, and numerous problems came to the fore. Some of these could be solved in discussions with members of the German Oceanographic Institute and the German Hydrographic Office. Other problems, for example with the sample-wire winches and the failure of the deck to keep water out of the cabins below, were solved by Hamburg’s repair yards, but many remained. Indeed the entire cruise proved a shakedown for the new techniques of ocean sampling. Even such standard techniques as observing weather balloons proved to be quite different at sea than on land. At the end of July the Carnegie arrived at San Francisco for maintenance and a change of some personnel. Notable among the new personnel was Scott Forbush, taken aboard as observer for magnetism and electricity and as navigator. He would remain with the Department until retirement in 1969 and in time became one of the leading authorities on cosmic rays, but more about that later. New instrumentation allowed new kinds of data to be collected: a detector for radiation penetrating the ocean surface, bottom samplers and apparatus for measuring gravity. This was the first attempt to make gravity measurements aboard a surface vessel, all previous work having been made aboard submarines. While at San Francisco the Department celebrated its 25th anniversary on 26 August with President Merriam and notables from the Institution and other organizations present at ceremonies on the quarterdeck. The following day the ship was open to the public and was visited by about 3000 people. In late November 1929 the Carnegie entered the harbor of Pago Pago, in American Samoa, a frequent stop of the vessel, almost a homecoming.
After enjoying the hospitality of the locals it sailed for Apia in Western Samoa, where preparations for continuing the voyage were made. On Friday, 29 November, gasoline was being taken from a lighter alongside, and 55-gallon drums of the fuel were being winched onto the quarterdeck and poured into the main tanks. The small engine-driven generator was running, and the engineer and the mechanic went below to switch it from charging batteries to the winch needed to hoist another drum of gasoline. The mechanic afterwards remembered smelling gasoline fumes in the engine room, although he did not consider that unusual. When he pulled the switch, invariably drawing a spark, the gasoline vapor exploded and blew the deck planking upward – the deck on which Captain Ault was sitting in a chair. The two seriously burned men escaped from the rapidly spreading fire; the sailor on deck pouring the gasoline was blown into the air and came down on the railing. He saw the captain in the water and tried to get him into the lighter but was himself too injured for the task. The second mate got both into a boat. The others quickly saw that the fire, feeding on hundreds of gallons of gasoline, could not be extinguished and began saving personal effects and important papers from the chart room. Captain Ault died in the boat, but the
engineer, mechanic and sailor survived after hospital care. Roll call disclosed the cabin boy absent; his remains were found in the wreck. Reading the reports of the disaster, one is struck by the absence of the knowledge, common to all modern operators of motorboats, of the serious danger of gasoline fumes in confined spaces. But this was knowledge then only slowly being acquired, through numerous accidents, as motorboats became common. Gasoline was rarely used aboard deep-water ships, and neither the engineer nor the deck officers were aware of the danger. Indeed, the final reports show no realization at the time that the explosion had been caused by the standard practice of pouring the fuel openly through a funnel, which gave the vapor ample opportunity to collect in the poorly ventilated engine room. The destruction of the Carnegie put an abrupt end to the Department’s entry into oceanography. Even without the loss of the ship it is doubtful whether the expense of operating one could have been sustained. There were only the Institution’s funds on which to draw, and the stock market had crashed on Wall Street the previous month. A great deal had been learned, and observational techniques had been perfected, but others would use them. The British Admiralty decided to build a non-magnetic ship, Research, to carry on the ocean surveys, and Captain Peters went to England in 1935 as a consultant in the construction and outfitting of the vessel. The outbreak of war brought an end to the project, and postwar magnetic work was carried out at sea with magnetometers that could be trailed behind a steel ship.
7 THE MAGNETIC OBSERVATORIES AND FINAL LAND OBSERVATIONS

When the Department’s world magnetic survey began, a number of magnetic observatories were in operation, the limited means by which individual nations or organizations approached the study of the geomagnetic field. Frequently tied, as cultural appendages, to the nations of the northern hemisphere, they provided a limited picture despite the high quality of their data. Bauer’s cruises and expeditions expanded this meager set of data into a global picture. Nevertheless, observatories gave extremely valuable information about the temporal variation of the field: the diurnal and secular variations as well as the enigmatic magnetic storms. Of particular value for interpreting events were recordings of any kind of temporal change taken simultaneously worldwide. In 1915 the Department set out to correct the severe imbalance in the distribution of observatories by establishing two in the southern hemisphere. In addition to magnetic measurements they were to study (1) variation in the fair-weather electric potential and conductivity of the air; (2) earth currents and their relationship to the geomagnetic field;1 (3) cosmic rays and their relationship to magnetic data; and (4) disturbances of the Sun’s chromosphere. The general regions selected were the southwestern part of Australia and the Peruvian Andes. The former was selected to fill a gap between observatories at the same latitude at Melbourne and Mauritius (Indian Ocean); the latter because it would lie near the geomagnetic equator. Experience at other observatories had shown that for the post to be scientifically adequate certain criteria needed to be satisfied: (1) the absence of any detectable local anomalies; (2) a location at least 10 miles from the nearest railroad; (3) isolation from present and future industrial activity; (4) a location at least 50 miles from the ocean; and (5) a situation on level, unforested ground. 
The last two requirements came from the need to measure earth currents, which required two buried cables about 15 km long, one lying along a north–south geomagnetic meridian, the other along an east–west line. A location satisfying the requirements was found by W. F. Wallis and W. C. Parkinson in Western Australia, about 100 miles north of Perth and 10 miles west of the small community of Watheroo, specifically at 30° 19′ south latitude, 115° 53′ east longitude, and 800 feet altitude. The land was cleared of scrub to protect it from the brush fires that were common
Figure 7.1 The Huancayo Magnetic Observatory in Peru entered operation in March 1922. It began with magnetic and atmospheric electric instruments but its coverage was eventually expanded to weather observations, ionospheric sounding and seismic recording.
occurrences. Although the search for the site was undertaken and the 180 acres purchased with the cooperation of the Australian government, there were some local difficulties resulting from US neutrality at the time in the world war. The first data were obtained on 1 January 1919. A second location was found in the western cordillera of the Peruvian Andes about 8 miles west of the town of Huancayo, specifically at 12° 03′ south latitude, 75° 20′ west longitude, and 10,600 feet altitude (Fig. 7.1). Purchase of the 25-acre tract presented some difficulty because of its many owners but was eventually completed in September 1919. Transport of building materials proved very difficult, and it was not until March 1922 that the station was put into operation. Huancayo had, in addition to the standard equipment, seismographs operated for the US Coast and Geodetic Survey and a receiver for recording variations in the intensity of signals from a distant radio transmitter. For accuracy it was the practice of observatories to isolate the magnetic instruments from temperature variations as far as possible. Older practice had done this with deep cellars, but these had generally proved to have troublesome moisture, so the two new posts had surface enclosures with six walls separated by dead-air spaces and sawdust. This arrangement reduced the diurnal temperature variation to the order of 0.1 °C. Naturally, all construction had to be carried out strictly with non-magnetic materials. Nails were copper, requiring holes to be drilled for their use. (Copper nails were a standard product, used in structures where explosives were stored or carried, as they reduced the danger of sparks.) The Department furnished the instruments for magnetic and electric measurement. Variometers recorded declination and intensity continuously via optical links onto photographic film moving at 20 mm/hour. 
The Mount Wilson Solar Observatory furnished spectrohelioscopes for examination of the chromosphere. In 1932 the Department provided radio-echo apparatus for monitoring the height of the ionosphere, and in 1936 continuous recording of cosmic-ray fluxes was begun. Both stations made fundamental meteorological observations regularly, primarily for the benefit of local authorities. It had been foreseen that these two posts would be maintained through at least one solar sunspot cycle and, when the major objects of the research had been accomplished, would be terminated or continued by the local authorities. In 1946 the Australian and Peruvian governments, stimulated by the importance of these studies as demonstrated during World War II, accepted the two stations as gifts and agreed to support their continued operation. Wallis initiated the Watheroo station but turned operation over to W. C. Parkinson in November 1919. It devolved onto G. R. Wait in 1921 and H. F. Johnston in April 1924. Harry Marcus Weston Edmonds had served as
surgeon and master of the Carnegie as well as magnetician, and such wide-ranging talents made him the obvious candidate to organize the much more difficult Huancayo station, which he accomplished in 1919 and 1920. He was relieved in January 1921 by Wallis, who was replaced by Parkinson in June 1923. Operation of the two observatories by Johnston and Parkinson marked the beginning of relatively routine operations. Subsequent chiefs served rotating tours of about three years. Although land expeditions on the scale of the years before 1923 could no longer be supported, a great amount of data continued to accrue. During the years 1921–44 a total of 2297 station occupations were recorded, compared with 4283 for the period 1905–20. In most cases observations were made when observers could spare the time and secure the data at modest expense, so these often proved to be small expeditions. This work was organized by Wallis and J. W. Green, who made many of the measurements themselves. Emphasis was placed on occupying previous stations in order to improve the data for determining secular variation. Frederick Brown had been a prominent leader of expeditions during the Bauer years. He had subsequently settled in China as a missionary and returned to the work during 1931–36, doing extensive surveying when free from his normal duties. Being a missionary did not have a calming effect on the nature of his field work, as China was still in turmoil. A quotation from one of his reports suffices. “I then hurried back from Shanghai, fearing there would be war with Japan and student rioting. Since a Chinese army later attacked a walled city only 20 miles away, apprehension was felt for our safety and that of the instruments. There were also a quarter of a million flood refugees in camps on the outskirts of Wuhan who might have become a menace should the distribution of food have been discontinued.” His party encountered civil war and plague but made 70 station occupations.2
8 THE IONOSPHERE
In December 1901 Guglielmo Marconi received at St. John’s, Newfoundland, a wireless telegraph transmission of the letter “S” from his powerful station in Cornwall, England. Following Hertz’s discovery of electromagnetic waves in 1887, physicists had concentrated their efforts on verifying that these waves had the properties of optical light, which precluded any thought of receiving them at distances beyond the horizon. Marconi had proceeded along a different path, having seen, almost alone, the possibilities for communication between ships using waves much longer than those Hertz had used. His company prospered, and his experience with waves of hundreds of meters demonstrated that they were being received at distances beyond the horizon. The St. John’s transmission was doubted in many quarters but was soon verified, confounding physical theory. An explanation came independently from two electrical engineers: there must be a conducting layer at the top of the atmosphere. Oliver Heaviside inserted the idea in the article he wrote on telegraphy for the Encyclopaedia Britannica (10th edition, 1902). Arthur Edwin Kennelly, a Philadelphia engineer, had published the same observation more conventionally and somewhat earlier. For reasons other than transmission distance, wireless telegraphy was restricted for the first two decades of the century to those long wavelengths. Owing to the wide bandwidths required by the spark equipment then in use, the limited range of frequencies available in this region of the spectrum was quickly gobbled up by government and commercial traffic, and amateurs were relegated to the supposedly useless but enormous short-wave band. In 1923 an amateur in France and another in the United States succeeded in making two-way transmissions by using multiple reflections between the Earth’s surface and what would soon be called the ionosphere. Not only that, but their vacuum-tube equipment used a small fraction of the power required by the commercial trans-Atlantic sets. 
Interest in the ionosphere soared. Needless to say, amateurs were, like the Indians, driven into small reservations of their now valuable short-wave band. That they were allowed any use at all came from the very large capacity of this region when used with modern radios. Thoughts about the long-distance propagation of short waves occupied Gregory Breit after his appointment to the staff on 1 July 1924, although the conducting layer was not one of the four projects Bauer had laid before
him. Speculation about the electrical properties of the upper atmosphere had engaged persons doing magnetic research for decades. Carl Friedrich Gauss had learned that a small component of the magnetic field measured at the Earth’s surface did not originate from within. The auroras and especially the magnetic storms pointed toward the circulation of large electric currents in the high regions of the atmosphere. All this was in the province of the Department, and what was obviously needed was a more direct demonstration of the physical existence of the conducting layer and, if possible, an exploration of its structure. Breit designed a straightforward experiment. He would build on the grounds just east of the Main Building on Broad Branch Road a very large reflector, formed as a paraboloid of revolution with a radiating dipole at the focus. A wavelength of about 3 meters was envisioned. His idea, which did not get beyond the drawing-board stage, was to form a narrow, pencil-like beam so aimed as to be reflected down in the vicinity of Baltimore. An observer at the far end would map the received signal strength, thereby allowing the height and structure of the layer to be ascertained. What was needed was a Baltimore observer, and Breit already had in mind for the task Merle Tuve, a friend from their student days at the University of Minnesota. Tuve was doing graduate work at Johns Hopkins University and had been an avid radio amateur since childhood, when he and his friend Ernest Lawrence exchanged wireless messages between their homes in Canton, South Dakota. Tuve found the objective of the work interesting but did not like Breit’s approach and proposed instead a radio-echo sounding method. The idea had first been proposed by W. F. G. Swann, a former DTM staff member who had left the Department to become a professor at the University of Minnesota, and who discussed it at a seminar there that Tuve attended. 
Swann had tried out the idea but failed, owing to the severe limitations of the equipment of the time. Given the extensive construction necessary to carry out Breit’s idea and the obvious simplicity of Tuve’s approach, it is easy to see why the latter was attempted first. (It should be noted that Breit’s method would have failed completely: wavelengths short enough to be focused into a beam by any realizable paraboloid would have passed through the reflecting layer, as do the TV and FM radio bands, but this knowledge was not available to them.) The experiment proposed would set up a receiver at least a few kilometers from a transmitter. The transmitter would have the amplitude of its signal modulated by a sinusoidal voltage that cut off the power completely during one swing of the modulating voltage. The output of the receiver would be connected to a string-galvanometer oscillograph recording on continuously moving photographic film. The signal arriving by the line-of-sight path would precede any signal reflected from the upper atmosphere, an effect observable on the trace of the photographic record.
At that time, Westinghouse operated station KDKA in Pittsburgh, which was pioneering radio broadcasting and was open to experimental ideas. They were happy to cooperate, but the result was a failure. The large swing in the modulating voltage pulled the transmitter frequency, and the resulting output was a mixture of frequency and amplitude modulation, which presented only confusion at the receiver. Fortunately, at that time Hoyt Taylor at the Naval Research Laboratory was experimenting with quartz-crystal-controlled oscillators – the heart of modern frequency control and time-keeping – and had built station NKF using the new technique. He placed the station at their disposal. They modulated a 4.2 MHz carrier to cut-off and found the expected echoes. Observations made at different times showed consistent variations of the time delays, explainable by a changing distance to the conducting layer, thereby ruling out reflections from nearby mountain ranges. These observations, made during the summer of 1925, brought acclaim to the two young men and became the basis for worldwide studies of radio transmission (Fig. 8.1). The newly invented multivibrator circuit soon replaced sinusoidal modulation with rectangular pulses less than a millisecond wide, and the newly invented gas-focused cathode-ray oscilloscope replaced the galvanometer oscillograph. With that, the elements for studying the nature of the Kennelly–Heaviside layer were complete. The next step was to design relatively simple circuitry that allowed transmitter and receiver to occupy the same location, the problem being that the transmitter signal overwhelmed the receiver, which was intended for extremely small signals, and either damaged it or prevented it from receiving the reflected signal during the time when it should return. 
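The reduction behind those delay measurements can be restated compactly: the interval between the direct and the echoed pulse gives the layer’s “virtual” height, and the free-electron concentration fixes the highest frequency a layer can turn back. The sketch below is a modern illustration of these two textbook relations, not the Department’s actual procedure; note that the virtual height slightly overstates the true height, because the pulse travels more slowly inside the ionized region.

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def virtual_height_km(echo_delay_s: float) -> float:
    """Virtual height of a vertically reflecting layer, from the time delay
    between the line-of-sight pulse and its ionospheric echo."""
    return C * echo_delay_s / 2.0 / 1000.0

def critical_frequency_hz(electron_density_m3: float) -> float:
    """Plasma (critical) frequency: the highest frequency reflected at
    vertical incidence, f ~ 8.98 * sqrt(N) with N in electrons per m^3."""
    return 8.98 * math.sqrt(electron_density_m3)

# An echo delayed ~0.67 ms implies a layer near 100 km, roughly where
# Breit and Tuve placed the Kennelly-Heaviside layer:
print(round(virtual_height_km(0.67e-3)))         # 100
# An F-region density of 1e12 electrons/m^3 reflects up to about 9 MHz:
print(round(critical_frequency_hz(1e12) / 1e6))  # 9
```

By this reckoning the 4.2 MHz carrier used at NKF sat below the critical frequency of the daytime ionosphere, which is consistent with the echoes they recorded.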
The ionosphere reflects radio waves through the free electrons of the rarefied upper atmosphere, produced primarily by ultraviolet radiation from the Sun in combination with low recombination rates. During daylight the electron concentration at about 50 km is high enough that signals are reflected as if from a distinct layer, called the E layer; this “layer” recedes to about 100 km at night, owing to the absence of solar ultraviolet radiation. The height of the E layer is relatively independent of the transmitter frequency. A second, more complicated frequency-dependent structure is found at heights ranging from 250 km to 900 km and is called the F layer or, as required, layers. The National Bureau of Standards, then located about a mile and a half from the Broad Branch Road grounds, was active in research concerning the transmission of short waves, and the people in its radio section began to exploit the new technique with Breit’s and Tuve’s whole-hearted cooperation. The use of short waves for communications increased rapidly during the 1920s and 1930s, and the key to satisfactory service lay in knowing the worldwide structure of the ionosphere well enough to predict what frequencies to use for specific contacts. This was very definitely the kind of work that matched
Figure 8.1 Gregory Breit and Merle Tuve using the receiver with which they determined the height of the ionosphere. (Library of Congress)
the Bureau’s charter, and they set about it with a will. Deriving a theory to predict a structure for which there was abundant evidence of unending complication was obviously going to be a semi-empirical affair, and this did not appeal to Breit, who was becoming ever more interested in the new quantum mechanics. Tuve also found his interest wandering from the ionosphere to the atomic nucleus. His own natural engineering bent combined with Breit’s understanding of theory led to the idea of studying the atomic nucleus by accelerating protons or alpha particles with some kind of electromagnetic apparatus rather than depending on radioactive decay as a source of alphas. Fleming, whose influence in Bauer’s last years became very strong, backed them, and soon appointed Lawrence Hafstad, a graduate student in physics, along with Odd Dahl, the Norwegian aviator-adventurer, to work with Tuve and Breit. They were to continue the ionosphere work until it was satisfactorily transferred to the Bureau, but could start on building a particle accelerator. It was a bold and risky decision for Fleming, an assistant director without an advanced degree, to support two young men in a new field in which they had no demonstrated competence and to allow them to move out of a field that was just beginning and in which they had made a substantial new discovery. The transfer of ionospheric work to the Bureau was hit by a severe cutback in the Bureau’s finances as a result of the Depression. This induced Fleming to take on some of the Bureau’s ionosphere responsibilities by hiring Lloyd V. Berkner, who had been let go, in 1933. 
Before joining the Bureau, Berkner had worked as a ship’s radio operator at age 18, earned a degree in electrical engineering, and served as an aviator in Admiral Byrd’s Antarctic expedition of 1929–30.1 The Department obtained the loan of some land near Kensington, Maryland, and Berkner built an improved multifrequency automatic ionospheric sounder that operated over the frequency range 0.5 to 16 MHz, completing a sweep in 15 minutes (Fig. 8.2). The key to such an automatic device lay in holding the transmitter and receiver frequencies tightly together. This tuning problem was simplified by a proposal of Theodore R. Gilliland of the Bureau of Standards that one variable oscillator be common to transmitter and receiver. This kind of instrument became the standard international ionosphere research instrument for the next three decades. Berkner installed these sounders at the Department’s magnetic observatories in Huancayo, Peru, in 1937 and in Watheroo, Australia, a year later; manual equipment had been installed at both in 1932 (Fig. 8.3). Berkner brought ability coupled with the desire to participate widely in national and international radio organizations. He represented the Institution at conferences with the Federal Communications Commission for allocating frequencies. The study of the ionosphere, as it then had to be carried out,
Figure 8.2 Lloyd Berkner at the DTM multifrequency ionosphere sounding equipment that he developed. It automatically measured the heights of the various layers in frequency increments, sweeping from 0.5 to 16 MHz in 15 minutes. This set, designed in 1936, was the instrument used in the Department’s worldwide distribution of stations and was the basis for predictions of short-wave propagation for the Allies in World War II. 1937.
Figure 8.3 Ionospheric characteristics from multifrequency scanning at the Watheroo station. Frequency is plotted along the horizontal axis; the intensity of reflection is shown by darkening at the corresponding height, plotted along the vertical axis. 22 January 1939.
Figure 8.4 Quarters at the Christmas Island station (Indian Ocean) of the Department’s worldwide ionosphere program. This work was expanded during World War II to provide accurate predictions of short-wave propagation. Appearances lead one to believe it was not a hardship post. A radio antenna can be discerned behind the house. 1944.
required international cooperation, and to this end he became the secretary of the International Scientific Radio Union of which Edward Appleton was chairman. Berkner proved himself a master of both the science and the organization of radio. It was a style that marked him for the rest of his career. Communication was vital to the armed services, and when war became imminent, Carnegie was asked to establish a station at Fairbanks, Alaska, which Berkner quickly accomplished and to which were soon added stations at Christmas Island (Indian Ocean), Clyde (Baffin Island, Canadian NW Territories), Fort Read (Trinidad), Maui (Hawaii) and Reykjavik (Iceland) (Fig. 8.4). Berkner left the section during the war to assume other duties linked to naval radar, and the section was headed by Harry Wells. The stations provided extremes of climate and operating conditions. Photographs of the Christmas Island station bespeak a vacation paradise, but the tropical climate of the Fort Read station made it difficult to keep the equipment on the air. At Clyde, above the Arctic Circle, the isolation transformed a case of appendicitis in March 1944 into a serious emergency that was resolved by the arrival of a physician by parachute – the doctor’s first use of this extraordinary mode of travel.2
The accumulation of data from these stations along with other cooperating observatories soon established that the contours of the F layers followed the parallels of geomagnetic latitude. These data allowed the prediction of average conditions several months in advance, which had broad practical applications at a time when much communication used the bands reflected by the ionosphere. Reliable links depended on knowing when to use a specific frequency to connect two stations. Data reduction and the prediction of the best transmitting frequencies were made at DTM. The radio soundings were augmented by solar observations with a coronagraph at the Harvard College Observatory station at Climax, Colorado, which gave warning of anomalous propagation. The theory that the ionosphere was formed by solar ultraviolet radiation received support from observations of eclipses at Watheroo on 1 August 1943 and at Huancayo on 25 January 1944, which attained 90% and 88% of totality respectively. In both cases the electron density of the E and F regions declined markedly from the stations’ control data. Attempts to correlate ionospheric phenomena with sunspots yielded a much less clear picture. The ionosphere work made a small step toward understanding magnetic storms. It soon became known that radio propagation became anomalous about a day before a storm, simultaneously with a solar flare. The flare had evidently increased the flux of ultraviolet, forcing the E layer even lower and causing radio fadeouts. From the flare evidently also came a corpuscular flux that traveled at speeds much less than the speed of light and added plasma with large densities of electrons to the upper atmosphere. The structure of the magnetic fields and electron densities about the Earth has been shown to be quite different from what was imagined by the Department’s ionosphere investigators of the 1930s. 
It is a structure that had little chance of being mapped by ionospheric sounding and that had to await the arrival of space probes. It is now actively studied as a combination of solar and terrestrial phenomena, but it is no longer one of the Department’s disciplines.
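The frequency predictions described above rested, at bottom, on a simple geometric relation known as the secant law: a layer that turns back vertically incident waves up to its critical frequency will reflect obliquely incident waves up to a proportionally higher frequency. A minimal sketch of the relation (a flat-Earth, thin-layer textbook idealization, not DTM’s actual prediction procedure):

```python
import math

def max_usable_frequency_mhz(critical_freq_mhz: float, incidence_deg: float) -> float:
    """Secant law: MUF = fc / cos(theta), where theta is the angle of
    incidence at the reflecting layer (0 degrees = vertical sounding)."""
    return critical_freq_mhz / math.cos(math.radians(incidence_deg))

# A layer with a 5 MHz critical frequency supports roughly a 10 MHz link
# when the path geometry gives a 60-degree angle of incidence:
print(round(max_usable_frequency_mhz(5.0, 60.0), 6))  # 10.0
```

The angle of incidence follows from the hop length and the layer height, which is why the worldwide ionograms collected by the stations could be turned into usable frequency tables for specific circuits.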
9 COLLABORATION AND EVALUATION
Bauer’s international outlook extended the reach of the national surveys, cruises and expeditions through a variety of collaborations, for which the Department furnished the instruments and instructed or supplied the observers. Many of these collaborations bore the names of famous explorers and were major efforts for their organizers. There were also a large number of small efforts that returned valuable if limited data. Some of the expeditions that for one reason or another attract one’s attention will be briefly described here. Captain Roald Amundsen was the best-known, and certainly the most competent, arctic explorer, and had located the north magnetic pole in 1904. In April 1918 he approached Bauer to discuss plans for the Maud expedition. The Maud was a three-masted wooden schooner designed to be frozen into ice. His object was to position it so that it would drift with the ice across the Polar Sea for three years. The Norwegian Harald Ulrik Sverdrup had been in charge of the Maud’s scientific program since 1917 and was a research associate of the Department, which modified a dip-circle and magnetometer for the task and provided equipment for atmospheric electricity studies. The attempts in the summers of 1918, 1919 and 1920 did not succeed, although the ship sailed along the coast of northern Siberia from Vardø, Norway, to Nome, Alaska, occupying a number of land stations where data were acquired to the extent allowed by sledge trips. The 1918–19 venture cost the lives of two men, lost on a sledge trip made for communication. 
On 8 August 1922 Amundsen finally succeeded in being frozen into the drifting ice at about 175° west longitude, where he remained until the vessel broke free on 9 August 1924 at about 142° east longitude; he reached Seattle on 5 October 1925 after sailing along the north coast of Siberia.1 Odd Dahl was Sverdrup’s assistant during the 3-year passage of the vessel across the Polar Sea and was also the pilot of a small airplane taken along to examine ice conditions around the ship. On returning to Washington, Sverdrup recommended Dahl for a technical position and Fleming hired him, but on condition that he report to work only after he had made a trip with a friend across the Andes by mule to the headwaters of the Amazon and thence to Manaus, Brazil, by canoe. Dahl’s principal work was to be as an electrical engineer with Tuve and Hafstad on the ionosphere and the particle
Figure 9.1 Observer E. H. Bramhall with a DTM magnetometer in a room cut into the ice during the second Byrd Antarctic Expedition. 1933–34.
accelerator work but he contributed to the magnetic surveys by taking a year of leave in order to travel with his wife through the Middle East and India in 1928–29, so the Department outfitted him with the necessary instruments and covered some of his expenses. This trip was made by automobile, a mode of transport that became common for field work after World War I, and most of his travel difficulties, aside from the border officials of Syria, Iraq, Persia, Baluchistan and India, were in getting his vehicle through difficult terrain that had few and poor roads. It is worth noting that he never had difficulty obtaining gasoline, even in the remotest parts of his journeys; thus had the world changed. He reported observations from twenty-three stations.2 The Macmillan Arctic Association occupied stations during the winters of 1921–22 at Bowdoin Harbor, Baffin Island (named after the Association’s vessel Bowdoin), and 1923–24 at Refuge Harbor, North Greenland.
Richard H. Goddard, a Department observer, accompanied the expedition with the necessary instruments. The igloo observatories functioned satisfactorily, and there was no shortage of non-magnetic building material. These two collaborations furnished very valuable data from regions with extreme values of declination and inclination. Two other arctic ventures involved magnetic measurements but were, owing to the nature of their transport, unable to procure the kinds of data needed for Bauer’s tabulations; the results were nonetheless useful for checking interpolated values. One, the polar flight by the Italian General Umberto Nobile in the airship Italia in 1928, ended disastrously with nothing to report. The dirigible crashed with the loss of seventeen lives, although Nobile was rescued. Amundsen, who had criticized the plan of the flight, lost his life in the rescue operations. A skillfully executed airship flight was made by the then-famous Graf Zeppelin for the International Society for the Exploration of Arctic Regions by Means of Aircraft in October 1930. The Department purchased a double compass from the Askania Werke for the trip, which allowed measurement of the horizontal intensity but of neither declination nor inclination. A serious weakness of any airship data lay in the uncertainties attributable to position. The determination of position between stellar or solar fixes by dead reckoning was strongly and adversely affected by the ease with which the airship was carried by the wind. One further arctic expedition was conducted through the US Coast and Geodetic Survey for the Second International Polar Year, 1932–33. Stations were occupied in Alaska at College-Fairbanks and Point Barrow, the latter having been occupied during the first International Polar Year of 1882–83. 
The Australasian Antarctic Expedition preceded the Carnegie’s circumnavigation of that continent; the Department furnished magnetometers, dip needles and chronometers and carefully oversaw their calibration before departure. The group landed in December 1911 at Commonwealth Bay, Adelie Land, where a shelter for continuous recording of declination was constructed. To provide a location for absolute magnetic work free of local disturbance, an ice cave was excavated 250 feet above rock. Initial attempts to secure measurements away from the base by sledge failed until November 1912 because of the severe winds encountered. The work was concluded in 1913. Richard E. Byrd, a naval aviator whose competence in aerial navigation in polar regions had been demonstrated by his flight with Floyd Bennett to the North Pole in May 1926, organized an expedition to Antarctica financed by public subscription. In October 1928 he established a base on the Ross Ice Shelf called Little America, from which flights were made and scientific data were acquired; he remained there until early 1930. The Department furnished him with complete outfits for a temporary magnetic and electric observatory
and trained his two observers (Fig. 9.1). He made similar arrangements for his second expedition during 1933–34, but with a more elaborate magnetic program that collected continuous declination measurements. During the antarctic summer and the return of the Sun, a number of magnetic stations were occupied by sledge trips and data returned. A third Byrd expedition extended the measurements into 1940–41. The Department continued its traditional program in terrestrial magnetism and electricity until 1946. Expeditions and cruises financed by the Institution had long since been discontinued, but data continued to arrive from the two Carnegie observatories (which were given to the Australian and Peruvian governments in 1946), from other observatories, and from collaborations. All had to be incorporated into a form useful to the scientific community. In 1939 Fleming edited a volume that compiled in textbook form the knowledge of terrestrial magnetism and atmospheric electricity as part of a series commissioned by the National Research Council on the Physics of the Earth. It proved to be one of those classic scientific books that are reprinted by Dover Publications when out of print at the original publisher, McGraw-Hill.3 It paralleled the two-volume Geomagnetism by Sydney Chapman (Imperial College, London) and Julius Bartels (Geophysical Institute, Potsdam), DTM Research Associates who completed their work despite the enmity of their respective countries. It remains a classic of the field.4 If the reader were to seek maps of magnetic data published for public distribution by the Institution, he would not find them.
The numerous maps that were printed during the decades of DTM’s activity were the products of national geological or similar surveys from the data distributed by the Department, as this was a part of the original agreement.5 Perhaps the most impelling object of Bauer’s search had been the accurate determination of the secular variation of the geomagnetic field (Fig. 9.2). It had been the subject of his dissertation in Berlin, and there had been more than a little hope that knowing its behavior in detail might lead to an understanding of the origin of the Earth’s field. By 1946 the goal of accurately knowing the secular variation for the period of observation had been achieved. There had been enormous amounts of data from cruises, expeditions and observatories, all carefully reduced through spherical-harmonic analysis with desk calculators and incorporated into charts and tables in two volumes,6 far more accurate than had been theretofore seen, and described by the eminent geophysicist Sydney Chapman as “the two great collections of modern geomagnetic data.” (Spherical-harmonic analysis, whose utility enters into countless areas of science, had been devised by Gauss for the first terrestrial magnetic surveys during the 1830s.) Ernest Harry Vestine summarized the findings for secular variation.
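Spherical-harmonic analysis as Gauss introduced it can be sketched in modern notation (the symbols below are the conventional present-day ones, not the notation of the volumes described here). The magnetic scalar potential of the field of internal origin is expanded as

```latex
V(r,\theta,\phi) \;=\; a \sum_{l=1}^{L}\sum_{m=0}^{l}
  \left(\frac{a}{r}\right)^{l+1}
  \left(g_l^m \cos m\phi + h_l^m \sin m\phi\right) P_l^m(\cos\theta),
\qquad \mathbf{B} = -\nabla V,
```

where $a$ is the Earth's radius, $P_l^m$ are the Schmidt semi-normalized associated Legendre functions, and the Gauss coefficients $g_l^m$, $h_l^m$ are fitted by least squares to the measured declination, inclination, and intensity; the $l=1$ terms describe the dominant dipole. Charts of secular variation amount to tracking the time dependence of these coefficients between epochs.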
Figure 9.2 A contour map presenting the secular change in the vertical intensity of the Earth's magnetic field from 1885 to 1922. Contour interval is 0.005 oersted (0.5 µT), with positive values indicating an increase of the downward magnetic force. A large series of such beautiful contour maps concentrated the results of the years of expeditions, cruises and data reduction.
Figure 9.3 The location of the land stations and cruises that provided the data for mapping the Earth’s magnetic field. Regions that show no stations were covered by various national surveys, which provided DTM with their data for incorporation in the world maps.
Collaboration and evaluation
These have been drawn complete in all magnetic elements for the first time. They are also, we believe, the first set of isoporic7 charts reasonably consistent with all available carefully assessed measurements, with each other, and with the known character of electro-magnetic fields. Since they are drawn at four epochs a decade apart, the phenomenon is apparent with good continuity for almost half a century. A new and rich store of information is thus afforded respecting deep-seated, rapid, and mysterious physical processes of the Earth's interior which to the best of our present knowledge are not reflected in any way.8
Vestine noted the westward drift to be correlated with previously unexplained variations in the Earth's rate of rotation, leading him to the conclusion: "(1) that the source of the geomagnetic field lies within a large-scale fluid-circulation inside the central core of the earth; and (2) that this field circulation in the core must be considered established as real, since no other adequate large source needed to conserve angular momentum is apparently available."9 It was an important step, but many more would be necessary before answers to this and other questions about the origin of the terrestrial field were at hand. There is general agreement that it arises from currents moving in the conducting materials of the core as a result of a self-excited dynamo coupled to the Earth's rotation, but secular variation still remains enigmatic. There were more impressive results for the changes in the field on shorter time scales, the diurnal variations and magnetic storms, owing to the patient analysis of Oliver Gish. Radio methods of interrogating the ionosphere had shown a significant decrease in the electron density and the associated currents between day and night. Solar flares had been found to be the source for fluxes of plasma that greatly upset the balance in the ionosphere and caused magnetic storms. Details remained for the infant space science to learn, but a general understanding had been achieved. The work of Lloyd Berkner, Harry Wells, Alvin McNish, E. H. Vestine and Julius Bartels produced new measures of geomagnetic activity that could be used in maintaining radio communication during these abnormal events. With this the Department left the field of study for which it was named. It left an impressive accomplishment by any account (Fig. 9.3).
That an explanation of the origin of the geomagnetic field, Bauer’s goal, did not follow was a disappointment, but hardly shameful, as it has eluded investigators to the present day, although some general characteristics of the nature of the generator are thought to be understood. The Department’s name remained, as “DTM” had become closely associated with nuclear physics, a discipline unconnected with geomagnetism, and somehow it appealed to others, particularly those who were to work in biophysics. A history was collecting about it.
10 THE TESLA COIL
In 1919 Ernest Rutherford reported a very important discovery, the significance of which was not lost on Breit and Tuve. The collision of an alpha particle with an atom of nitrogen produced an atom of oxygen and one of hydrogen. It was the transmutation of one element into another, a scientific goal dating from the earliest alchemy. This opened an astounding new field of research, but, despite its primordial fascination, one that proceeded very slowly because the process was very rare and required large numbers of alpha particles to produce a single event. Rutherford had speculated openly about replacing the tiny flux of particles from a radioactive source with a relatively gigantic stream of alphas or other particles, certainly including protons, that had been accelerated through an electric potential of millions of volts. Typical kinetic energies of alpha particles were of the order of millions of electron volts (MeV), and laboratory gas discharges could produce copious numbers of ions. The combination of a source of ions and an accelerating potential opened possibilities that allowed one to imagine all manner of experiments. Tuve and Breit wanted to build such a voltage source and accelerate ions with it. Fleming persuaded them to do the work at DTM. In taking this risky step, which was precipitated by Tuve considering doing it at Rutherford’s laboratory, he was following Andrew Carnegie’s original dictum of identifying competent scientists and supporting them. The work began about the time Bauer became incapacitated, and there is little to indicate what he thought about it. Whatever others may have thought, Fleming stuck to his decision and gave the new project all the support he could, even when things looked bleak. There were limited methods of producing millions of volts in 1927. Electrostatic machines yielded spectacular sparks at several tens of kilovolts, but that was far too small. 
Transformers had been shown capable of producing very high potentials, both from 60 Hz power current and with the Tesla transformer (Fig. 10.1), which worked at some tens of kilohertz. These were, of course, alternating potentials and would accelerate the particles for only a fraction of a cycle. Suppliers of electric power equipment were prepared to build such equipment, but the costs of iron-core power transformers were well beyond what the Institution or any other research group at the time
Figure 10.1 The Tesla transformer equipment used to determine what voltages could be obtained. The apparatus was immersed in pressurized transformer oil and attained 5 MV. It was quickly determined that oil at atmospheric pressure served just as well. It was learned just about as quickly that the addition of elements needed to form an accelerator reduced the maximum voltage significantly. 1928.
could have afforded. Rutherford's experiment was spectacular only to a small coterie of scientists and did not attract large amounts of money. This left the Tesla transformer. With a 60 kHz voltage source, very high voltages could be obtained from the simple air-insulated transformer that Nikola Tesla had invented and with which he had given sensational displays of discharges in air. As the high-voltage generator of a particle accelerator, however, it had much against it. Not only was its output an alternating voltage; it also had a very short duty cycle, because the oscillator was a spark oscillator, the heart of early wireless telegraphy and still found on most ships of the time. A spark functioned as a switch that discharged a capacitor into a tuned circuit, producing oscillatory currents whose amplitude decayed as the energy was dissipated. Until the primary capacitor was recharged to the potential that caused the next spark, the system was idle; hence the duty cycle, the fraction of time during which the alternating voltage was present, was short. In a typical generator, a high-frequency burst was emitted every millisecond or so. Thus a Tesla-coil accelerator would not only have the deficiencies of alternating current, but would furnish even that alternating voltage only part of the time. In principle the oscillator could have been a source of continuous waves, but this came with serious problems of power consumption and dissipation. As it was, their spark oscillator was quite a beast.
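The duty-cycle penalty can be put in rough numbers. A sketch follows, with the quality factor, oscillation frequency, and spark repetition rate all assumed for illustration; none are measured DTM figures.

```python
import math

# Illustrative values only -- Q, frequency, and spark rate are assumptions.
f = 60e3           # oscillation frequency, Hz ("some tens of kHz")
Q = 10             # quality factor of the lossy spark-gap circuit (assumed)
rep_period = 1e-3  # one spark burst every millisecond or so, as described

# The oscillation envelope decays as exp(-pi*f*t/Q); take the useful part
# of each burst as roughly one 1/e time of that envelope.
ring_down = Q / (math.pi * f)          # seconds of usable oscillation
duty_cycle = ring_down / rep_period    # fraction of time voltage is present

print(f"burst ~{ring_down * 1e6:.0f} us, duty cycle ~{duty_cycle:.1%}")
```

With these assumed values each burst lasts tens of microseconds out of every millisecond, a duty cycle of a few percent, before even counting that only the peaks of each RF cycle accelerate ions usefully.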
When one considers the other problems facing the experimenters, such as an ion source operating at high voltage, the manipulation of beams, the detection of the particles from whatever experiments might take place, just to mention a few, one can appreciate the courage of youth. Breit had worked briefly for the Bureau of Standards on the theory of radio-frequency coils, which provided a background for the design. It was generally thought necessary to achieve 5 MV to initiate a nuclear reaction, as the Gamow barrier penetration theory lay in the future, so the first step was to generate 5 MV, which was done relatively quickly by placing the coil in a cylinder of pressurized oil. From this triumph things deteriorated rapidly. The real problem was the accelerator tube (Fig. 10.2). This had to have electrodes located in high vacuum with the potential so distributed as to form a beam of ions and allow them to be accelerated through the total potential drop. Ion optics was a field in its infancy, so the design was initially guided mostly by intuition. But that was not their big problem. High voltage applied to a vacuum tube caused unwanted discharges within the accelerating system, which all too often damaged the tube and at times the windings of the transformer. The group, which now had Hafstad and Dahl participating, constructed many kinds of vacuum tubes connected in various ways to the generator and saw the Tesla coil destroy nearly all. With time stable tubes could be kept in operation, but the goal of 5 MV had to be substantially lowered. There was one ration of happiness: the oil did not need to be pressurized, an open tank sufficed; however, working in oil was in itself bad enough. 
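The 5 MV goal can be set against the actual Coulomb barrier, which the later Gamow theory would show need not even be reached. A rough classical estimate for protons on lithium-7 follows; the radius parameter r0 = 1.2 fm is a conventional modern value, not a figure from the source.

```python
# Classical Coulomb barrier for p + 7Li: B = Z1*Z2*e^2 / (4*pi*eps0*R).
# In nuclear units, e^2/(4*pi*eps0) = 1.44 MeV*fm.
Z1, Z2 = 1, 3            # proton, lithium
A1, A2 = 1, 7            # mass numbers
r0 = 1.2                 # fm, conventional nuclear radius parameter (assumed)

R = r0 * (A1 ** (1 / 3) + A2 ** (1 / 3))   # touching distance, fm
barrier = 1.44 * Z1 * Z2 / R               # barrier height, MeV

print(f"Coulomb barrier ~{barrier:.1f} MeV")
```

Even classically the barrier for this light target is only about 1.2 MeV, and quantum tunneling lowers the useful threshold much further, which is why Cockcroft and Walton later succeeded at a few hundred kilovolts.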
Parallel to these experiments they worked on developing the skills needed in nuclear physics: particle detection with ionization chambers and Geiger counters; rudimentary ion optics and ion sources; electronic pulse amplification; cloud chambers, eventually equipped with stereoscopic photography; preparation of pure sources of polonium from old radon tubes. Hafstad made the electrometer tube a reliable instrument for measuring the currents from ionization chambers observing the cosmic rays that penetrated into the Department’s deep basement. All these experimental successes were capital on which they were soon to draw, but the Tesla apparatus gave only one tiny advance after another. By the end of 1931 there was evidence of having produced accelerated ions and electrons, but in disappointingly small numbers, not remarkably better than strong radioactive sources. In September 1928 Breit went on an extended trip to Europe to study the new developments in quantum mechanics, following which he accepted a faculty position at New York University. He had withdrawn from experimental work very early, in part because he had no laboratory skills and because his personal relations with the others in the group had reached an impasse, owing to his difficult personality. He had the uncommon ability to turn the most abstract discussion of physics into a bitter argument. In September 1930
Figure 10.2 One form of the Tesla-coil accelerator. At the upper right is seen the primary coil of the transformer, followed on the lower left by the windings of the secondary, which are connected to the high-voltage terminal (the oblong object near the center), which in turn is connected to the accelerator tube proceeding off to the lower left. This assembly was immersed in transformer oil. After nearly five years of effort such equipment succeeded in accelerating a small number of protons and molecular ions, but their minuscule currents and their wide energy spread made for a very poor accelerator. These experiments succeeded, however, in teaching the DTM group how to make a modern accelerator tube in advance of any other laboratory. December 1931.
he announced his desire to return to the Department, a move that met with the immediate approval of President Merriam and the Executive Committee. Breit specifically exempted his future duties from those of the experimental group, but their reaction was immediate and intense – they did not want Breit at the Broad Branch Road address under any circumstances, and Tuve began seeking employment elsewhere. Personal encounters became decidedly unpleasant. Faced with this stance by his colleagues, Breit withdrew his request in April. Those who are aware of the close collaboration between Breit and Tuve during the subsequent decade, when Breit guided the nuclear physics group with sound theoretical knowledge and analyzed their experiments for mutual triumph, will find this difficult to understand. The two were able to cooperate extremely well through letters and Tuve's occasional visits with Breit in New York. Breit never carried a grudge and probably never understood why people became so upset with him. He never succeeded in establishing a relaxed scientific partnership with an equal but continued his dazzling career alone with his students and an occasional collaborating theorist. It has been said that his temperament worked for the benefit of physics, as he was only briefly deflected by an administrative position and remained a pure scientist for the rest of his life. The passage of years with little to show for a sizable expenditure began to tell on President Merriam, who called a conference in October 1932 to evaluate the nuclear physics program and make recommendations. This came at a time when the Tesla-coil accelerator, the result of five years' work, had to be declared an unqualified failure (Fig. 10.3). In September 1931 they had not seen a single proton track in their cloud chamber from the direct beam.
But the situation of nuclear physics took a dramatic turn for the better in late 1931 with the invention by Robert Van de Graaff of a novel form of electrostatic generator, with which the group very quickly gained competence, and the conference reported only praise. These few years present a curious example of how science sometimes progresses. The choice of the Tesla coil as a high-voltage source had much to recommend it at the time but in retrospect can be seriously criticized. The basic limits of a low duty cycle and an alternating potential would have given it, even with the most successful engineering, ion currents many orders of magnitude lower than what was soon obtained with the Van de Graaff; yet even such feeble currents would have been dramatically superior to any radioactive source and could have provided proton beams as well. By entering the field when they did – and it soon began to attract others, especially after Gamow showed the Coulomb barrier of the nucleus was penetrable at energies well below 5 MeV – the DTM group had had to face early the whole assortment of problems that were to define accelerator experimentation. They developed the Van de Graaff into a superb instrument
Figure 10.3 Schematic of the final version of the Tesla transformer accelerator. At left center one sees the primary coil. Extending to the right is a glass cylinder on which thousands of turns of the secondary coil are wound. Lying horizontally is the vacuum accelerator tube that ends at the ion source at the right. At the far left is the Thomson crossed fields analyzer. This apparatus generated significant amounts of X-rays and electrons with MeV energies, creating thereby an artificial source of beta and gamma radiation. Protons and hydrogen molecular ions were observed but at levels below use for nuclear experiments. All accelerated particles had wide distributions of energy. 1931.
Figure 10.4 Dr. Winifred Whitman examining dental X-ray film that has been exposed to gamma radiation in experiments wherein rats were exposed to varying degrees of radiation to determine its toxicity. It was determined that the previously used blood-count data were far too insensitive for controlling the exposure of workers. It was also determined that an observable darkening of the film indicated a practical limit of tolerable worker exposure. This was the origin of the ubiquitous "film badge." November 1930.
for nuclear structure physics and, having already mastered the secondary techniques, found the path open to unparalleled discoveries during the rest of the decade. Tuve was cautious in his approach to the possible hazards of ionizing radiation, but no studies of the biological effects of gamma rays alone existed to guide him, despite the large volume of clinical medical work done with radium. An opportunity arose for studying the biological hazards when 6 grams of radium came to the Bureau of Standards for measurement and was offered for biological studies. The experimental work was carried out by Dr. Winifred G. Whitman (Tuve's wife), a Research Fellow of the School of Hygiene, Johns Hopkins University. A temporary laboratory was provided in the attic during November 1930, where rats were exposed to radiation that had had the alpha, beta, low-level gamma and X-rays filtered out, leaving only the very penetrating gammas (Fig. 10.4). Exposures were varied from 0.5 minutes to 17 hours in roughly geometric proportion. Lethal exposure was found to lie between 3 and 6 hours, all the rats dying within a week or two
for exposures of 6 hours or longer, with death following within 3–4 days for a 16-hour exposure. Eastman extra-fast dental X-ray film was calibrated to the various exposure levels in order to use it as a means of monitoring the radiation received by laboratory workers. This was the first use of the ubiquitous film badge, subsequently worn by millions of workers. It showed a detectable darkening above the film's background at 0.0007 of the lethal dose for rats, which translates into 0.56 rem if one takes 800 rem as the lethal dose. Thus to accept a detectable film darkening as an indication of a worker having been exposed to too much radiation was a prudent guide. Blood counts were made on 66 rats for various degrees of exposure, leading to the conclusion that a discernible change in blood count occurred only at levels far too high for blood counts to serve as a guide for occupational control.
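The dose arithmetic quoted above is easy to verify:

```python
# Values as stated in the text: detectable film darkening at 0.0007 of the
# rat lethal dose, with 800 rem taken as the lethal dose.
lethal_dose_rem = 800
detect_fraction = 0.0007

detect_rem = detect_fraction * lethal_dose_rem
print(f"detectable darkening corresponds to {detect_rem:.2f} rem")
```

The product is 0.56 rem, matching the figure given in the text.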
11 THE VAN DE GRAAFF ACCELERATOR
Electrostatic generators had been invented in the latter part of the eighteenth century, but all were limited in their maximum voltages by the insulation available to a tabletop machine. In 1931 Robert Van de Graaff published his invention of an electrostatic generator – now well known to the public and at school science fairs – that could have insulation limited only by the size of the support column. He knew, of course, of the DTM high-voltage work and offered to cooperate in evaluating his machine as an accelerator. Tuve brought two small machines along with the inventor from Princeton for a crucial experiment: could one of the vacuum accelerator tubes made for the Tesla coil hold the voltages applied? This was important because one of the arguments in favor of alternating voltages had been that direct voltages were difficult to handle, something that was certainly true for electric power. The two machines were set up in the Experiment Building where, despite the restricted space and their relatively small size, they delivered 500 kV to 600 kV, but best of all one of the Tesla-coil tubes was found to hold such potentials with no effort and without oil! Above 550 kV sparks occurred across the whole tube but without damaging it, owing to the relatively small amount of energy released. This tube had evolved through a survival of the fittest in the Tesla jungle, and the electrostatic generator could not destroy it. This evolution had thus produced what was in effect the modern accelerator tube: a vacuum tube with a long, axial distribution of electrodes across which the total potential was divided into many small sections and which was a fundamentally good ion-optic design. This design plus the assortment of allied skills the group had learned was to give them a strong advantage in the new field of nuclear physics. The question immediately arose as to the amount of voltage a Van de Graaff could provide. 
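A first estimate of that attainable voltage is simple: for an isolated sphere the surface field is V/r, so the potential at which the surrounding air breaks down scales with the radius. A sketch using the textbook breakdown field of air, about 3 MV/m, which is a modern round number rather than a figure from the Department's records:

```python
# Isolated-sphere limit: surface field E = V / r, so V_max = E_breakdown * r.
E_breakdown = 3.0e6   # V/m, approximate dielectric strength of air (assumed)
radius = 1.0          # m, i.e. the 2 meter diameter electrode

v_max_ideal = E_breakdown * radius   # volts, ideal isolated-sphere limit
print(f"ideal limit for a 1 m radius sphere ~{v_max_ideal / 1e6:.0f} MV")
```

Nearby grounded structures, the support column, the belt opening, and weather all lower the practical figure well below this ideal few-megavolt limit.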
A straightforward approach considered the dimension of the high-voltage sphere to be the limit. The potential that would produce an electric field at the surface for which the insulation of the air would fail was proportional to the radius, leading Tuve to order two hemispheres spun from aluminum from which an electrode of 2 meter diameter was assembled. This was mounted outdoors on an insulating composition cylinder within which ran the famous charging belt. A long accelerator tube was connected
Figure 11.1 Test of the ability of an accelerator tube of the kind developed for the Tesla coil to hold the million volts attained by this Van de Graaff generator. When this vacuum tube held the voltage successfully, plans were begun for the Experiment Building Annex and the 1 MV machine. May 1932.
to the sphere with the other end at ground potential where it was pumped to high vacuum. Potentials exceeding 1 MV were achieved and the tube held (Fig. 11.1). At this point two parallel paths were followed. A large extension to the Experiment Building was ordered to house an accelerator based on the 2 meter electrode. A smaller accelerator with a 1 meter electrode that reached about 400 kV was built in the Experiment Building in order to work out the accelerator design and to learn how to use it (Fig. 11.2). Things had become more exciting because Cockcroft and Walton had just “split the atom” at Rutherford’s laboratory by bombarding lithium and other elements with protons at energies within the range of the little machine being built in the Experiment Building. The Van de Graaff generator quickly displayed a remarkable and very welcome characteristic: it provided naturally a very steady voltage. A lot of time would be spent with the 2 meter machine developing the accurate voltage standards and controls that would turn it into a precision instrument. Soon the machine could be relied upon to 0.5% in voltage. A completely new
Figure 11.2 The 400 kV Van de Graaff located in the Experiment Building. The charging belt runs vertically. The accelerator tube was identical in design to those used with the Tesla equipment. It was later replaced by one with fewer sections and simpler construction. The experimenters worked at the rear and were protected by a lead sheet against the X-rays emitted by electrons passing up the tube and striking the ion source. 1932.
form of ion source was also developed, one that used a low-voltage arc and in so doing provided the accelerator with a beam having very little spread in ion energy. A variety of ion species come from such sources, so they placed a magnet at the exit to analyze the beam by mass. This separation of atomic from molecular ions of both hydrogen and deuterium beams was soon to give them a singular advantage.
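The analyzing magnet separates species because ions of equal charge accelerated through the same potential have magnetic rigidities that scale as the square root of the mass. A sketch for singly charged hydrogen beams; the 1 MeV energy and 1 T field are illustrative assumptions, not parameters of the DTM machine.

```python
import math

# Nonrelativistic rigidity: B * rho = sqrt(2 * m * E_kinetic) / q, so at
# fixed charge and accelerating voltage the bend radius scales as sqrt(m).
amu = 1.6605e-27   # kg per unified mass unit
q = 1.602e-19      # C, single charge
E = 1.0e6 * q      # 1 MeV of kinetic energy, joules (illustrative)
B = 1.0            # tesla (illustrative)

def bend_radius(mass_amu):
    """Bend radius in metres for a singly charged ion of the given mass."""
    return math.sqrt(2 * mass_amu * amu * E) / (q * B)

r_p = bend_radius(1.0)    # protons, H+
r_h2 = bend_radius(2.0)   # molecular hydrogen ions, H2+
print(f"H+ {r_p:.3f} m, H2+ {r_h2:.3f} m, ratio {r_h2 / r_p:.3f}")
```

The radii differ by a factor of sqrt(2), an easy separation for a modest magnet, which is what made clean atomic and molecular beams of hydrogen and deuterium routinely available.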
The use of corona current, the current that passes through the atmosphere from a sharp point at negative potential relative to its surroundings, proved unexpectedly to be a very useful phenomenon for working with high voltages. A needle placed a few centimeters from a smooth positive electrode was found to draw a rather steady current, dependent primarily on the distance and the potential difference. This became useful as a high-voltage resistor. It was important that the potential differences between the many electrodes in the accelerator tube be equal, and it was found that this could be done by placing equally spaced needles on the negative side of each accelerating electrode. The technique was found very useful in controlling the current being applied to the moving belt. Ironically, sharp points, theretofore thought to be anathema for high-voltage work, became a standard design component. Their knowledge of ion optics had reached the stage where they knew that an electrostatic lens of adjustable voltage was required to take the ions leaving the source with a spread of angles and focus them into the accelerator tube. This required an adjustable high-voltage supply in the 2 meter shell where space and power were at a premium. They accomplished this by placing a smaller electrode within the shell, charging it with the same belt and adjusting its potential relative to the large shell by the amount of corona current drawn. The belt that carried the charge to the high-voltage terminal proved to be an irritating mechanical problem. It had to be made of material that was a good insulator yet strong enough to function as a belt. Silk and paper were tried with various kinds of coatings, but they tore after limited use. The fabric employed by airships proved to be the best solution for the open-air machines.
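The point of grading the tube with corona needles can be seen with trivial bookkeeping: if each gap draws the same corona current, the gradient along the tube is uniform and each of the N gaps holds only V/N. The section count below is an assumption for illustration; the text does not give one.

```python
# Uniform grading of the accelerator tube: each gap holds V / N.
terminal_kv = 1000    # 1 MV on the terminal
n_sections = 50       # number of accelerating gaps (assumed for illustration)

per_gap_kv = terminal_kv / n_sections
print(f"each gap holds ~{per_gap_kv:.0f} kV")
```

A few tens of kilovolts per gap is easily held by a short vacuum section, whereas an ungraded tube would concentrate the full potential across whichever gap broke down first.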
By November 1932 the 1 meter machine was delivering 1–3 microamperes on a spot a few millimeters in diameter at the target with essentially complete transmission of the beam leaving the ion source. These currents seemed enormous after the grim days with the oil machine. They reproduced the Cambridge experiments done at Rutherford's laboratory in which very energetic alpha particles came from protons bombarding lithium and began to study the process as a function of proton energy. Rutherford's people had also reported seeing energetic alphas from a number of elements throughout the periodic table in addition to those from lithium and boron, but the DTM group was unable to reproduce these results for highly pure aluminum, nickel and silver and soon found that alphas reported at Cambridge had resulted from boron contamination. Boron is a common contaminant and has a very high cross section, i.e., a high probability for reaction, for relatively low-energy protons. The 1 meter machine was found to produce a substantial flux of X-rays, primarily from electrons moving up the tube to the terminal, but the group was well prepared to deal with this health hazard because of Dr. Whitman's systematic examination of the effects of radiation on rats. Film badges
were required of everyone working in the vicinity of the accelerator. Tuve’s approach to the possible dangers of long exposure to ionizing radiation marked DTM from the beginning. These concerns were the origin of his research interests in biology, which became more important as he recognized the possibilities for using accelerator-produced radioactive tracer isotopes in that field. By October 1933 the 2 meter machine had been erected in the spacious room constructed for it and had produced a beam in the target room below. This marked the beginning of efforts to master the new instrument. The first order of business was to determine the accelerating voltage as well as learn to control it. Tuve’s group were unique in having a stable-voltage accelerator capable of precision. Measuring very high voltage was a problem of its own, and the first method used a “generating voltmeter,” a device found on modern Van de Graaffs. In it a rotating fan-shaped blade faces the terminal, generating an alternating voltage that is amplified electronically with its value proportional to the electric field. The means of calibration were initially poor and the measurements were too high. Another method, which is now the ultimate in machine control, is the amount of magnetic field needed to deflect a given ion beam. This gave better results, but the geometry of the beam path was not well enough determined to give good accuracy. The eventual method became the fundamental one of using Ohm’s law. This required constructing a very large resistor and determining its value. By connecting it to the terminal and measuring the current passing through it to ground, the terminal potential was established. Building such a resistor fell to R. G. Herb of the University of Wisconsin as a summer project at the Department in 1935. Here corona current was a problem not a help, as any such current bypassed the resistor chain and gave a false result. 
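The Ohm's-law scheme just described is easy to put in numbers: with a 10 000 megohm chain from terminal to ground, a 1 MV terminal drives a current that is comfortably measurable with an ordinary meter at ground potential.

```python
# Terminal voltage from Ohm's law: measure I through a known R to ground.
R = 10_000e6   # ohms: the 10 000 megohm calibrated resistor chain
V = 1.0e6      # volts: terminal at 1 MV

I = V / R
print(f"current through the chain: {I * 1e6:.0f} microamps")
```

The 100 microamp drain is steady and well defined, so reading the current at the grounded end gives the terminal potential directly; any stray corona current bypassing the chain, as the text notes, falsifies the reading.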
The solution was to link a large number of individually calibrated carbon resistors within plastic tubing, spiraling them around the inside of a large insulating cylinder, forming a 10 000 megohm resistor. The device was a success, and the machine voltage problem was solved (Fig. 11.3). Closely related to the voltage problem was the discovery by the group of resonances in reactions induced by protons. There had been indirect evidence for such a phenomenon, and Breit had urged the use of the controlled variable-energy beam of the Van de Graaff to search for them through the emission of gamma rays. They found them in lithium at 440 keV and in fluorine at 328, 892, and 942 keV, with resonant widths from 4 keV to 15 keV. In addition to being a discovery that opened the enormous discipline of nuclear spectroscopy, paralleling that of atomic and molecular spectroscopy, the resonances provided voltage standards allowing easy verification of a machine's energy and the comparison of work done at different laboratories. In 1932 Harold C. Urey demonstrated that hydrogen was composed of two isotopes, the mass-two or heavy hydrogen being present at about one
The Department of Terrestrial Magnetism
Figure 11.3 The 1 MV Van de Graaff, located in the Experiment Building Annex. The charging belt runs through the high-voltage terminal from one side of the room to the other. At the right and below it is a fabric belt for driving an ac generator that supplies current for the ion source. The accelerator tube through which the ions pass into the experiment room below extends from the large electrode to the floor. This electrically divided vacuum tube evolved from the tubes used on the Tesla-coil accelerator and quickly became standard for electrostatic accelerators. The diagonal cylinder at the right holds the 10 000 megohm resistor that determined the voltage of the machine and established the energies of nuclear resonances. Norman Heydenburg stands to the right. The accelerator was completed in July 1933. The high-voltage resistor was made by the visitor Raymond G. Herb two years later. April 1936.
part in 7000 in natural hydrogen. By December 1933 deuterons (at the time called deutons) were being accelerated using heavy water samples furnished by the Bureau of Standards. Urey's discovery had caused immediate interest in all the laboratories working with accelerators, which in 1933 meant Rutherford at Cambridge (voltage doubler fed from a power transformer), E. O. Lawrence at Berkeley (11 inch cyclotron) and C. C. Lauritsen at Pasadena (alternating 60 Hz voltage), all limited to about 1 MV. Deuterons excited a number of new reactions, notably ones producing neutrons far more copiously than the alpha-particle bombardment of beryllium that had been the main source until then. The deuteron-excited reactions quickly opened a dispute between Tuve and Lawrence, who had reported neutron and proton emission (of the same energy) from every element bombarded with deuterons. Lawrence had been invited to the exclusive Solvay Conference because of his invention of the cyclotron and had presented these results, which he said came from the Coulomb disruption of the deuteron. Tuve soon learned that what was being observed came from the reaction of deuterons with the deuterium lodged in the target material by the beam itself: one could watch the neutron or proton yield grow from zero as the beam bombarded any metal target. Two reactions were responsible for the observations. Two deuterons coalesce to form a helium-4 nucleus in a highly excited state that immediately decays with approximately equal probability into either a neutron and a helium-3 or a proton and a hydrogen-3. These reactions were to form the basis of much neutron research and later to figure in the design of nuclear weapons. There were many other reactions that the group investigated, with almost every month bringing incredible excitement, but the ultimate goal was of a more fundamental nature – one decided very early by Breit and Tuve – the scattering of protons by protons.
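The two branches can be checked against the mass balance. A short sketch using modern atomic-mass-table values (these are present-day figures, not numbers from the 1933 work):

```python
# Q-values of the two d + d branches from atomic mass differences.
# Masses in unified atomic mass units (modern table values).
u_to_MeV = 931.494     # energy equivalent of 1 u, in MeV
m = {"d": 2.014102, "He3": 3.016029, "n": 1.008665, "t": 3.016049, "H1": 1.007825}

Q_neutron_branch = (2 * m["d"] - (m["He3"] + m["n"])) * u_to_MeV   # d + d -> He-3 + n
Q_proton_branch  = (2 * m["d"] - (m["t"]  + m["H1"])) * u_to_MeV   # d + d -> H-3 + p

print(round(Q_neutron_branch, 2), round(Q_proton_branch, 2))  # about 3.27 and 4.03 MeV
```

The near-equality of the two energy releases is consistent with the roughly equal branching the group observed.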
The success of the electrostatic generator naturally led to plans for higher voltages. There were two ways: make very large machines, or increase the insulating strength of the air (or other gas) by raising its pressure. Natural gas was then being stored under pressure in welded steel tanks, so in 1935 Tuve approached the Chicago Bridge and Iron Works, which had pioneered such tanks, for the design of a suitable vessel. The use of charging belts was still filled with grief, so Dahl proposed using spinning discs of the kind found in the old electrostatic machines, stacked in tandem in the vertical column; Tuve thought in terms of cascaded transformer rectifiers. With successful experience using belt charging, these ideas were forgotten. Pressure-tank Van de Graaffs were being planned at the University of Wisconsin, the University of Minnesota and the Westinghouse Laboratories (Pittsburgh); Wisconsin selected relatively high pressure and small size, the design that ultimately prevailed, whereas DTM and the other two selected low pressure and large tanks. These projects generated newspaper articles heralding high-voltage atom smashers.
An initial design with a large spherical tank ran into objections from neighbors, who complained to the zoning board about its industrial appearance. There soon came from the drawing board a pear-shaped tank with the lower part enclosed by a cylindrical brick structure that gave it the appearance of an astronomical observatory. As it was widely believed that the Naval Observatory on Massachusetts Avenue had improved real estate values, the new design met little opposition, and the resulting machine was called the Atomic Physics Observatory or APO.
12 THE NUCLEAR FORCE
The fundamental rule for theorists trying to unravel the intricate structures of nuclei is that the nuclear component of the force between two nucleons, be they two protons, two neutrons or a proton and a neutron, is attractive and identical. This rule, the foundation of nuclear structure theory, came about in 1935 from experiments done by Tuve, Hafstad and Norman Heydenburg, who had recently joined the group. The first and most difficult of the experiments was the scattering of protons by protons. This had been in the minds of Breit and Tuve from their earliest days; it simply seemed to be the most fundamental experiment they could undertake. It was also just about the most difficult that they could have done at the time, an experiment that demanded the ultimate in precision accelerator techniques. The experiment done by Rutherford in 1911 established the way in which many nuclear properties have since been discovered. He collimated alpha particles from a radioactive source into a narrow beam that he passed through a very thin foil of gold. Nearly all of them continued through without significant deflection, but a very small fraction were scattered through large angles, some by nearly 180°. This result was explained by an astounding theory: that the overwhelming mass of the gold atom and all of its positive charge were concentrated in a tiny volume, the nucleus. In this experiment the force operating between target nucleus and projectile was the Coulomb electric force, following its inverse-square law down to sub-atomic dimensions. Rutherford followed with studies in which the targets were much lighter elements with correspondingly smaller nuclear charges. In these experiments he observed deviations from pure Coulomb scattering that indicated another force, a purely nuclear one, was also at work. From this work he deduced the approximate radius of a nucleus.
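The size estimate follows from the distance of closest approach in a head-on Coulomb collision, where all of the projectile's kinetic energy is converted to potential energy. A sketch with representative numbers (a 5 MeV alpha particle on gold; an assumed figure, not one quoted in the text):

```python
# Distance of closest approach: E = Z1*Z2*e^2 / (4*pi*eps0*d)  =>  d = Z1*Z2*k / E
k = 1.44          # e^2/(4*pi*eps0) in MeV*fm
Z_alpha, Z_gold = 2, 79
E = 5.0           # alpha kinetic energy in MeV (representative of radioactive sources)

d = Z_alpha * Z_gold * k / E
print(round(d, 1), "fm")  # roughly 45 fm: an upper bound on the gold nuclear radius
```

Because the alphas still scattered as point charges at this distance, the gold nucleus had to be smaller than a few tens of femtometers, orders of magnitude below the atomic radius.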
From this background it was reasonable to expect that the nuclear force would show up strongly for two particles that had only one electric charge each. Breit's studies of the quantum mechanics of scattering, a subject whose fundamental principles had been laid down while the Tesla-coil work was going on, indicated that the scattering cross section would have to be studied as a function of scattering angle and incident proton energy. The early attention given to determining the accelerator voltage had its origin in the need to know it accurately for the proton–proton experiment.
A second difficulty was measuring the flux of protons scattered at a given angle. The target could not be a thin foil but had to be a chamber filled with hydrogen. Because the two particles were of equal mass, all scattering took place between 0° and 90°. The detector was an ionization chamber connected to a high-gain electronic amplifier and positioned from outside the chamber. It was of necessity very small and could have no foil window separating it from the chamber gas, as any foil would be thick enough to severely reduce the energy of the incident protons and to stop the lowest-energy ones completely. This was solved by leaving the counter window open and using the chamber hydrogen as the counter gas as well as the target gas. A third difficulty was the measurement of the incident proton current, to which all data had to be normalized. This could not be done in the chamber directly because of the ionization of the chamber gas along the beam path. After many trials the current was measured by scattering the beam from a gold foil at the end of the chamber into another counter. This required calibration of the counter against a current measurement, but it proved stable and reliable (Fig. 12.1). The results of these measurements could be viewed as a series of plots of cross section against scattering angle, each for a given energy. Casual consideration of these data could disclose nothing about what had transpired in the nuclear collision. Analysis by quantum mechanics had three complications with which Breit and Edward Condon had to contend: the particles were identical, both had spin 1/2, and they exerted both nuclear and Coulomb forces. One of the peculiarities of quantum mechanics is that in such an encounter it is in principle impossible to distinguish projectile and target; this goes beyond the inability of the observer to know which he counts but is fundamental to the wave function.
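The analysis that follows reduced all of these data to a single S-wave phase shift, reproduced by a square well of depth 11.1 MeV and radius 2.82 × 10⁻¹³ cm. As a rough illustration of that connection (neglecting the Coulomb amplitude and the spin complications Breit and Condon had to treat), the S-wave phase shift of a square well follows from matching the interior and exterior wavefunctions at the well edge:

```python
import math

# S-wave phase shift for an attractive square well, neglecting Coulomb effects.
# Well parameters are those quoted for the p-p analysis; the beam energy is illustrative.
hbar_c = 197.327                  # MeV*fm
mu = 938.272 / 2                  # reduced mass of two protons, MeV/c^2
V0 = 11.1                         # well depth, MeV
a = 2.82                          # well radius, fm (2.82e-13 cm)
E = 0.4                           # center-of-mass energy, MeV (illustrative)

k = math.sqrt(2 * mu * E) / hbar_c          # exterior wavenumber
K = math.sqrt(2 * mu * (E + V0)) / hbar_c   # interior wavenumber

# Matching the logarithmic derivative of u(r) at r = a gives:
#   delta = atan((k/K) * tan(K*a)) - k*a
delta = math.atan((k / K) * math.tan(K * a)) - k * a
print(round(math.degrees(delta), 1), "degrees")
```

The resulting phase shift of a few tens of degrees is of the same order as the values extracted in the 1930s analyses, though a quantitative comparison requires the Coulomb terms this sketch omits.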
That the particles have spin (alpha particles have spin 0) requires describing the scattering by two interfering partial-wave amplitudes, which also interfere with the Coulomb amplitude. Thus the pair of papers by Tuve et al. and Breit et al. that appeared in 1935 opened new experimental and theoretical methods of studying the nucleus. The analysis provided a wonderful result: the entire complicated set of data could be quantitatively described with a single function of beam energy, interpreted as the phase shift of the partial wave having no orbital angular momentum relative to the target (the S wave), that is, the phase shift of the scattered wave relative to the incident wave. The next surprise was that this phase shift could be reproduced by a simple attractive potential well, having a depth of 11.1 MeV and a radius of 2.82 × 10⁻¹³ cm (Fig. 12.2). The next experiment was obvious – neutron–proton scattering. Here the experimental technique had to be completely different. Reactions yielding energetic neutrons could provide the necessary incident beam but at intensities much too low for the kind of arrangement used for proton–proton scattering. Here the painful years working with the Wilson cloud chamber paid off. This
Figure 12.1 Schematic drawing of the scattering chamber used for the first proton–proton experiment. The inscribed text should guide the reader. The design went through two more modifications. The next used scattering from a gold foil at the end of the beam path to measure the total current. In experiments taken up at Breit’s insistence after the war the aluminum foil separating the gas of the chamber from the vacuum of the accelerator tube was replaced by differential pumping.
well-known device, the output of which fills textbooks with photographs of nuclear processes, is extremely difficult to make function properly. The chamber gas must have some agent that can form droplets about ions when the pressure is rapidly reduced. If intense lamps are flashed on expansion and a stereoscopic camera exposed, three-dimensional data can be obtained. The results recorded on 35-mm movie film are then examined and the few tracks from a recoiling proton measured. The data were similar to the proton– proton experiment but the analysis did not have to contend with Coulomb
Figure 12.2 The experimental results of the DTM proton–proton scattering experiment. If the force between protons were given by the Coulomb force alone the result would be Mott scattering. The ratio of the measured values to the Mott value is plotted against the scattering angle for four different beam energies. The deviations from unity result from the nuclear strong force. From these data Breit demonstrated that the nuclear force was attractive and was the same as the force between neutron and proton. This fact has remained the foundation of all calculations of nuclear models.
terms. What was important was that the potential well was the same as for the proton–proton interaction. With this the group presented to the scientific world one of the most important discoveries of the decade – that the nuclear force between proton and proton was the same as between neutron and proton (Fig. 12.3). The generation of copious fluxes of neutrons with deuteron beams, especially from beryllium, came at the same time as news of a spectacular discovery by Enrico Fermi in Rome, who had begun experimenting with neutrons from a radon–beryllium source of the kind Chadwick had used. In 1934 he noticed that the neutrons were absorbed, making neutron-rich isotopes that were frequently radioactive, and was soon struck by a much higher probability of capture than his estimates predicted. Indeed a few isotopes were produced at rates several orders of magnitude greater than expected. The explanation was not long in coming. The neutrons were generated from the alpha-induced reaction with energies in the MeV range, but on leaving the reaction they encountered matter in some form and generally underwent elastic collisions. These reduced their energy, and a few such collisions caused them to be thermalized, i.e., to take on the energy characterizing the temperature of the scattering material, about 0.025 eV. If one placed hydrogen-rich material near the source, the neutrons were thermalized rapidly and remained nearby, making the effects easy to observe. The high probability of capture lay in the wave nature of particles and the many resonant states that could be formed in the compound nuclei created by the addition of neutrons. Fermi had no accelerator and hence no way of making the enormous numbers of neutrons open to the DTM group, so a mutually advantageous collaboration was entered into. Edoardo Amaldi, one of Fermi's students, came to the Department in the fall of 1936 as a visiting investigator with the object of doing experiments with thermal neutron fluxes of unheard-of intensity.
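The 0.025 eV figure is simply the thermal energy kT at room temperature, and it fixes the familiar 2200 m/s speed of thermal neutrons. A quick check (the constants are modern values):

```python
import math

k_B = 8.617e-5            # Boltzmann constant, eV per kelvin
T = 293.0                 # room temperature, K
E = k_B * T               # thermal energy in eV
print(round(E, 3), "eV")  # about 0.025 eV

# Corresponding neutron speed from E = (1/2) m v^2
E_joules = E * 1.602e-19
m_n = 1.675e-27           # neutron mass, kg
v = math.sqrt(2 * E_joules / m_n)
print(round(v), "m/s")    # about 2200 m/s
```

At such low speeds the neutron's de Broglie wavelength is large, which underlies the high capture probabilities described above.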
A 1250 liter tank of water was mounted so that the deuteron beam from the 2 meter machine could bombard a beryllium target within the container. Diffusion theory allowed the flux to be calculated at various distances from the target, which allowed absorption cross sections to be determined for small samples placed within the bath. This was obviously the best method of producing large numbers of thermal neutrons, and so the plans for the new pressure-tank Van de Graaff, the APO, included a concrete cylindrical pit 8 feet in diameter and 8 feet deep in the floor of the new target room to serve as a water tank. When it was ready, events had caught up with science in the form of uranium fission, and it was never used. Because of its atomic number of 3, lithium had been a favorite element for bombardment, and Breit made an important analysis of the energy dependence of the 8.7 MeV alphas that came from proton bombardment of ⁷Li. There was a lower-energy group of alphas that presumably came from ⁶Li, and the situation became complicated when the bombarding particles
Figure 12.3 Lawrence Hafstad and Norman Heydenburg in the experiment room located below the 1 MV Van de Graaff accelerator. At the upper right is the electromagnet for separating the various mass components of the beam, generally protons, deuterons and hydrogen molecular ions. April 1936.
were deuterons. In order to untangle this, microscopic samples of the two isotopes, separated with a mass spectrometer, were brought from the Bartol Research Foundation by Lynn Rumbaugh, who in 1937 undertook a detailed study with Richard B. Roberts, a new Carnegie Fellow and future staff member. Their work disclosed 18 reactions excited by the beams of the two hydrogen isotopes. The variety of radiation that came from these reactions and the poor energy resolution of the detection mechanisms made the untangling rather more than ordinarily complicated. One reaction, from the bombardment of ⁶Li by deuterons, was initially missed: it produced neutrons that were masked by those from the deuterium self-target. Neutrons from ⁶Li should leave ⁷Be as a residual nucleus, but that was an isotope as yet unobserved. It would have 4 protons and 3 neutrons and, since there was no stable ⁷Be, it would have to be radioactive if it existed at all. Until then, proton-rich radioisotopes were known to emit easily identified positrons, but none were observed. Roberts did find radioactivity in the residue, but the only radiation was gamma, never before seen alone in radioactive decay. Chemical analysis showed the activity was beryllium. The decay process was the newly discovered electron capture of an atomic electron by the nucleus; the gammas came from the roughly 10% of captures that led to an excited state of the daughter ⁷Li, which then emitted the gamma. The remaining 90% went directly to the ground state, leaving no trace but a neutrino and a very weak X-ray. This new isotope proved to be a key to understanding one of the solar energy cycles. In February 1936 the nuclear physics group lost Dahl, who had undertaken most of the detail of the engineering design. He had originally been recruited for the Department through the office of Harald Sverdrup, who later opened the possibility of a position for Dahl back in Norway on a visit he made to the Department in 1935.¹
13 FISSION
In January 1939 the Institution and George Washington University were joint hosts of the Fifth Washington Conference on Theoretical Physics. These meetings had come about through the desire of the DTM group to draw on the best guidance possible and the need of George Gamow, Professor of Physics at George Washington, to circumvent the isolation from other theorists that had resulted from his coming to Washington. Gamow had fled the Soviet Union with his wife and taken temporary residence in Niels Bohr's institute in Copenhagen, and Tuve had recommended him for the faculty position, hoping to profit from the mind that had explained alpha-particle radioactivity. The conferences were a condition of his employment. Later Edward Teller joined Gamow at George Washington. The conferences proved very popular with the leading theorists of the time because of the relaxed way in which the sessions were conducted, no formal papers or abstracts being required and, of course, no published proceedings. The first conference took problems in nuclear physics as its subject and was an acknowledged success. The fourth had probed the problem of stellar energy and was distinguished because Hans Bethe had worked out, while on the train back to Cornell, the first of the two major chains of nuclear reactions that drive the Sun. The 1939 conference had low-temperature physics as its topic but had nevertheless drawn theorists with interests in nuclear physics, fragmentation not yet having exerted the strong hold it now has. The morning of the first day went normally, but after lunch Bohr made an informal announcement, prompted by rumors that had begun to circulate. He reported that before leaving Copenhagen he had learned that Otto Hahn and Fritz Strassmann of the Kaiser Wilhelm Institute had found radiochemical evidence for the fission of uranium, for which a paper was in press in Die Naturwissenschaften. Fermi immediately suggested a simple experiment to verify the astounding phenomenon.
Roberts and Hafstad left the conference, which continued on with low-temperature studies, and prepared to do it. The discovery of the properties of thermal neutrons had brought with it an array of artificially produced radionuclides, all beta emitters that left the absorbing nuclei with a positive charge one unit higher. It occurred to Fermi that one could make an element never seen before by irradiating uranium,
Figure 13.1 Tuve and Fleming at the entrance of the newly completed tank for the Atomic Physics Observatory, the pressure-tank Van de Graaff. 1938.
the element with the highest atomic number, with thermal neutrons. He had done so and found radioactivity with several different half-lives but no clear-cut answers. Hahn and Lise Meitner set about clearing up the matter; they were the outstanding radiochemists of the time, had pretty much mapped out the chains of natural decay from uranium and thorium, and were pleased to accept this new challenge. But the chemistry of elements not encountered before proved tricky, and the number of radioactivities grew out of all reason. Fission, the splitting of the uranium nucleus into a variety of pairs of more or
Figure 13.2 Construction of the Atomic Physics Observatory, the pressure-tank Van de Graaff. The 10 ton high-voltage terminal is suspended by cables from the completed tank and the porcelain support columns are being built up to it. The heavy static weight provided stability to the columns. 1938.
less equal fragments, yielded neutron-rich isotopes, which were beta emitters and which explained their bizarre data. Hafstad and Roberts joined Robert Meyer, who had replaced Dahl as the group’s engineer, in setting up the experiment. They used the new pressuretank Van de Graaff to produce copious numbers of neutrons that were thermalized with blocks of paraffin. A small amount of uranium was introduced
Figure 13.3 Fission demonstrated at DTM on 28 January 1939. Two days earlier news of fission became public knowledge at the Fifth Washington Conference on Theoretical Physics sponsored jointly by the Carnegie Institution and George Washington University. The experiment demonstrated it as a nuclear as well as a radio-chemical phenomenon. Left to right: Robert Meyer, Merle Tuve, Enrico Fermi, Richard Roberts, Leon Rosenfeld, Erik Bohr, Niels Bohr, Gregory Breit and John Fleming. The experiment was set up by Roberts, Hafstad and Meyer.
into an ionization counter that, on irradiation, showed enormous pulses, indicating an energy release of 200 MeV, close to the amount estimated. On Saturday evening (28 January) Roberts and Meyer demonstrated fission (not yet so named) to Bohr, Fermi, Breit, Teller and Rosenfeld, among others. Once he had seen that all was going well, Hafstad left the demonstration to the younger Roberts, as he had a chance to go skiing (Fig. 13.3). A similar confirmation had been done first at Copenhagen and, at essentially the same time as at DTM, in Berkeley, New York and Baltimore. The press had picked up the news at the conference, and sensational but essentially true stories of fission – and of a possible bomb – were there for the public to read. That the process had been discovered in a Germany controlled by Hitler was in every mind. Roberts followed the verification experiment with a discovery of major importance. It was clear that neutrons came from the fission process, although not at all clear how many. There was also the question whether they came simultaneously with fission or later from some of the many radioactive decay products, some of which might become unstable to neutrons after emitting a
beta. The experiment was simple and yielded a quick answer: yes, a small number of neutrons came off tens of seconds later. These "delayed neutrons" meant that it would be possible to control a neutron reactor; had all been prompt, only a bomb could have been made. Communication intensified between the various groups experimenting with uranium. It was a hot topic. An article in the January 1940 issue of Reviews of Modern Physics cited nearly a hundred articles published since the announcement of the discovery. There had been nothing to compare with it since the discovery of X-rays. Fermi concentrated on studying moderators, the materials for thermalizing neutrons, with an eye toward a reactor, but drew on DTM's cross-section data for fission by fast neutrons in order to estimate the critical mass for a bomb. In a letter dealing primarily with details of the analysis of the proton–proton work, Tuve complained to Breit about "secret meetings . . . regular war scare." A voluntary suppression soon stopped the natural flow of US scientific publications on fission. Thoughts about uranium were mixed with thoughts about the future direction of the group's work. The discovery of a spectrum of artificial radioisotopes aroused interest in applying them as tracers in biological or medical investigations. To this end Department plans took shape around a copy of Lawrence's 60 inch cyclotron, then under construction. The cyclotron differed significantly from the Van de Graaff accelerator in having a much larger internal beam, but one that was very difficult to extract to form a well-defined beam for precision measurements. Furthermore, the accelerated particle energy was not variable, and the machine emitted prodigious amounts of background radiation that could overwhelm measurements. But its large internal beam was perfect as a factory for radioisotopes, both proton and neutron rich, and this was the characteristic needed for tracer studies.
Plans were made for building a 60 inch machine with funds obtained from the Carnegie Corporation. The accelerator was to be located below ground with shielding that would meet the most critical present-day standards. Above it was to be a laboratory building designed for immediate use of the generated isotopes (many interesting ones have short half-lives) in a number of separate experiment rooms. The planning included interested parties from the National Cancer Institute, the Department of Agriculture and the Johns Hopkins University. Philip Abelson and Dean Cowie brought their experience with cyclotrons at Berkeley to the task. Construction was well under way in summer 1939, only months behind the planning. Chapter 18 provides details of construction and operation. Experiments did not wait for the new machine. In summer 1939 Roberts produced radioactive ²⁴Na and helped Louis Flexner use it to measure placental transmission between the maternal and fetal blood circulations in pregnant rats. The experiments showed that the fetal mass was determined by genetic forces alone. James H. C. Smith of the Institution's Division of Plant
Biology studied the way in which plants take up carbon dioxide in the dark, using ¹¹C prepared with the pressure-tank machine. Federal organization of research on a uranium bomb began on 21 October 1939 at the instigation of President Roosevelt with the formation of the Advisory Committee on Uranium, with Lyman Briggs, Director of the Bureau of Standards, serving as chairman. Tuve was named to the committee, with Roberts as his alternate. Although DTM was making many measurements relevant to the growing bomb project, neither Tuve nor Roberts was enthusiastic. By then the need for separating isotopes on an industrial scale had begun to show itself as a first and costly step, and the two thought the drain on resources would be too great for a weapon that would probably come too late to affect the course of the war. Roberts later frequently quipped: "And we were right too." Tuve told this author: "I didn't want to make an atomic bomb." Both left the committee and applied their talents, along with the other members of the group, to a different defense problem.
14 COSMIC RAYS
Research in cosmic rays centered on Scott Forbush; indeed, one could say with fair accuracy that it was a Department field that belonged to him alone. He was joined by visiting investigators, enjoyed associations with colleagues worldwide and had the help of a capable assistant, but he was the only staff member who devoted a significant part of his time to the discipline. Forbush joined the Department in 1926 as an observer at the Huancayo Observatory and served on the Carnegie for the last part of its final cruise, on which he made, among other measurements, cosmic-ray observations. Exciting things were happening in cosmic-ray research. The nature of the rays had just been settled by establishing that they were particles, not gamma photons; a significant fraction had been demonstrated to have incredible energies, and there were startling bursts or showers that became more frequent with altitude. Where and how they originated was the great question, one that attracted adventurous minds. That their intensity was to some extent modulated by the geomagnetic field naturally aroused DTM interest; indeed the first evidence for their particulate nature came from the lower intensities observed near the geomagnetic equator, a property conflicting with a photon nature. It was evident from the start that studying cosmic rays required global observation, and as the Department was preeminent in this kind of operation and the Mount Wilson Observatories preeminent in astronomy, the formation by President Merriam of the Committee on Coordination of Cosmic-Ray Investigations in December 1932, at the request of Robert A. Millikan and Arthur H. Compton, came naturally. The Committee was successful in achieving the goals of coordination and sponsored a highly visible stratosphere balloon experiment for the 1933 Chicago "Century of Progress" world fair.
Forbush entered the field when a Compton-Bennett precision cosmic-ray meter was made available for the Coast and Geodetic Survey magnetic observatory at Cheltenham, Maryland in December 1934 and he was assigned to install it and supervise the collection of data. Another was set up at the Huancayo Observatory shortly thereafter. The Compton-Bennett meter is a 19.3 liter ionization chamber filled with argon at 50 atmospheres pressure in a steel bomb surrounded by lead shot to filter out the comparatively weak terrestrial radioactive background. Ions
produced by the incident particles are collected on a charged central electrode and approximately balanced against those produced by a radioactive thorium source. The difference passes to an electrometer, which is periodically grounded and which records continuously through its mirror onto light-sensitive bromide paper. These meters quickly became the standard instrument for measuring highly penetrating radiation and are still in use. Their main virtue is the ability to compare intensities from a wide distribution of locations and spanning long intervals. Prior to this assignment Forbush had worked on reducing magnetic and gravity data and had thereby come under the influence of guest investigator Julius Bartels, who made him aware of the power of careful statistical analysis of geophysical data. This set his scientific course for the remainder of his life, for he was by no means content to operate the meter merely to collect data for the Committee, and he began a careful analysis of its output and of that of the meter at Huancayo. By 1936 he had clarified the application of barometric corrections, showing that the "corrections" previously used were not corrections at all but simply consequences of statistical fluctuations. With accurate barometric corrections he was able to examine the diurnal variation and show a real effect. Previous observers had attributed what they saw to variation of air temperature, but Forbush proved that there was no justification for the air-temperature corrections theretofore applied. The amplitude of the diurnal effect was 0.2% of the total cosmic-ray intensity, and the same value was obtained from the Huancayo data. Shortly thereafter P. M. S. Blackett at Cambridge showed that the varying altitude at which μ-mesons are generated in the upper reaches of the atmosphere would, because of their short lifetime, result in unwanted diurnal and seasonal variations, which could not be reliably calculated and removed.
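A barometric correction of the kind Forbush refined can be sketched as a least-squares fit of counting rate against pressure, with the fitted pressure dependence then subtracted out. A toy version with synthetic data (the coefficient and noise levels are invented for illustration, not Forbush's values):

```python
import random

random.seed(7)
beta_true = -0.002       # fractional intensity change per hPa (invented, typical order)
p0, I0 = 1013.0, 100.0   # reference pressure (hPa) and rate at that pressure (arbitrary)

# Synthetic hourly records: pressure wanders, rate responds, plus counting noise.
pressure = [p0 + random.gauss(0.0, 8.0) for _ in range(500)]
rate = [I0 * (1.0 + beta_true * (p - p0)) + random.gauss(0.0, 0.05) for p in pressure]

# Ordinary least-squares slope of rate against pressure.
n = len(pressure)
pm = sum(pressure) / n
rm = sum(rate) / n
slope = sum((p - pm) * (r - rm) for p, r in zip(pressure, rate)) / \
        sum((p - pm) ** 2 for p in pressure)
beta_hat = slope / I0    # recovered barometric coefficient

# Pressure-corrected record: the residual variation after removing the fit.
corrected = [r - slope * (p - p0) for p, r in zip(pressure, rate)]
print(round(beta_hat, 4))  # close to -0.002
```

Only after such a fit removes the real pressure dependence can small signals, like a 0.2% diurnal variation, be trusted as physical rather than atmospheric.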
Forbush avoided diurnal variations for about 30 years before returning to them to good effect. Careful statistics allowed him to disprove evidence advanced for an anisotropic distribution, claimed to result from the motion of the solar system through the galaxy, by showing the absence of a sidereal diurnal effect. Similar techniques allowed him to see a quasi-persistent 27 day variation associated with the rotation of the Sun. A minimum of statistics was needed for his discovery in the records of a magnetic storm that occurred on 24 April 1937, recorded at the two stations having Compton-Bennett meters. A decrease in the horizontal magnetic intensity of 0.5% was closely associated with a decrease in cosmic-ray intensity of 4%. On 21 August 1937 a second magnetic storm occurred that had no effect on cosmic-ray intensities at the two stations, nor at the Mexican station at Teoloyucan, which had recently been added to the network. As records of storms and associated cosmic-ray intensities, with and without decreases, were collected, the suspicion grew that the effect of
Cosmic rays
structure in the magnetic field well above the atmosphere was being observed, a field of study that was to make great strides when probed with rockets. The decrease of cosmic-ray intensity during magnetic storms quickly acquired the appellation “Forbush decrease” or “Forbush effect” (Fig. 14.1). In the midst of this early work came an order from the Director for Forbush, who was employed as an observer, not as a scientist, to become Observer-in-Charge of the Huancayo Observatory for a three-year period. Needless to say, he wished to continue the successful path that he had begun and managed to get the order canceled, but with the threat that he would later have to take over the same duties at the Australian observatory at Watheroo. The Cosmic-Ray Committee was sufficiently impressed with his studies that they persuaded President Merriam to support his studies directly from the Institution for a few years, although the work continued to be done at DTM. During this early period Forbush observed a sharp increase in cosmic rays coincident with a solar flare. The behavior was different from the usual magnetic storm effects not only in the increase but also in the rapidity of the change. Being uneasy with a single event, he postponed the report for a decade until he had observed another. These events were the first evidence that solar flares could accelerate particles to energies of billions of electron volts. The mechanism by which solar magnetic fields can do this was worked out by W. F. G. Swann, the long since departed DTM staff member and director of the Bartol Institute. During the war Forbush applied his mathematical skills at the Naval Ordnance Laboratory to problems of protecting ships from magnetic mines and of detecting submarines from their magnetism, but he soon became competent in the new field of operations research, which made good use of his knowledge of statistics and presented him with new situations on which to sharpen his mathematical skills.
He had become fascinated by statistics and occasionally took courses at Johns Hopkins and George Washington Universities. Tuve became Director after the war and in reorganizing the Department was not pleased with Forbush’s approach to science. Tuve had not viewed the old methods of “Terrestrial Magnetism” with favor. Its slow, patient accumulation of data followed by wearying reduction and analysis was just the kind of science he did not want. He favored cosmic-ray studies and recognized Forbush’s contributions, but wanted to see new observational methods planned, not just reliance on the same cosmic-ray meters turning out numbers day after day. He wanted something daring, but Forbush saw there was much to be learned from the data that came in so reliably and from older results that needed to be given a firmer footing using new insights. Furthermore, Tuve was irritated by Forbush’s highly independent ways: he frequently undertook to give courses at various universities or worked on matters sometimes unrelated to geophysics. Operations analysis with a
Figure 14.1 Forbush decreases. Scott Forbush was assigned the duties of collecting data and maintaining a Compton-Bennett ionization chamber. He expanded the assignment to ever-increasing statistical analyses of the data from a worldwide array of these instruments. These data show his discovery of a correlation between cosmic-ray intensity (solid line) and a strong temporal variation of the Earth’s magnetic field (dashed line) owing to a magnetic storm. 1937.
Figure 14.2 Evidence for Forbush’s discovery of a 22 year solar cycle in cosmic rays. This was one of the results of his continual “data taking” with standard instruments over long periods.
British–American group during the early 1950s brought an ultimatum that returned Forbush to science. Forbush never made it easy for his bosses. Forbush returned to the critical examination of the effects that he had discovered and particularly to the weak 27 day and diurnal variations. For the latter he found a method that allowed the μ-meson problem to be circumvented, and from the long accumulation of data he established a modulation of the diurnal variation by the 11 year sunspot cycle. Finally, in work completed after his official retirement, he demonstrated and successfully defended the presence of a 22 year component (Fig. 14.2). Theories seeking to explain the effects that Forbush was discovering abounded, but although they interested him, he did not contribute significantly to them. His goal was to get the facts straight. A moderate understanding came as observations made by spacecraft began to augment those taken near the Earth’s surface. Research in cosmic rays ended with Forbush’s departure from the scene, which in fact came only with his death, a few days short of his 80th birthday. Liselotte Beach, his assistant after Isabelle Lange’s retirement in 1957, continued to reduce data, which he studied in retirement. A genial personality, he prized his scientific and personal independence highly and fought to protect them. He loved music almost to a fault, and the Institution paid its last respects to him at a concert in the Elihu Root Auditorium by a string trio playing Mendelssohn and Beethoven.
15 THE PROXIMITY FUZE AND THE WAR EFFORT

When war enveloped Europe in 1939 the nuclear physics group of the Department recognized the serious crisis that faced Western civilization. Atop their concerns as citizens lay those imposed by their knowledge of what was transpiring with uranium fission. They were themselves active in obtaining data that went primarily to Fermi for his far-ranging calculations. Roberts published a paper sketching the possibilities of fission as a source of power, and back-of-the-envelope calculations of bomb designs dominated many informal discussions. The sudden and unexpected defeat of France, followed by the Battle of Britain in the summer of 1940, brought all of these concerns to a rapid focus. The group – Tuve, Hafstad, Roberts and Heydenburg – sought a research goal with direct application to preparing the nation for the struggle they were sure was coming. There was little interest in the atomic bomb project that was coming into existence, although they cooperated in making the measurements wanted by others. They estimated that years would be required for its achievement, not to mention an enormous industrial commitment that might prove to be a misdirection of effort. Tuve’s connections with the Navy, established during the ionosphere work, soon had him in consultation with ordnance officers, who were quite receptive to his offer of research talent. The Navy’s great concern was the vulnerability of ships to air attack. The spectacular and public demonstrations after World War I by General Mitchell had established that an airplane could sink a battleship. Given the completely unrealistic conditions of the tests, the Navy remained unconvinced that the Army Air Corps could dispatch surface ships to oblivion, but it was nevertheless extremely anxious to improve the antiaircraft defenses of the fleet.
To this end, air-warning and fire-control radars were already beyond prototypes, but the elements that went into assuring a hit on an air target were wanting. They could with some assurance cause an artillery shell to pass within a few meters of the target, but the accuracy with which the fuze could be set to cause it to burst at the desired point was poor, and only a burst within 5–10 meters would suffice. What the Navy wanted was a fuze that would sense the proximity of the plane and cause the projectile to explode at the right distance.
The Department of Terrestrial Magnetism
Tuve thus had a defense project for his group in summer 1940, and it had the unqualified support of the new Institution President, Vannevar Bush, who had just become chief of the newly established National Defense Research Committee (NDRC), to whom President Roosevelt had given the task of mobilizing the nation’s scientific potential. With an atomic bomb in the background, any kind of air defense took on added significance. Discussions with his colleagues quickly followed, listing the methods by which an aircraft might be sensed: acoustic, optical and radio. There was no agreement as to which of these furnished the best approach, but there was agreement that whichever they selected would have to use electronics. This meant a vacuum-tube circuit would have to be mounted in a small fraction of the volume of an artillery shell and be capable of withstanding the shock of firing. It was this latter point that seemed the worst, because there was a general feeling that electron tubes were simply too delicate to stand the acceleration of firing. Roberts noted that the only reason for believing this was that everyone said so, and he returned to the laboratory in the evening, attached a cheap vacuum tube to a lead brick, suspended it from the ceiling as a ballistic pendulum (well known in elementary physics) and fired a bullet at it. The tube still worked, and the calculated acceleration was 5000 times gravity. The next morning he cast a hemisphere of lead, mounted a tube to it and dropped it from the roof of the new Cyclotron Building onto a steel plate. Again a success, this time with 20 000 times gravity, calculated from the amount of flattening of the lead hemisphere. With that the fuze project was on. The obvious next step was firing vacuum tubes and other electronic components from a gun. The machine shop made a muzzle-loading, smooth-bore cannon large enough to hold a projectile containing a miniature tube.
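The accelerations Roberts quoted follow from simple kinematics: a body dropped from height h strikes at speed v = √(2gh) and, if it stops over a deformation distance d, decelerates on average at v²/2d, so the g-load is simply h/d. The roof height and flattening distance below are assumed for illustration (the chapter gives neither); they are chosen only to show how a figure like 20 000 times gravity arises.

```python
import math

# Back-of-the-envelope sketch of the drop test. Height and flattening
# distance are hypothetical values, not taken from the chapter.
g = 9.81          # m/s^2
height = 10.0     # m, assumed drop height from the roof
flatten = 0.0005  # m, assumed flattening of the lead hemisphere

impact_speed = math.sqrt(2 * g * height)   # speed just before impact
decel = impact_speed ** 2 / (2 * flatten)  # average deceleration while stopping
g_load = decel / g                         # deceleration in units of gravity

# Note that a = v^2/(2d) = g*h/d, so the g-load reduces to height/flatten.
```

With these assumed numbers the ratio h/d is 20 000, matching the order of magnitude Roberts inferred from the flattened lead.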
Hafstad bought a canister of black powder at a local hardware store, and they took the gun to the farm of a friend of Tuve’s near what is today the suburb of Vienna (Virginia). Here they fired a potted tube straight up and extracted it from the projectile when it fell to the ground. The result: failure. The glass envelope survived, but the electrodes were collapsed within it. Ordnance officers explained that black powder explodes and imparts a terrific acceleration at the beginning; what they needed was smokeless powder, and the Navy furnished them a suitable gun (Fig. 15.1). This time the experiment was a success. At the time these experiments were taking place the Tizard Mission arrived in Washington. This was an event of a most extraordinary kind. An agreement had been made, over serious objections on both sides, between Prime Minister Churchill and President Roosevelt for the two powers to share technical secrets. The British, who initiated it, were sure that the USA lagged in radar and gambled on an eventual US intervention, for which they wanted US radar to be proficient. They also desired to let contracts to US electronics companies,
Figure 15.1 The DTM cyclotron and a 37 mm gun. In the early work on the proximity fuze, vacuum tubes and other components were fired vertically from this gun, removed of course from its carriage. This picture was apparently taken surreptitiously before returning the piece to the Navy. The incomplete status of the cyclotron sets the year as 1943.
which of course meant that the secrets had to come out. The best plan seemed to be to disclose what they had and see what they might get in return. The first exchanges took place on 9 September 1940. In general, their radar was not superior to what had been developed in the USA, although they were much further along in deploying it, but they brought the amazing high-frequency generator, the resonant magnetron, which they had invented only months before and which had obvious and important possibilities. Many secrets were exchanged during the meetings, some of which were held in the Institution’s headquarters on P Street, and two of them led to immediate and far-ranging activity. One caused the establishment of the MIT Radiation Laboratory, built on a core of excellent microwave engineers who needed the powerful magnetron as the basis for radar designs. The other added a key design element for the proximity fuze project, Section T of Bush’s new NDRC. The proximity fuze had been studied in Britain but with very little support, primarily because of the prejudice in high government circles against antiaircraft artillery. Nevertheless, one of their best electronics men, W. A. S. Butement, had designed a simple circuit for a radio fuze, which had been brought over by Sir John Cockcroft, a nuclear physicist well known to the DTM group. Tuve turned the circuit over to Roberts, who had one
lashed up the next day in the laboratory, where its suitability became evident. The other two approaches, optical and acoustical, were not dropped, but the radio fuze took strong priority. A dramatic change took place at the Broad Branch Road address as dozens of new employees arrived to work in Section T. Testing activities moved from the farm to the Navy’s Dahlgren proving grounds on the Potomac, and by 29 January 1942 prototype fuzes had been successfully fired with only half of them duds, the culmination of numerous intermediate development stages. At this point the project became one of production, although design improvements continued. A matter of importance was ensuring that the high-explosive projectile did not explode in the gun tube, killing or injuring the gunners, so a multiple safety system was designed to prevent this from happening. Next came the reduction of duds to fewer than 5%, predominantly through quality control of the industrial product. It was a goal not easily achieved. On 12 August 1942 tests were made on USS Cleveland in antiaircraft trials in Chesapeake Bay. Roberts was aboard and describes what happened. The next day all was ready off Tangier Island and a drone approached on a torpedo run. At about 5000 yards the ship opened fire with all its 5-inch guns. Immediately there were two hits and the drone plunged into the water. Commander Parsons called for another drone and out it came on a run at about 10,000 ft altitude. Once again it came down promptly. Parsons called for another and then raised hell when the drone people said there were no more ready for use. He enjoyed this very much as he had been on the receiving end of a lot of comments by the drone people in other firing trials. The drone operators had one back-up drone ready in case of troubles but they never expected to have one shot down. In fact the Navy photographic crew who took pictures of all the firing trials of the fleet had never seen a drone shot down before.1
By spring 1942 Section T had outgrown DTM, with employees spilling out of the Cyclotron Building and the Standardizing Magnetic Observatory. A garage in Silver Spring was rented and the entire operation moved to more spacious quarters. At the same time the Institution transferred administrative responsibility to Johns Hopkins University, which organized it as the Applied Physics Laboratory; Tuve became the first director. By the end of the war there were 1335 employees. It had been seen earlier that fuzes for rockets and bombs presented different problems from those for guns, so that part of the project was transferred to the Bureau of Standards a few blocks away, with Harry Diamond in charge. On 5 January 1943 the cruiser USS Helena destroyed a Japanese bomber with the fifth round using an industrially produced fuze, 28 months after the first discussions of its feasibility (Fig. 15.2). Production quickly met demand, and the fleet gained a powerful defense combination in radar-directed fire with proximity-fuzed shells. Until late 1944 use was permitted only over the sea because of the danger of dud shells being recovered on land by the enemy, disclosing the secret and possibly allowing them to devise countermeasures.
Figure 15.2 Schematic diagram of the proximity fuze. The topmost element (33) is one electrode of an oscillating dipole antenna that radiates an 80 MHz signal, the shell body being the second electrode. Identified in the drawing are the oscillator, amplifier, battery and booster charge.
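Although the chapter does not spell out the sensing mechanism, published accounts of the radio fuze describe it as a Doppler device: the wave reflected from the target beats with the shell’s own oscillator, and the fuze fires when the audio-frequency beat grows strong enough. A rough sketch of the numbers, using the 80 MHz of the schematic and an assumed closing speed:

```python
# Sketch of the Doppler-beat arithmetic usually cited for radio proximity
# fuzes. The closing speed is an assumed value for illustration only.
c = 3.0e8              # m/s, speed of light
f_tx = 80e6            # Hz, oscillator frequency from Fig. 15.2
wavelength = c / f_tx  # 3.75 m

closing_speed = 600.0  # m/s, assumed shell-target closing speed

# Reflection doubles the Doppler shift: f_beat = 2 v / lambda.
f_beat = 2 * closing_speed / wavelength
```

The beat lands in the audio range (hundreds of hertz for typical closing speeds), which is consistent with the fuze needing only a simple amplifier of the kind identified in the drawing.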
Fuzes were made for use by field artillery to provide air bursts against personnel in the open or in light entrenchments. Air bursts had been used for years by artillery, but they required observers to adjust the height of the bursts, which depended on time fuzes. The new fuze, given the cryptic designation VT for variable time, allowed such fire to be conducted in darkness or fog or on targets beyond the view of observers. Fuzes were released for ground use at the Battle of the Bulge in December 1944 and on Okinawa in April 1945. The most spectacular use of the fuze was against the German V-1 flying bombs that were aimed at London beginning in 1944 shortly after D-Day. These were very destructive and came randomly in time with little chance for the population to take cover. Once British and US antiaircraft guns were deployed along the coast of southern England, equipped with automatic-tracking radar, automatic-tracking guns and VT fuzes, 95% of the bombs were stopped, the majority by gun fire. The destruction from which London was saved was incalculable. With the end of the war the Applied Physics Laboratory was a going concern, already on its own path. It was the country’s third largest military-technical project, after the MIT Rad Lab and the Manhattan Project. A total of 22 073 481 fuzes had been manufactured, calling on the services of 111 companies. The call of personal ambition for Tuve as well as the other DTM people was clear – go on to greater, even more prestigious responsibilities – but the call heard by him, Roberts and Heydenburg was to return to the Department, a return to personal, not administrative science. Hafstad remained at the Applied Physics Laboratory, becoming its director. The Department personnel’s contribution to the war effort did not end with the proximity fuze. Heydenburg continued making measurements of nuclear quantities for the Manhattan Military District until all such research was concentrated in spring 1943 at Los Alamos.
Generally these were determinations of fission cross sections for fast neutrons of known energy produced with the 1-MV machine, using microscopic samples of separated isotopes of uranium and plutonium. The samples were only stains on glass slides, about which he knew neither the quantity nor the isotope. He returned the raw data to the same courier who brought the samples. When such work outside Los Alamos was terminated, he elected to join the proximity fuze work. Philip Abelson had come to the Department from Berkeley to aid in the building and the use of the cyclotron. Unraveling the mysteries of what happened when uranium was irradiated with thermal neutrons had been his thesis topic, and he had devised a method that would have given him the answer had Hahn and Strassmann not obtained it with chemistry. He had retained an interest in fission after coming to DTM and returned briefly to Berkeley in spring 1940, when he collaborated with Edwin M. McMillan in
discovering the first trans-uranic element, neptunium. When Tuve, Roberts and Hafstad began the proximity fuze development, Abelson began his own research toward a method of separating U-235 from natural uranium by thermal diffusion with equipment furnished by the Naval Research Laboratory – and independent of the Manhattan Project. His method succeeded and had to be used for the first bomb when separation at the Oak Ridge diffusion plant was late. He also chose to return to the Department. Essentially everyone at DTM was engaged in war work, either in the Department or on leave of absence. Berkner had set up the network of ionosphere sounders that provided the data for predictions of radio propagation, so necessary for military communication, and that became organized as the Wave Propagation Committee under the Joint Chiefs of Staff. DTM took on the operation of the network and the data reduction; the Bureau of Standards disseminated the propagation predictions to the armed forces. (This work has been described in Chapter 8, on the ionosphere.) Berkner’s organizational ability soon advanced him from that to the position of Director of Electronics Material for the Navy Bureau of Aeronautics, and Harry Wells took over responsibility for running the worldwide network of stations. Berkner elected to return to the Department after the war but his taste for the management of large scientific and technical projects had been whetted by wartime experience whereas Tuve’s had not. The Department’s magnetic competence was also in demand, and by the end of hostilities there were 164 temporary employees working out of the Broad Branch Road campus. Contracts were accepted from the Signal Corps for operating the ionosphere stations, with the Bureau of Ships and the US Maritime Commission for magnetic charts and compass improvements. These contracts were undertaken for cost and without overheads, and Carnegie employees served without charge to the government. 
The Department also constructed at its own expense an addition to the machine shop for the war work. Vestine prepared the charts, and A. G. McNish undertook compass improvements.
16 THE TUVE TRANSITION
There were indications in 1946 that postwar science would differ significantly from what had gone before. Wartime research had had unlimited financial support and had presented the country with astounding new weapons and useful applications. If such generous support yielded valuable weapons, similar support would surely produce remarkable progress in pure science, many branches of which were becoming increasingly dependent on ever costlier instruments. These thoughts were rampant among the scientists returning to their university campuses, and it was from the Federal government that they expected to obtain this new funding, expectations that were indeed fulfilled. Furthermore, in a social experiment never tried before, Congress passed the GI Bill of Rights that, among other benefits, gave war veterans the chance to obtain a university education, and a large, enthusiastic crop of new scientists resulted. The Institution was not yet aware of what all this meant for their mode of research, but as the decades slipped by and a radically new way of doing science emerged, it would find the ability to adapt. John Fleming retired as Director of the Department after having presided over fundamental changes in its scientific course; it would have been scarcely recognizable to his predecessor. He was followed in that office in 1946 by Merle Tuve, who planned to make even greater changes. In this he was in accord with Vannevar Bush with whom he had worked closely for five years. 
They had both returned from wartime service with enhanced reputations and prestige; both had high goals for the Institution and the Department; both had had their self-confidence reinforced, not that there had been any evident suggestion of doubt; both rejected government financial support with certain exceptions; and Tuve, who had been leader of one of the largest engineering laboratories in the world, became an enemy of “big science.” As a Carnegie director he had very strong powers that allowed him to hire and fire and to allot funds pretty much as he wished, so long as the President and the Trustees were not opposed – the same rights that had been fully exercised during the tenures of Bauer and Fleming. Added to this was Tuve’s time as leader of an enterprise for which he had complete responsibility, and the path he would take for reforming the Department can well be imagined. Terrestrial magnetism as such was abolished. The Huancayo and Watheroo Observatories were turned over to the Peruvian and Australian governments,
respectively, which continued to operate them, as had been intended when they were established; their termination freed funds for other purposes. As mentioned earlier, Vestine was assigned the task of compiling 40 years of data into two magnificent volumes, leaving an enduring scientific legacy of the highest form. He made use of an IBM automatic punched card calculator in reducing these data to final form.1 This was a digital computer in which the code for making the calculations was entered on punched cards, each card having a single instruction. Cards functioned as the memory for the code and the data. There was to be a dramatic end to the Department’s research in atmospheric electricity, done with real flair. The electric field gradients for which DTM had collected enormous amounts of data at sea and on land required a source of negative electricity to the Earth for their maintenance, a source for which indirect evidence pointed toward thunderstorms. The total current that the air conductivity sustained from this gradient required the contribution of about 1 ampere from each of the thunderstorms active worldwide. If one could determine the sign and magnitude of the current within a number of thunderstorms, the matter could be settled quantitatively. Tuve saw that the capabilities of the huge B-29 bomber allowed the transport of the necessary instruments into the very top of such storms, which he found better than requiring the observer to wait passively for storms to pass over his specially prepared instruments. During the thunderstorm seasons of 1947 and 1948 the Army Air Forces were persuaded to place a suitable aircraft from Tinker Field, Oklahoma at the disposal of G. R. Wait and O. H. Gish, who equipped it with electrical gradient and air conductivity apparatus. They made 61 adventurous passages at altitudes up to 14.5 km into 21 storms and found the average current to be about 1 ampere per storm and of negative sign.
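The "about 1 ampere per storm" budget can be reproduced with rough, textbook-scale numbers: the fair-weather air-earth current density integrated over the globe must be balanced by the storms active at any moment. The current density and storm count below are standard-order assumptions for illustration, not figures taken from this chapter.

```python
# Rough current-budget arithmetic behind "about 1 ampere per storm".
# The current density and storm count are assumed, textbook-scale values.
import math

current_density = 2e-12   # A/m^2, assumed fair-weather air-earth current density
earth_radius = 6.371e6    # m
earth_area = 4 * math.pi * earth_radius ** 2   # ~5.1e14 m^2

total_current = current_density * earth_area   # total fair-weather current, ~1 kA
active_storms = 1000                           # assumed storms active at any moment

per_storm = total_current / active_storms      # order of 1 A per storm
```

That the B-29 flights measured roughly 1 ampere, and of negative sign, is what closed this global circuit quantitatively.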
Wait extended these techniques to sampling stratospheric dust on a regular basis and brought the first evidence of the explosion of a Soviet atomic bomb. Nuclear physics, the ever-growing part of the Department’s research until 1940, proved somewhat of a puzzle for the new scheme, but one that worked itself out. There was first of all a group of very competent nuclear physicists who wanted to return and whom Tuve held in high regard: Abelson, Cowie, Heydenburg and Roberts. Second, the Department had three scientifically valuable accelerators that represented significant investment in capital and time, and the cyclotron had momentarily no superior in energy or beam intensity. But Tuve found the new world of nuclear physics not to his liking. Accelerators were being planned nationwide, around which excellent centers of nuclear research would evolve. Furthermore, Tuve had missed five years of highly active research in the subject. Whereas the MIT Radiation Laboratory and the Applied Physics Laboratory functioned to a great degree using well-established scientific knowledge that had already been codified
into engineering practice, the Manhattan Project had made many advances in the underlying science of nuclear physics, and Tuve and his colleagues, Heydenburg excepted, had not shared in them. To whatever extent these matters weighed in Tuve’s decision, they led him to remark often: “I got out of nuclear physics when it changed from a sport into a business.” To a great extent the problem of nuclear physics resolved itself. The cyclotron and its building had been built expressly for manufacturing and using radionuclides for biological and medical research, and the group had collaborated before the war on biology experiments. Cowie had come to the Department by way of the National Cancer Institute and had taken on prime responsibility for building the cyclotron during the war with biological research the goal. Dr. Winifred Whitman’s early work on the effect of gamma radiation on rats had been the first example of the growing interest. Thus when Abelson, Cowie and Roberts announced that they wanted to enter wholeheartedly into the new field of biophysics, they got an immediate go-ahead from the Director with skeptical approval from Bush. The bio group would use the cyclotron as a factory for radioisotopes but had little use for the Van de Graaff machines. This was not a problem because Heydenburg did not like the cyclotron. It had an internal beam that was orders of magnitude more intense and much higher in energy than the electrostatic machines, but its beam was just that – internal to the machine. A target placed at the edge where it intercepted the outward growing spiral beam was still within the magnetic field; the target received so much energy that it generally had to be water cooled and could produce profuse quantities of radioisotopes and neutrons, but extracting a suitably focused beam out into the laboratory in order to perform an experiment of the kind needed for nuclear spectroscopy was very difficult and never accomplished. 
Furthermore, any experiment had to be done in the presence of tremendous background radiation that would frequently overwhelm the detectors and make close access to them by the experimenter impossible. And, finally, the beams generated by the cyclotron had only one energy (different according to the particle being accelerated), and this energy had significant scatter in its value. Heydenburg wanted none of this and outlined new work for the two Van de Graaffs, for which he got a go-ahead. (During the next decade a new cyclotron design overcame the handicaps of the old, constant-field machines without sacrificing the advantages of high currents.) If Tuve had seen his skills in nuclear physics tarnish a bit during the war, he had seen his radio skills polished, and he began searching for the best application. Berkner had returned after his distinguished service as the Executive Secretary of the Joint Research and Development Board, and Tuve made him chairman of a section on exploratory geophysics, where he took up ionosphere research once again; but he accepted the call to become President of Associated Universities in 1951, placing him at the head of “big science.” The
following decade saw many disputes over big and little science between Tuve and Berkner. There had been reports of cosmic radio noise in observations by Karl G. Jansky, followed up by Grote Reber before the war, and these signals had imposed themselves from time to time as problems for the radar engineers, problems that had subsequently become the basis for postwar research. It was a field in which the British forged ahead, but one that Tuve felt complemented the astronomy of the Institution’s Mount Wilson Observatory, and he set out to enter radio astronomy. After the war television became one of the new experiences of the nation, and whether for good or evil remains a matter of dispute. Tuve noted a matter for which there was no dispute: when photographic newsreel cameramen worked alongside television news men, the former required more light for useful exposure than did the latter. It was a basic fact that only a few photons were required to create a television picture element, but hundreds of photons were required to make a single blackened grain in a photographic emulsion. This had a significant meaning for astronomy. One needed to use the enormously increased sensitivity of photoelectric devices, generally one photoelectron for every five photons, to enhance the sensitivity of astronomical imaging devices, for if one increased the sensitivity by a factor of nine, one in effect tripled the size of the telescope! Clearly this was something for the physicists at the Department to undertake. Tuve did not wish to leave the study of the Earth; indeed he had extensive plans for geophysics in three new directions. Perhaps as a consequence of his military activities, he considered using surplus wartime explosives and the friendship gained with the Navy for studying the continental shelves by setting off large explosions in the ocean and observing the seismic waves that arrived at a network of stations situated on land just for this geophysical experiment.
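The factor-of-nine remark about telescopes follows from a simple scaling: a telescope's photon-collecting area, and hence its detected photon rate, grows as the square of the aperture diameter, so a ninefold gain in detector sensitivity is worth a threefold increase in aperture. A minimal sketch of that scaling, with the function name chosen here for illustration:

```python
import math

# Collecting area scales as diameter squared, so detected photon rate does
# too. A sensitivity gain of s is therefore equivalent to enlarging the
# aperture by a factor of sqrt(s).
def equivalent_aperture(diameter_m, sensitivity_gain):
    """Aperture that would collect as many detected photons without the gain."""
    return diameter_m * math.sqrt(sensitivity_gain)

# A 9x sensitivity gain on a 1 m telescope acts like a 3 m telescope.
d_equiv = equivalent_aperture(1.0, 9.0)
```

This is exactly the arithmetic behind Tuve's exclamation: improving the detector is far cheaper than tripling the mirror.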
There were obvious uses for radioactive decay in studying the ages of rocks and of the Earth itself, and mass spectrometry was the technique that would allow this field to be opened up. It was, in fact, a field that had yet to have its basic discipline established, and mass spectrometers were a physicist’s instrument; the subject was even generally classified as nuclear physics in the 1940s and 1950s. This too got a green light. All these new fields of endeavor will be described in chapters devoted to them, but there was a third geophysical subject, involving terrestrial magnetism, best dealt with here, as it ended after less than a decade of effort. It began with great hopes but was abandoned short of a possibly glorious conclusion, one that others reached within a decade of its termination. Before the war McNish and E. A. Johnson had begun research in paleomagnetism. Tuve had looked with favor on the subject as a way to examine
The Tuve transition
Figure 16.1 John Graham’s mobile rock sampling laboratory. Graham measured the residual magnetism in samples, which were generally gathered by coring into the rock. The determination of the amount of magnetism was carried out in the Standardizing Magnetic Observatory in very delicate experiments. 24 January 1951.
the history of the geomagnetic field and possibly answer questions about polar wandering. In 1947 John W. Graham, a doctoral student in geology from Johns Hopkins, came to the Department to do the research for his thesis in the subject, but McNish and Johnson left the Department at about the same time, and with their departure he was the only person working on the subject. The approach was to measure the remanent magnetism of sedimentary rocks for which a fair idea of age could be ascertained. The initial observations were successful. Tuve pushed the work and secured a special grant from the Carnegie Corporation for it. Graham learned to measure reliably the magnetism of rocks, establishing the strength and direction of their fields, but he soon found disturbing complications (Fig. 16.1). The rocks began to disclose indications that they did not retain the direction and magnitude of the field in which they formed. Folded strata gave clear evidence that pressure affected the remanent magnetization. Graham undertook laboratory experiments that raised additional doubts about the stability of his samples, and these negative results steered him toward the study of the mechanisms of rock magnetism. He and Tuve could not agree on a course of research to get them out of this tangle, and he left the Department in 1958 for a position at Woods Hole Oceanographic Institution.
But the story was not finished. In less than a decade the most important discovery in geophysics of the century, indeed possibly of all time, was made using three disciplines for which DTM had led the way: mapping the geomagnetic field, measuring paleomagnetism and dating rocks through the isotopes they contained. One had been discontinued because it was seen as mere data gathering, another because little seemed to have been accomplished after seven years of effort. When postwar instrumental techniques were applied to gaining magnetic measurements at sea, they supplied key evidence for plate tectonics, and paleomagnetism substantiated these findings through the study of igneous rocks, which had stable remanent magnetism and for which isotope dating provided reliable ages. How different the story might have been had the Department at least continued to function as the clearing house for magnetic data. The strange wiggles that began to appear in the magnetometer traces could hardly have failed to attract the attention of E. H. Vestine, who would have been afflicted with sleepless nights until the effect was explained. That they might have resulted from remanent magnetism would certainly have invigorated and altered Graham’s work. The reader can draw his own moral about planning future science. In spite of one failed opportunity, the research of the Department prospered during those years. Explosion seismology and isotope dating opened lasting and valuable research sections, and the bio group did not “come back with their tails between their legs,” as Bush had predicted. Image intensification made a tremendous improvement in observational astronomy and eventually led to optical astronomy becoming a fixture of the Department. Radio astronomy had some fine moments but was soon outclassed by observatories with comparatively unlimited government funds. But these stories remain to be told.
Tuve participated personally in some of the projects and enthusiastically followed all of them. His leadership pointed the Department toward the basic form to which it has now evolved. Bauer had founded an international quarterly journal in 1896 entitled Terrestrial Magnetism, expanded three years later to Terrestrial Magnetism and Atmospheric Electricity, of which he, and later Fleming, served as editor. There being few other publications for papers on the physics of the Earth, subjects other than those indicated by the title appeared with growing frequency. The editorship devolved almost automatically on Tuve when he became director, but he changed the name to Journal of Geophysical Research. The number of papers slowly grew, but quarterly issue was maintained until 1958, the year the editorial and publication responsibilities were turned over to the American Geophysical Union (AGU), where it remains to the present. The transition to the AGU was triggered by an author–editor dispute between George Wetherill and Tuve that is described by Wetherill.
Figure 16.2 The DTM Lunch Club in session. The club was organized in 1948 as a method of combining lunch with conversation and has continued to the present day, despite or because of the requirement that the eaters take their turns as cooks. This photograph appeared in the Washington Post Magazine on 14 October 1984.

Back in the 50s, I refused to submit my geochronology paper to the DTM house organ (Journal of Geophysical Research). I said that no one would read it, because that journal contained only terrestrial magnetism and electricity. The Director became very disturbed, and with shaking hands (and a few wrong numbers), called an A.G.U. official on the phone and said “This is Merle, you know J.G.R. I’m giving it to you. Wetherill sticks his nose up at it. I’m giving the damn thing to you.”2
There arose during these early postwar years a social curiosity at DTM which deserves the description unique, something easily regarded as of minor
consequence but which has done much to shape the Department’s character, something widely admired but never copied. In fall 1947 – the exact date escaped recording – a few members of the bio group, tired of driving away for lunch or bringing it from home, devised an alternative. One of them would prepare a simple, hot lunch for all, buying the ingredients, caring for kitchen and table, and continuing these services for a week, then to be relieved by another. Others outside bio joined, and it proved to be an excellent way to keep persons in disparate lines of research informed and to share scientific gossip. Tuve had a kitchen installed in the Standardizing Magnetic Observatory, one component of which was a then-new dishwashing machine, without which it is doubtful that this kind of activity could have survived. With time most of the scientific staff joined, but Forbush was an exception. The Director joined under the proviso that he not cook but pay for his meals; he also made a point of being absent at least one day a week so the staff could conspire against him (Fig. 16.2). Surprisingly enough, the Lunch Club has endured to the present day. The 25th, 40th and 50th anniversaries of its founding were festively celebrated – at real restaurants. The extreme rarity at DTM of the academic bickering unfortunately all too common in universities owes much to the relaxed atmosphere the Lunch Club has created, perhaps a consequence of the psychological effect of cooking for the others and eating their food.
17 POSTWAR NUCLEAR PHYSICS
Prior to 1939 the public’s perception of nuclear physics came from Sunday supplement articles about splitting the atom, generally joined to articles about the wonderful possibilities of enormous sources of power from the atom. The headlines that announced the discovery of uranium fission in January 1939 added to this the ominous prospect of terrible explosives, but uranium disappeared from the newspapers with the outbreak of war, to return prominently with the use of atomic bombs against Japan. The reaction of the public and their elected officials was predictable: there must be a great secret that had to be retained at all costs. This attitude was reflected in one version of the bill – supported by Institution President Bush – that established the Atomic Energy Commission to replace the wartime Manhattan Engineering District; all research in nuclear physics was to be placed under military control. Fortunately, a quickly organized lobbying effort by scientists caused a much more sensible form to pass and be signed into law on 1 August 1946. Although Tuve thought nuclear physics had changed from a sport into a business, those who had access to accelerators found that the game still had plenty of sporting elements, and during the following decades they unraveled the structure of hundreds of nuclei and developed theories that explained them with success comparable to what had been and was being done for atoms and molecules. Furthermore, this branch of science – which moved further and further away from weapons and energy production – became crucial for the understanding of the origin and evolution of the universe. Heydenburg’s first task on returning was to get the two electrostatic machines back in operation, incorporate improved methods of voltage control and replace obsolete detector electronics.
At Breit’s urging he undertook a long, tedious attempt to improve substantially the measurement of proton–proton scattering for the lowest attainable energies. The earlier work had been reproduced by a simple nuclear potential function, but Breit thought there might be a long-range tail on the potential that would manifest itself in this experimental region and would, needless to say, have meaning for theory. Interspersed between other measurements, which included discovering the isotope 15C, this work went on for more than a year. No tail was found, and Heydenburg did not publish the results.
Some of the first reactions investigated were measurements of the angular distributions from two reactions that disintegrated through the emission of two alphas, which prompted the study of alpha–helium scattering, the inverse process that studied the same nuclear compound state, 8Be. This alpha–alpha scattering presented the most beautiful example of the pure quantum-mechanical effect called Mott scattering, the curious consequence of the impossibility of distinguishing target and incident particle when they are identical. This effect had had to be incorporated in the analysis of proton–proton scattering, but the Coulomb force between two alphas at energies below 0.4 MeV was sufficiently strong to prevent the nuclear force from having significant effect, and the pure quantum-mechanical effect dominated. At higher energies nuclear effects were noticeable, and the results were reproduced with a single function of energy, the S-wave phase shift, just as had been done for the proton–proton scattering 20 years earlier. Although this was a beautiful little experiment, it led to much more important things. By having mastered the techniques of producing beams of alpha particles, Heydenburg and new staff member Georges M. Temmer, who had begun working with him as a guest investigator in 1952, were ready to exploit in a grand manner a new nuclear interaction, Coulomb excitation. When two particles approach one another sufficiently closely for the nuclear force between them to cause coalescence into a compound nucleus, a highly excited state results that decays through a variety of modes. This is typical for protons or deuterons of a few MeV incident on relatively light atoms. States in this region, several MeV above the ground state, are very densely spaced, and their structure is best reconciled with theory statistically, in contrast to states that lie only slightly above the ground state.
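The Mott scattering mentioned above has a compact closed form. For identical spin-zero particles such as alphas the scattering amplitude must be symmetrized, f(θ) + f(π − θ), and with the pure Coulomb amplitude this yields the standard result (center-of-mass frame, Gaussian units; not quoted in the original):

```latex
\frac{d\sigma}{d\Omega} = \left(\frac{Z^2 e^2}{4E}\right)^{\!2}
\left[\frac{1}{\sin^4(\theta/2)} + \frac{1}{\cos^4(\theta/2)}
+ \frac{2\cos\!\bigl(\eta\,\ln\tan^2(\theta/2)\bigr)}{\sin^2(\theta/2)\,\cos^2(\theta/2)}\right],
\qquad \eta = \frac{Z^2 e^2}{\hbar v}.
```

The oscillatory interference term is the purely quantum-mechanical signature; below about 0.4 MeV it is essentially all that the alpha–alpha data show.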
Coulomb excitation opened a completely new method of studying these low-lying states in medium- and heavy-weight isotopes. The Coulomb force between incident alphas of a few MeV and such targets is so strong that the two do not approach one another closely enough for the nuclear force to interact. However, if the target nucleus is not spherically shaped, its non-symmetric electric charge causes it to be set in rotation by the incident particle – in discrete rotational states allowed by quantum mechanics, of course. This was doubly new: the Coulomb excitation mechanism and the unsuspected discovery of deformed nuclei. The initial 1954 discovery was not made at DTM, but Heydenburg and Temmer were ready and spent the next months in a rollicking series of experiments in which a new isotope target was studied seemingly every day. (By this time it was possible to obtain isotopically pure or enriched samples from Oak Ridge National Laboratory for the targets.) The experiment was simple: place a scintillation counter next to the target and observe the gammas having discrete energies that came from the decay of the excited states back to ground.
After this splendid series of experiments the Department’s nuclear physics program faced a double problem. Electrostatic accelerators of greater energies, available from a commercial supplier that had Robert Van de Graaff as one of its founders, were becoming common at university physics departments and other laboratories. Furthermore, nuclear spectroscopy showed every sign of becoming a discipline wherein a variety of well-understood techniques were employed to obtain basic data, which was not the kind of science thought suitable for DTM. This situation led them to consider exploiting a characteristic of the pressure-tank machine for which there was no real rival: the large size and structural stability of its high-voltage terminal. It could very likely be the home for an ion source producing polarized protons. Consider the scattering of a proton that has its spin perpendicular to the plane defined by the scattering. If the energy is high enough that the relative orbital angular momentum of target and proton is greater than zero, any force component proportional to the scalar product of the spin vector and the orbital angular momentum vector will cause a left–right asymmetry in the scattering. If the incident protons are not polarized, the usual case, the scattering will not be asymmetric, but the scattered protons will be polarized, the degree depending on the energy and scattering angle. Such an electromagnetic force, called the spin–orbit force, is present in atomic physics and causes the fine structure of atoms. An analogous nuclear force had been considered in the late 1930s and declared very weak by the theorists, who were bolstered by the erroneous interpretation of one experiment and the disregard of another, both done around 1940. It proved not to be a weak force but a strong one – a very strong one.
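In the notation that later became standard (the Madison convention, not used in the text), the left–right asymmetry described above is expressed through the analyzing power A_y:

```latex
\sigma(\theta,\phi) = \sigma_0(\theta)\,\bigl[1 + p\,A_y(\theta)\cos\phi\bigr],
\qquad
\varepsilon = \frac{N_L - N_R}{N_L + N_R} = p\,A_y(\theta),
```

where σ₀ is the cross section for an unpolarized beam, p the beam polarization, and N_L, N_R the counts in detectors placed left and right of the beam. The contour maps of Fig. 17.2 are maps of this A_y.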
Maria Goeppert Mayer was attempting to form a model of nuclei, concentrating on explaining why certain numbers of neutrons and protons formed very stable configurations, much the same as the electrons in noble gases. Her attempts succeeded when she ignored the wisdom of the decade and assumed the existence of a strong spin-orbit force. The existence of this strong force component was soon demonstrated directly by a scattering experiment. In it, one produced polarized protons by scattering them from helium; when these were subsequently scattered a strong left–right asymmetry was observed. Unfortunately, the intensity of a polarized beam so generated was extraordinarily weak, and ideas for producing an ion source capable of dispensing polarized protons occupied many, with more than a dozen published suggestions appearing during the following decade. Vernon Hughes at Yale was building such a source using the straightforward techniques of atomic beams and was interested in a collaboration that would put his source, when it worked, in the terminal of the Atomic Physics Observatory. It was just the sort of thing that would give nuclear physics at DTM a new life, and preparations were made to provide a source of electric power in the terminal sufficient to operate a small laboratory at high voltage.
The Yale work dragged on, and another idea tried out at the Department ended in failure, although it succeeded years later when conditions were altered. Tuve, Heydenburg and Temmer began to search for some alternative means of doing nuclear physics. It came in the form of a new kind of accelerator, a tandem Van de Graaff, which utilized negative ions, accelerated them to a positive terminal where the electrons were stripped off, and then gave them another push as they returned to ground potential. The third such machine had been purchased by Florida State University (FSU) as the basis of a very successful improvement in their research capabilities. FSU wanted a director for the new instrument, and in November 1959, after months of discussion, Heydenburg and Temmer went to FSU for three years as members of the faculty while remaining on the staff of DTM. At the end of that period the arrangement was to be reevaluated. In June of the following year Temmer encountered a group in Basel, Switzerland that had built a polarized-ion source that fitted the DTM pressure-tank machine perfectly. Tuve agreed to invite the University of Basel to build a second source, bring it to Washington and begin collaborative experiments. Beginning in fall 1961 Heydenburg returned for a year on a half-time basis to help Louis Brown, who had designed the source, install it in the terminal (Fig. 17.1). It was intended that Brown would remain long enough to get Heydenburg and Temmer familiar with the new technique, but Tuve had had second thoughts. After a few months Heydenburg was sufficiently pleased to decide that he wished to return full time. Tuve wanted him to devote half his time to radio astronomy, which on reflection he decided not to do, and he resigned from the Institution to become a professor at FSU, where he became department head and later rector of the university. Temmer became head of a new tandem laboratory at Rutgers University.
This left Brown and the Swiss collaboration in an awkward state to say the least. Tuve acknowledged Brown’s wish to extract science from the source, to which he had by then devoted four years, and granted him an indefinite temporary position. The alternative was abandonment of the project and the unique scientific capability of the accelerator. Research with a polarized-proton beam then began that extended for 15 years with a series of Swiss collaborators. Brown was eventually taken on to the staff. During these years no other polarized beam was available in the region below 3.5 MeV, and all of the classic nuclear reactions and scattering for which there was enough polarized current were reexamined. The polarized beam yielded an abundance of beautiful data – beautiful in the contour maps of polarization analyzing power that accrued – that simplified the complicated structures of many nuclei (Fig. 17.2). One of the nicest results came from the clarification of a long puzzling structure in the important though brief-lived nucleus 8Be. There was an excited state thought to be located very near the threshold for neutron emission when 7Li was
Figure 17.1 Schematic diagram of the polarized proton source as mounted in the high-voltage terminal of the pressure-tank Van de Graaff. Atomic hydrogen was formed by a radio-frequency discharge at A, allowing free atoms to pass into a quadrupole magnetic field at B. Owing to the large amount of gas fed into the discharge, four diffusion pumps, F and G, were used to remove it and secure a high vacuum in the quadrupole field. The resulting polarized atomic beam passed to the electron-bombardment ionizer at C, whence the polarized ions were injected into the accelerator tube. Ion getter pumps, H and I, maintained the vacuum at the ionizer. The source was designed and made in Basel, Switzerland, and mounted in the Carnegie machine in 1962. It formed the basis for a 12-year Swiss–US collaboration.
bombarded with protons, but observing the state with the emitted neutrons indicated it was very broad, about 1 MeV wide, whereas another examination using gamma emission indicated the width to be about 40 times smaller. From elastic scattering with polarized protons and phase-shift analysis it was learned that this threshold state was indeed only 0.025 MeV wide, but the neutron emission was so distorted by the nearness to the threshold that it appeared very broad in that channel; a long-standing mystery was cleared up and the theory of threshold states verified. The bombardment of tritium with protons yields a neutron and leaves a 3He residual. Early workers had measured the polarization of the emitted neutrons, and the DTM polarized beam was used to measure the polarization
Figure 17.2 Polarization analyzing power of the elastic scattering of protons from helium as function of proton energy and scattering angle. The contours present the fraction of a perfectly polarized beam scattered into a detector located at the angle given on the ordinate with the energy given on the abscissa. Note: there is an energy and angle at which almost the entire incident beam goes into one detector, whether left or right depending on the sign of the polarization. With few exceptions the results of the experiments with the polarized beam could be presented in this manner.
analyzing power of the reaction, showing the two effects to be identical, resulting in a minor theorem concerning the isospin quantum numbers. By the early 1970s the nuclear physics work at the Department had reached a familiar dilemma. The vein of nuclear ore accessible with the instrument at hand had been mined out. To continue doing this kind of research would
Figure 17.3 Examples of three designs of accelerator tubes used in the pressure-tank Van de Graaff shown at the rear. The left-most is a portion of the original tube that, owing to intrinsic flaws, never sustained voltages above 2.8 MV; the next was installed in 1951 and was electrically excellent, attaining voltages of 3.5 MV, but developed leaks far too often; the right-most was installed in 1971 and was in all respects satisfactory. From left to right: George Assousa, Richard Roberts, Dean Cowie, Louis Brown and Urs Rohrer (Swiss postdoc). 1973.
require a source of more modern design having more current and a greater degree of polarization than the 50% of the old Basel source. More elaborate and expensive target room equipment would also be needed, all at significant expense. Given that nuclear physics was not a research goal of the Department, which had supported it because of the unique scientific advantages offered by the pressure-tank machine, such an investment was not seriously considered. Some interesting work was done with the machine in heavy-ion X-ray investigations and in beam-foil spectroscopy, but these did not lead to an open research field, and the accelerator work was terminated in 1975. The Atomic Physics Observatory, a landmark of sorts, remains to this day (Fig. 17.3).
18 THE CYCLOTRON
The cyclotron represents a curious path in the history of the Department. During the 1930s particle accelerator development generally followed two paths: machines in which a high voltage is applied to a vacuum tube within which electrodes form an electric field, and machines in which the particles spiral outward in a quasi-homogeneous magnetic field, acquiring an increment of acceleration from a synchronously time-varying high voltage. The Van de Graaff machine belongs to the former class, which indeed it dominates to this day, while the cyclotron, which has given rise to a large number of devices that now provide the ultimate in high energies, dominated the latter. During those early years Tuve was the undisputed master of electrostatic machines while his childhood friend, Ernest Lawrence, held the same position for the resonance machines. In their early forms, Van de Graaffs with their well-focused mono-energetic beams became the instruments for nuclear spectroscopy, whereas cyclotrons became the factories of radioisotopes. Lawrence and Tuve both pushed the development of a larger machine as soon as one model had proved satisfactory. By 1939 Lawrence had built the largest accelerator in the world, called the 60 inch from the diameter of the magnetic field that controlled the motion of ions created at the center. The energy of the ions was determined by the field of the electromagnet, which was of the order of 15 000 gauss. The accelerating voltage was applied through a 20 MHz generator in step with the motion of the ions. With the energy picked up at each passage between the electrodes, called Dees because of their shape, the particle increased its orbital radius. Unfortunately, the magnetic and electric fields did not function well in holding the beam in the central plane of the magnet, which led to current loss.
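The quoted figures of 15 000 gauss and a 20 MHz generator can be checked against the cyclotron resonance condition f = qB/2πm, which is independent of the ion’s energy and radius; that independence is what lets a fixed-frequency oscillator stay in step with the spiraling ions. A quick sketch of the check (the constants and the choice of ions are my assumptions, not the text’s):

```python
import math

# Cyclotron resonance: an ion of charge q and mass m in a uniform
# field B circulates at f = q*B / (2*pi*m), independent of energy.
Q = 1.602176634e-19        # elementary charge, C
B = 1.5                    # 15,000 gauss expressed in tesla

def cyclotron_mhz(mass_kg, charge=Q, field=B):
    """Cyclotron frequency in MHz for the given ion mass."""
    return charge * field / (2 * math.pi * mass_kg) / 1e6

m_p = 1.67262192e-27       # proton mass, kg
m_d = 3.34358377e-27       # deuteron mass, kg

print(f"proton:   {cyclotron_mhz(m_p):.1f} MHz")   # ~22.9 MHz
print(f"deuteron: {cyclotron_mhz(m_d):.1f} MHz")   # ~11.4 MHz
```

Both values bracket the quoted 20 MHz, confirming that the field and generator frequency are of mutually consistent magnitude for light ions.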
To offset this it was necessary to make the high-frequency voltage as large as possible in order to complete the total acceleration as quickly as possible; this required a large gap between the pole faces of the magnet in order to avoid sparking. These characteristics yielded a beam of large current, sometimes over 1 mA at 16 MeV (16 kW of beam power!), which made the machine superior to any other for producing proton-rich isotopes directly or copious numbers of neutrons for making neutron-rich isotopes. Owing to the defocusing actions of the magnetic field at its edge, it was difficult to extract the beam from its
near-circular orbit and to use it for the kind of experiments for which the electrostatic machine was superb. As Lawrence’s 60 inch approached success the DTM group saw a copy of it as a source of isotopes that would open new fields of research suggested by their biology collaborations. They soon found there was interest in such a machine in Washington from the National Cancer Institute, the Johns Hopkins University, the George Washington University, the Catholic University, the Navy Medical Center, the Bureau of Standards and the Public Health Service. Plans for such a machine began to take shape in spring 1939 and included the whole-hearted cooperation of Lawrence and his associates in Berkeley. The project quickly received the support of Institution President Bush. Roberts and Tuve on a visit to Berkeley succeeded in hiring Philip H. Abelson and George K. Green, two experienced members of Lawrence’s group, to guide construction. A complete set of blueprints for the 60 inch had already arrived the month before, and by the end of September orders for the principal components had been placed. The initial cost was estimated at $150 000 for accelerator and building. The weight of the magnet, iron and coils, was 250 tons. Construction began in 1940 and culminated in a beam suitable for the production of isotopes in May 1944 (Fig. 18.1). The radio-frequency generator was not a copy of the Berkeley design, on which Roberts looked disparagingly. He consulted his brother, an RCA engineer, and designed one using a power amplifier rather than an oscillator for the output. They also coupled it to the Dees through a quarter-wavelength line instead of a half-wavelength one. When a gas discharge shorted the Dees, an open circuit appeared at the generator, thereby protecting the output tubes from overload.
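The protective effect of the quarter-wave coupling follows from the standard impedance transformation of a lossless transmission line (a textbook result, not derived in the original):

```latex
Z_{\mathrm{in}} = Z_0\,\frac{Z_L + i Z_0 \tan\beta\ell}{Z_0 + i Z_L \tan\beta\ell}
\;\xrightarrow{\;\ell = \lambda/4\;}\;
Z_{\mathrm{in}} = \frac{Z_0^2}{Z_L},
```

so a spark that shorts the Dees (Z_L = 0) is seen by the generator as an open circuit rather than as a dead short across its output tubes.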
Green held a commission as a reserve officer in the Army Signal Corps and was ordered to active duty in late 1940 but was able to postpone his entry until April 1942 because of the interest in the cyclotron by the military services. Abelson had begun devoting his time to the thermal diffusion method of separating uranium isotopes, and Roberts was utilized on the proximity fuze, so Cowie was left with responsibility for completing the task. Government interest in obtaining radioisotopes helped secure supplies requiring a priority but did not allow additional scientific personnel. Cowie had the services of an electrician, an electronics man and a machinist (Fig. 18.2). In January 1944 he received the help of Captain Jean S. Mendousse, a scientist with the French Military Mission, whose status did not allow him to work on secret projects, but the cyclotron was not a secret. Indeed Lawrence had made every attempt to inform the whole world of its details. Tuve had always been very cautious in matters concerning radiation hazards, as witnessed by Whitman’s early experiments and the rules to which they gave rise. Because of the high radiation levels that would emanate from the new machine and because the Department was located in a residential
Figure 18.1 The magnet with coils for the cyclotron. The space between the magnet poles would be filled with the accelerating electrodes within a vacuum tank. The oscillator driving these electrodes would be placed on the balcony from which the photograph was taken. 12 December 1940.
neighborhood, extraordinary care was taken to shield the machine by burying it in a thick-walled concrete vault covered with soil. The first beam was attained on 31 December 1943, and the machine began to function routinely by May 1944. Demands for its products grew until 24 hour operation became necessary. It became a vital service for the local scientific and medical communities, and the Navy sent skilled personnel to help, mostly with the chemical separation of the isotopes from the target material. The machine quickly became the central instrument of the new biophysics group. During 1944 Cowie suffered a serious radiation accident, one of six suffered by cyclotron men throughout the country and the most serious. According to Roberts1 he inserted a quartz plate into the beam at an early tune-up of the machine on the assumption that the beam would be very weak, expecting to need weeks of adjustments to bring it to proper operating conditions, only to see the plate melt. Cataracts began to affect his vision noticeably by 1947. The surgical techniques of the time repaired one eye but not the other. It
Figure 18.2 Charles Ksanda and Dean Cowie examining the vacuum system of the cyclotron. Cowie has an arm on the lower magnet coil. The wooden box at the upper left covers large glass feedthroughs that connect the oscillator to the accelerating electrodes. 1946.
is somewhat ironic that before coming to the Department, Cowie carried out for the National Cancer Institute a nation-wide survey that revealed serious radiation hazards in hospitals, both in their use of radium and X-ray machines. In December 1948 it became known that at least five cyclotron physicists were developing cataracts. A committee set up by the National Research Council sent questionnaires to accelerator laboratories throughout the country to ascertain the extent of the problem. Ten serious overexposures were located, and a meeting was organized in Washington on 16–17 January 1949 at which time all were examined at the Johns Hopkins School of Medicine. The accident victims made estimates of their exposures, which allowed some kind of quantitative basis for establishing radiation limits.2 One experiment in nuclear physics done with the cyclotron deserves mention. In 1947 nuclear reactions were thought to proceed through a compound nucleus in which the target and projectile nuclei coalesced into a whole wherein all remembrance of the initial conditions was lost and decay was unrelated to the incident beam direction. Roberts and Abelson tested this assumption with a 15 MeV deuteron beam and found that neutrons were
Figure 18.3 Stephen J. Buynitzky sits at the controls of the newly completed cyclotron. The “professional” appearance of this unit came about primarily from his skills. He was operator of the machine during its time as a factory for radioisotopes. August 1946.
distributed strongly in the forward direction.3 This was the first evidence presented that contradicted Bohr’s compound-nucleus picture, and it opened a new branch of activity for theory and experiment called direct reactions, carrying the ribald appellations of “stripping” for the (d,n) and (d,p) and “pick up” for the (n,d) and (p,d) reactions. As the 1950s progressed, the value of the DTM cyclotron began to diminish. National laboratories began to furnish isotopes, both stable and radioactive, for modest sums, leaving the cyclotron to supply only those with lifetimes too short to allow shipment. Heydenburg and Temmer drew up a plan, which they did not present with much enthusiasm, to provide an external beam transported by quadrupole lenses to a distant well-shielded experiment room. They indicated that the project was probably too expensive and that they preferred putting their efforts into a polarized-ion source. The machine made its last isotopes in 1957 and was shut down for modifications that were never completed. Modern spiral-ridge cyclotrons, which have non-uniform magnetic fields that hold the beam in the central plane, allow variable energies and easy beam extraction and have made the “classic” cyclotron hopelessly obsolete. Curiously, the older pressure-tank Van de Graaff continued to be a useful
research instrument until the early 1970s. How different was the world of science in 1955 from the one of 1939 when plans were made. When it was finally decided that this huge machine was no longer to be operated, various plans were explored for its disposal, of which only one, giving it to the Johns Hopkins University Hospital, advanced to a serious stage. Unfortunately, the cost of transport and modernization was sufficiently high to prevent the proposed recipient from obtaining the necessary funds, so the huge machine remained. The oscillator and the vacuum system with the Dees were removed to gain storage space. In 1995 the Department acquired an ion microprobe for the geochemistry section, but all the suggested spaces for it competed with one another to be the most unsuitable. The Director then decided that the cyclotron must go. The task had been made markedly more difficult because the tunnel through which the magnet pieces had to be extracted had had its width reduced during the renovation by the construction of an elevator, but a rigging company was found that would undertake the task. That the iron slabs had to be rotated onto their sides in order to move them through the constricted space caused the one-month job to become a rigger’s saga. The vacated space was converted by the Carnegie support staff into a two-story laboratory and storage room for the ion probe and strainmeter work as well as five offices. The south side of the vault was uncovered, a door and windows were cut through 12 inches of reinforced concrete, and the exposed side was converted into one of the more curious architectural features of the neighborhood.
19 BIOPHYSICS
The wholehearted entrance of Abelson, Cowie and Roberts into biological research after the war was not as remarkable or surprising as it has sometimes been represented. It is true that they were experimental physicists very much at home with soldering iron and lathe rather than microscope and petri dish, but as previous chapters have indicated there had been a growing interest in approaching biological problems through the use of radioisotopes, and not a few collaborative experiments had been conducted. Indeed, in 1939 half of the time used by the new pressure-tank machine was devoted to visiting biologists. Cowie’s original association as a Fellow of the National Cancer Institute was to help them with their experiments. More to the point, the cyclotron had been intended from the start to be a factory for isotopes. The building above it was designed for the conduct of relevant experiments that needed close access to the sources, necessary because many isotopes were expected to have lifetimes too short for distant transport. To keep these physicists from straying too far from the paths of biological righteousness, many biologists and physicians visited to criticize, suggest and learn. Particularly important were Hugh H. Darby and Louis B. Flexner, Carnegie Research Associates, and William R. Duryee of the National Cancer Institute. In addition to radiochemistry, another experimental technique, liquid chromatography, came into use at about the same time and proved just as valuable to the new group. Although the basic idea has a history dating to the latter part of the nineteenth century, it was not until the German chemist Richard Kuhn and his French student Edgar Lederer reported its use in separating biological compounds in the 1930s that the method began to become part of the standard repertoire.
If a liquid containing dissolved molecules is allowed to pass through a porous solid material, by gravity through a column of ion-exchange resin or by capillary action up a sheet of filter paper, the solute will flow with the more strongly retained molecules lagging behind at rates depending on their species. This quickly proved to be an extremely sensitive method of separating compounds, far superior to the older methods of crystallization, solvent extraction and distillation. If the liquid had components marked by radioactive tracers, identification could be made with extreme sensitivity and selectivity.
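The separation just described can be caricatured in a few lines of code (a purely schematic model with invented compounds and retention factors, offered only to illustrate the principle): each species moves through the column at the carrier speed scaled by its own retention factor, so an initially mixed sample spreads into distinct bands.

```python
# Schematic model of chromatographic separation: each species travels
# at the carrier speed scaled by a retention factor between 0 and 1
# (strongly retained species move more slowly), so a mixture separates
# into bands. The compounds and factors here are invented for
# illustration and correspond to nothing in the historical work.

CARRIER_SPEED = 1.0  # column lengths per unit time

def band_position(retention_factor: float, t: float) -> float:
    """Distance a species' band has traveled along the column after time t."""
    return CARRIER_SPEED * retention_factor * t

# Three hypothetical compounds with different affinities for the
# stationary phase separate as the liquid percolates through.
sample = {"compound A": 0.9, "compound B": 0.5, "compound C": 0.2}
positions = {name: band_position(rf, t=1.0) for name, rf in sample.items()}
# Ordering by position recovers the ordering by retention factor.
assert positions["compound A"] > positions["compound B"] > positions["compound C"]
```

With radioactive labels, as in the group's work, each band can then be located and identified by counting its tracer.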
Carrying on the tradition formed before the war, the Department was host to the Ninth Washington Conference on Theoretical Physics from 31 October to 2 November 1946 with “The Physics of Living Matter” as its subject. In July and August of the following year Roberts took a course in bacteriophage techniques followed by one on genetics that filled him with enthusiasm for bacteria and viruses. During the same summer, Abelson worked at Woods Hole Marine Biological Laboratory, learning the philosophy and motivations of conventional biology. The first experiments undertaken by the group followed the lines indicated in the prewar work of Flexner and Roberts by measuring the rates of exchange of various substances through the membrane into the fetus. By 1949 the group, which by 1951 had added Ellis Bolton and Roy Britten to its ranks, embarked on a subject they were to make their own, one for which the new techniques were perfect and which built solidly on their early studies of the permeability of cell walls. Escherichia is a genus of bacteria of which four species are recognized, one of which is Escherichia coli, invariably and understandably referred to as E. coli. It is regularly present in the intestines of warm-blooded animals, but some strains of it are pathogenic and cause diarrhea, meningitis and urinary tract disorders. The group singled out E. coli for a long series of tracer-chromatography studies that had the strong imprint of physicists, who recognized it to be a microscopic chemical factory. They chose it as the principal test organism because of its convenience, rapid growth and similarity to other living systems, especially the fact that its membrane was permeable to most ions and molecules, while obstructing the passage of only the giant molecules of proteins and nucleic acid. Its carbon-dioxide metabolism was found to follow the same pathways as in mammalian tissue, yeast and mold. The experiments were simple in concept and practice.
The bacteria were cultured in a medium in which glucose generally served as the source of energy. The growth of the colony was measured through the increase in the optical density of the liquid. The culture medium also contained compounds that entered into the chemistry of the bacteria, such as sulfates, phosphates and various ions, and that were labeled with 32P, 35S and 22Na. The medium was also controlled by such factors as pH, temperature and buffering agents. The group devised a method, called “isotopic competition,” that consisted of making more than one source of carbon available to the bacteria and labeling one with 14C. In this way they could follow the way in which the cell selects a carbon source for a given biosynthesis. The chemical compounds that resulted from the bacteria feeding on the materials offered in the culture were analyzed either by separating the fractions in an ion-exchange column or by two-dimensional chromatograms. In the column the liquid remaining from harvesting the bacteria is passed through a column with additional volumes of the solute; this leads to a
separation into various fractions, which reside at different parts of the column. These fractions are then extracted by passing a solvent that causes the resin to release them immediately, allowing them to be collected in separate containers. In the chromatogram a vertical sheet of filter paper is dipped into the liquid, which climbs under capillary action with various fractions distributed by height. The paper is then dried, rotated 90° and dipped into another liquid having different rates of mobility for the fractions. The result is a two-dimensional distribution with fractions located by the beta rays of their tracers. During the 1930s Hans Adolf Krebs discovered the series of chemical reactions known by his name, also as the tricarboxylic acid cycle, that involve the conversion in the presence of oxygen of substances formed by the breakdown of sugars, fats and amino acids to carbon dioxide, water and energy-rich compounds. Using the isotope competition method, the DTM group determined in detail and quantitatively the chains of reactions in the cycle leading to the synthesis of 15 of the 19 amino acids found in E. coli. The experiments showed quantitatively through the flow of carbon, sulfur and phosphorus the synthesis of an extraordinary variety of organic chemicals. Not only were the various pathways of synthesis traced but also their rates of incorporation were shown in plots that definitely indicated the approach taken by physicists. By 1954 the group was ready to circulate in book form the experience of six years of work on E. coli. From this mass of material – 36 publications bearing directly on E. coli – grew the classic laboratory handbook on E. coli.1 Abelson left the group in 1953 to become the new director of the Geophysical Laboratory about a mile to the south. Having determined the pathways of synthesis for relatively simple constituents of living matter, amino acids, purines and pyrimidines in E.
coli the group began to approach the more difficult problem of the synthesis of proteins and nucleic acid. Rather than approach the problem head on, they selected a preliminary step that they had identified, the manner in which the cell concentrates a “pool” of these simpler components that are subsequently linked into macromolecules. Torulopsis utilis, a yeast-like organism, was used in addition to E. coli for these experiments, in part because the metabolic pools of amino acids are very much larger in it than in E. coli. By pool formation one means the ability of the cell to obtain nutrients from a low-concentration medium for synthesizing macromolecules. One question seemed a reasonable statement of the possibilities: are the concentrated substances in solution and thereby free to react, or are they held in a more complicated manner? Evidence favored the latter, interpreted by means of a carrier molecule (called “permease” by the Pasteur group, to the displeasure of the DTM people, as the name implied an enzyme that had not been demonstrated), but nothing proved to be as simple as they hoped. Amino acids competed
with one another for entry into the cell, and some compounds formed macromolecules without equilibrating with the pools. Nevertheless, knowledge of the properties of the pools, especially the kinetic delays they introduced, was necessary in order to understand later protein and ribosome synthesis. DNA, the cryptic abbreviation by which deoxyribonucleic acid is known to both the scientific and lay communities, is an organic molecule of enormous size – indeed the largest molecule that exists – a polymer sequence of complementary nitrogenous bases joined together by hydrogen bonds to form a double helix of great stability. It was isolated in 1869 but without information about its structure, and it was not until 1943 that its role in genetics was perceived. Its characteristic structure was unraveled in 1953: four bases, two purines and two pyrimidines, make up the “letters” whose arrangement in the sequence specifies the molecule. The hydrogen bonds allow the two spirals to be separated gently, each then forming a template onto which simpler molecular components attach themselves to form two new and identical DNAs. Associated with DNA in the same cells is a class of compounds known collectively as RNA, ribonucleic acid, that function as carriers for limited portions of the encoded information in DNA. Ribosomes are molecular aggregates found in living cells made up of protein and RNA. They account for a large fraction of the RNA and are the sites at which information about the genetic code is converted into proteins, which detach themselves for use elsewhere in the cell. One source of difficulty was found to be the extreme speed with which the small building blocks find their proper order and link, a time measured in seconds. It seemed likely that when a template was exposed it was rapidly covered with amino acids. As this left little hope for establishing intermediate steps, one was left to the kinetics of the processes. 
In 1958 the group acquired new analytical techniques. An ultracentrifuge allowed the separation of different classes of ribosomes in a sucrose density gradient. It was complemented by new chromatography methods, and tritium was added to the useful radioisotopes about this time. Owing to its very weak beta emission, a new counting technique had to be used in which the sample was placed into an organic liquid scintillator; the technique worked equally well for 14C, and the counts for both were measured and recorded automatically. The analytical techniques, which were so successful in studying the chemical productivity of E. coli, did not yield clarity when applied to the synthesis of macromolecules. Indeed, the group found no real focus during those years, and their results, which expressed synthesis by means of precursor ribosomes, went up against messenger RNA in confused discussions at meetings between investigators with fixed opinions. There were many visitors to the group, but now all were specialists in the field. Roy Britten was the last of the nuclear physicists to enter; the language
had changed, and the detailed background knowledge now required was great. The problem of language was perceived to be so troublesome that the group devoted a large fraction of their report for Yearbook 66 to “An Instructive Glossary.” What was obviously needed was a radically new analytical technique, and it came in 1963 in the form of the agar column invented by Bolton and Brian J. McCarthy, a recent appointment to the staff (Fig. 19.1). Agar is a gelatin-like product made primarily from algae and is best known as a solidifying component of bacteriological culture media. A solution of DNA is heated to 100 °C, which causes the weak hydrogen bonds to separate, leaving two strands, called single-stranded DNA. This solution is mixed with a hot solution of agar, and the mixture cooled. On cooling the agar forms a gel and immobilizes the single strands of DNA mechanically. This semi-solid mass is forced through a sieve and formed into a column through which even macromolecules could pass freely. Strands of the immobilized DNA are thus made available for combining with similar sequences of secondary single-stranded DNA that is introduced and allowed to equilibrate with the immobilized host. If the secondary DNA is labeled, one can determine the degree to which it became attached to the DNA in the agar matrix by washing out the unattached; the bound DNA could then be recovered by washing with a different solution or a higher temperature. It was obviously a useful technique, and its power quickly became clear; it brought about a notable advance in the study of DNA–DNA and DNA–RNA interactions, one that transformed the science of microbiology. Messenger RNA was isolated in useful quantities free from the much more abundant ribosomal RNA, and the kinetics of messenger-RNA synthesis were measured. An early and very important finding was that whereas the DNA from the various organs of a creature was retained identically, the RNA of its various organs was retained differently.
It was found that unrelated RNA would not bind, but that RNA from closely related species did bind to some extent. The technique allowed the observation of the degree of relatedness of the DNA of different bacteria. Nucleic acids make up the essential elements of heredity and are the root substances of evolutionary systematics, so the next experiments brought remarkable results. A single-stranded DNA immobilized in agar was allowed to react with labeled single-stranded DNA from another species, and the degree of interaction depended on the two species having similar complementary sequences of polynucleotides. Intercomparisons established quantitative degrees of relationship between species. When human DNA was loaded into the agar column, chimpanzee and human DNA were retained to a high degree, 75%, but E. coli DNA passed through with virtually no retention. Other species were retained according to their distance from man in evolutionary divergence (Fig. 19.2).
Figure 19.1 Ellis Bolton shown with the agar column experiment. The startling and initially inexplicable findings of the experiment led Roy Britten to discover repeated sequences in DNA molecules, something that is still not understood. 1962.
Figure 19.2 Results of the Bolton–McCarthy agar column experiment. Single-stranded human DNA was frozen into an inert agar matrix and ground into powder that was placed into a column. Through this column were passed solutions of radioisotope-marked single-stranded DNA from a variety of creatures. Most of the radioactivity remained in the column for human and chimpanzee DNA, but increasing amounts passed through for animals as their departure from a common ancestor increased. The abscissa indicates roughly the time in millions of years that separates the test animals from a common human ancestor.
The method brought with it a puzzle: how did it work at all? Given the incredible length of the strands and the complexity they represented, it should take a long time for the chain of bases to match in order to be retained, but in fact it proceeded rapidly. A clue presented itself in what was known as “mouse satellite,” a component of the rodent’s DNA that had a lower density than the remaining portion of the molecule. The satellite behaved differently from typical DNA in reassociating precisely and at a high rate rather than in a disordered manner and slowly. Britten came to the conclusion that this was because the satellite was composed of relatively short strands and that a large fraction of closely related DNA must have base sequences that were identical or very similar. Estimates indicated that there must be a huge number of these repeated sequences, possibly as many as a million.
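The kinetic reasoning behind such estimates can be made concrete with the standard second-order reassociation law, under which the fraction of DNA still single-stranded after time t is 1/(1 + kC0t) and the rate constant k varies inversely with the number of different sequences present; a family of repeated sequences therefore re-pairs far sooner than unique-sequence DNA at the same concentration. A schematic sketch with purely illustrative constants, not the group's data:

```python
# Schematic second-order reassociation ("C0t") model: the fraction of
# DNA still single-stranded after time t is  C/C0 = 1 / (1 + k*C0*t),
# where the rate constant k scales inversely with sequence complexity
# (the number of different sequences among the sheared fragments).
# All constants here are arbitrary and purely illustrative.

def single_stranded_fraction(c0: float, t: float, complexity: float,
                             k_ref: float = 1.0) -> float:
    """Fraction of strands not yet reassociated after time t."""
    k = k_ref / complexity          # fewer distinct sequences -> faster
    return 1.0 / (1.0 + k * c0 * t)

# A highly repeated fraction (low complexity) reassociates far sooner
# than unique-sequence DNA observed under the same conditions.
repeated = single_stranded_fraction(c0=1.0, t=10.0, complexity=1e2)
unique = single_stranded_fraction(c0=1.0, t=10.0, complexity=1e6)
assert repeated < unique  # the repeated sequences re-pair first
```

Turned around, the same relation is what allowed the speed of reassociation of the satellite to be read as a copy number: a measured rate far faster than unique-sequence DNA implies correspondingly many near-identical sequences.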
These findings raised a host of new questions: how widely were these sequences distributed in the molecule and among the various species, how did they originate, what were they for, what was their evolutionary role, were the sequences identical or just similar? Answering these questions was to occupy biology for years. An early survey showed repeated DNA in more than 50 species throughout the range of phyla, and it appeared that repeated sequences make up a large fraction of the DNA of all eukaryotic organisms but not of bacteria and viruses, in which only occasional trace amounts are found. Britten approached the problem from another point of view, one that allowed quantitative measurements. He dispensed with the agar column and examined a solution of DNA, which when heated dissociated into single strands. Double-stranded DNA has a greater degree of optical activity than single-stranded DNA, so the degree of dissociation could be followed by measuring optical rotation. When the solution was allowed to cool the degree of reassociation could be measured, but the large number of repeated sequences caused the reassociation of the single strands to be quite complicated with large three-dimensional networks forming. To circumvent this, Britten sheared the original DNA in a press at 3300 atmospheres, which yielded strands about 500 nucleotide pairs in length. This avoided network formation and allowed the measurement of reaction rates of reassociation. Britten and David Kohne, a Fellow from the Public Health Service and later a staff member, demonstrated that reasonably accurate and reproducible reassociation rates could be measured by this method and that the rate of reassociation was inversely proportional to the number of different sequences present. Breaking the DNA into many small segments demonstrated that the sequences were scattered throughout the molecule. As Britten and Kohne continued their studies on repeated sequences, they became even more baffled.
They found the same sequences in whatever eukaryotic creatures they looked at – protozoans, sponges, sea urchins, crabs, squid, sharks, salmon, frogs, chickens, nine mammals, thirteen primates and eight plants. Eukaryotes have repeated sequences; prokaryotes do not. It was obvious that repeated sequences were important, that they had things to say about evolution, but just what these were was a mystery – and remains so today. Britten left the Department in 1971, forming with Eric Davidson of Rockefeller University the Developmental Biology Group in the Kerckhoff Marine Laboratory at Caltech. The Institution shared in their support with Britten becoming a Carnegie Senior Research Scientist. The two had collaborated since 1967 and had independently inferred that repeated DNA sequences were ubiquitous in higher organisms. In 1955 Roberts began some collaborative studies with Flexner on the biochemical basis for long-term memory. Behavior patterns, such as bird
songs, can be inherited and thus must be transmitted through DNA. The experimental approach was to train statistically significant numbers of mice and observe the effect of substances related to protein synthesis injected into their brains. A protein synthesis inhibitor was found to disrupt memory even when injected weeks after training, but another inhibitor was found to have no effect, and other substances were found that could restore memory. Most of the work was done in Flexner’s laboratory, but Roberts kept his own mouse colony (the odor of which remained until the renovation of the Cyclotron Building in 1989) which he demonstrated to staff and visitors. Solid understanding did not come from this work. Roberts conducted other experiments on the brain that had no support from the Institution. For years he examined claims of extrasensory perception, visiting and testing such persons at his personal expense, and conducted experiments of that nature using unenthusiastic members of staff. After subjecting all his data to statistical analysis, he concluded there was no basis for the proposed effects. It was typical of his daring mind to do such work, especially given that physicists were adamant in rejecting any medium for such communication. The new DNA techniques were widely copied and engaged a large part of the group’s time. Nancy R. Rice and Bill H. Hoyer joined the group in 1968, and Bolton was named Associate Director by Tuve in anticipation of his retirement in 1966. Research concentrated on DNA and probed the myriad details that presented themselves, but the world of biophysics was changing rapidly. The name “biophysics” has fallen from favor, generally subsumed by “microbiology.” The experimental methods were neither expensive nor difficult to master, and the subject attracted active minds all over the world. 
Abelson had become Institution President in 1971 and became increasingly aware that the DTM group was becoming a small part of an immense scientific community. He and the Trustees saw this as not the kind of activity that the Institution should support and decided, barring some new turn of events that might give it a unique position, that it should be dissolved. This happened in 1974, when Rice and Hoyer were asked to find new employment within two years, taking what equipment they could use with them. Roberts was near retirement and remained as an emeritus staff member, primarily occupied with his brain studies. Cowie, who had but a few years left before retirement, became a Carnegie associate at the Institut Pasteur in Paris until his death in 1977.
20 EXPLOSION SEISMOLOGY
In examining the records from 29 seismic stations following a large earthquake in 1909, Andrija Mohorovicic of the University of Zagreb concluded that there was an interface approximately 50 km below the surface, above which waves traveled at 5.7 km/sec and below which at 7.8 km/sec. This discontinuity, now generally referred to as the Moho, had not had a clean verification by 1948 for a number of reasons. The observatories used for the recording of earthquakes were too few to yield data over the relatively short distances needed to show the Moho, and their recording speeds were too slow, a consequence of the need to record continuously in order to capture the randomly occurring quakes. The timing and ranging were also poor because arrival times had to be used to learn origin time, epicenter and depth of focus. Add to these problems station locations not optimum for close crustal studies and non-uniform recording instruments at the observatories, and one easily understands why nearly 40 years after its discovery, the location of the Moho was not known and the structure of the crust was a mystery. Immediately after the war Tuve foresaw possibilities for studying the Earth’s crust in an improved manner using seismic waves generated by large explosions, a method used previously to a limited degree for this purpose and for locating oil deposits, but the magnitudes of those explosions were sufficient only to examine the structure of formations a few kilometers deep. The results indicated little more than that the crust was complicated, although some insisted the crust was a layer of granite over a layer of basalt. In selecting this scientific goal Tuve was influenced by Leason H. Adams, Director of the Geophysical Laboratory, and enjoyed extensive collaboration in the planning with Roy W. Goranson of that Department. This venture into seismology was not the Institution’s first, however. 
Following the 1906 San Francisco earthquake, the Chairman of the University of California’s Geology Department organized an extensive study of the disaster in all its detail for which the Carnegie Institution provided funds.1 Harry Oscar Wood, a member of the university faculty, was one of the investigators. Later he wanted to study the seismicity of the region systematically and took his plan to Carnegie’s President Merriam, who appointed in 1921 an Advisory Committee in Seismology with Arthur L. Day, Director of the Geophysical Laboratory, as chairman. The Committee decided to initiate a program of
observation that included, in addition to a network of seismometers, triangulation and leveling of landforms, and relevant meridian-circle observations by astronomers. Wood, in collaboration with John Anderson of the Mount Wilson Observatory, designed a seismograph suitable for the network, which he supervised from Mount Wilson. The Seismological Laboratory was organized in 1926, and Carnegie transferred its administration and support to Caltech in 1936. On his death Wood left an endowment to the Institution for the support of seismology, and DTM has made extensive use of the Harry Wood Fund over the years. Tuve knew that the Navy had huge quantities of high explosives of which they wished to dispose, and he wanted to shoot large charges to generate waves to disclose the location of the Moho and the structure of the nearby crust. The use of large charges exploded in mines, quarries or water allowed the exact time and location of the origin to be known, and radio announcements of the shots allowed the observers to start high-speed chart recorders providing adequate detail and timing accuracy. Thus in one step the experimental problems that beset studying the crust were removed. In this work Tuve was aided by a new staff member, Howard E. Tatel, a nuclear physicist who had worked on the proximity-fuze project. Inquiries yielded not only the requested explosives but naval participation in detonating the charges. It was, of course, necessary to obtain skilled crews with seismometers and radios to observe the arrival of these waves, and Tuve pried loose people at DTM from their normal work, trained them in their new duties and sent them into the field. Enthusiasm for this forced multidisciplinary science was remarkably strong, and after a few expeditions the press gang restrained itself. 
Other institutions were found to collaborate, and in the first of several such expeditions, undertaken in spring 1948 and designed to study crustal structure in New Mexico and then around the Chesapeake Bay and in the Appalachian Highlands, the Department was joined by the Geophysical Laboratory, the New Mexico School of Mines and Columbia University. The depth bombs shot in the Chesapeake Bay were of only 1 ton but could be repeated almost at will. In addition to the science, these shots allowed techniques to be refined for the much larger but infrequent mine explosions. A total of 140 good records resulted from the Bay shots, observed with linear arrays of seismometers out to ranges of 350 km. The seismometers constructed in the DTM shop were based on a mass with a spring restoring force (Fig. 20.1). Such an instrument has a resonant frequency and, when the damping is small, responds to only a narrow band of frequencies around it. Resonant frequencies ranged from 2 Hz to 8 Hz, typical of what was encountered in seismic waves. The moving mass induced a signal in a coil coupled to an amplifier. By varying the load resistance connected across the coil, the system could be damped so as to respond to a wider band of frequencies. The amplifier had sufficient gain to
Figure 20.1 A DTM fabricated horizontal-motion seismometer. At the left one sees a vertical hinge on which swings the inertial mass comprised of a thick plate with two rectangular lead weights attached. At the right are the spring and the pickup coils for the signal. Such instruments were used in the South American network. 1963.
drive an oscillograph of the type standard among oil prospectors. Everett T. Ecklund designed a transistor amplifier for field use, which required far less power from the restricted sources available there; within a few years paper charts were replaced by magnetic tape having frequency-modulated signals. Every expedition included improvements in instrumentation. The amplitudes of the signals at the seismometer stations were of the order of 0.01 m with predominant frequencies lying between 4 Hz and 15 Hz; the electric signals presented to the amplifiers were of the order of microvolts. The amplifiers were modified so as to operate satisfactorily at the high gains needed for some of the later phases but not to be overdriven by the first arrivals. Data analysis turned on plots of travel time for the first arrivals against distance; it was clear that the signals that followed the first arrivals carried information, but their interpretation was delayed until experience and understanding had grown. The Moho showed up clearly in the data of the New Mexico expedition at a depth of 33 km with seismic velocities above and below it of 6.1 and 8.1 km/sec. Later in the year, the Tennessee Valley Authority detonated three very large charges as part of an engineering project: 250, 750 and 400 tons each. Sufficient notice was given to allow an array of seismometers to be set up with signals received at ranges of 1500 km. These showed a P (pressure)
152
The Department of Terrestrial Magnetism
wave that passed through the Moho, traveled at the characteristic high speed just under the discontinuity, then refracted up to the detector. These clear results contrasted with the confusing structure observed for the crust above the Moho. The data did not substantiate a characteristic crustal velocity–depth relationship needed for a layered crust, the prevalent model at the time, and much complexity arose from the conversion of P waves to S (shear) and to surface waves. The inhomogeneity of the crust was demonstrated when measured arrival times differed by as much as a second from one another when observed for different directions of propagation, even though the range was exactly the same. Not only that, the depth of the Moho showed local variation. During summer 1951 another expedition, undertaken with the Geophysical Laboratory, went forth to study the Canadian Shield: 11 men, 7 vehicles and 29 shots in the Mesabi Range in cooperation with mining companies that timed their huge blasts accurately. During the same summer there were 17 shots in Puget Sound, set off by the Coast Guard. The model of the crust made up of a succession of horizontal layers did not survive this work either. Summer 1955 saw a seismic expedition to Alaska and the Yukon Territory, involving the greatest logistic effort yet by the Department, requiring six field-equipped cars to motor from Washington along the Alaska Highway to the field sites. The Coast Guard set off the charges, though the work was hampered by storms, disabled ships and calls of distress at sea. A total of 22 tons of depth charges was used. In celebration of the International Geophysical Year there was an expedition to the Andes in 1957, making use of the explosions in open-pit copper mines in southern Peru and northern Chile where 30 ton or 100 ton shots were fired several times a week (Fig. 20.2). It also made use for the first time of a grant for field expenses from the National Science Foundation.
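The travel-time analysis just described rests on simple two-layer refraction geometry: a direct wave through the crust and a head wave along the Moho. The following sketch uses the New Mexico figures quoted above (6.1 and 8.1 km/sec, 33 km depth); the code itself is an editorial illustration, not the Department's analysis.

```python
import math

# Two-layer refraction: direct wave in the crust at v1; head wave that
# travels along the Moho (depth h) at v2 and returns to the surface.
v1, v2, h = 6.1, 8.1, 33.0  # km/s, km/s, km -- New Mexico values from the text

def t_direct(x):
    """Travel time (s) of the direct crustal arrival at range x (km)."""
    return x / v1

def t_refracted(x):
    """Travel time (s) of the Moho head wave: intercept plus slope 1/v2."""
    intercept = 2 * h * math.sqrt(v2**2 - v1**2) / (v1 * v2)
    return x / v2 + intercept

# Beyond the crossover range the head wave is the first arrival, which is
# why first-arrival plots against distance reveal the Moho.
x_cross = 2 * h * math.sqrt((v2 + v1) / (v2 - v1))
print(f"crossover range ≈ {x_cross:.0f} km")
```

Plotting the two travel-time branches against range reproduces the kind of first-arrival diagram on which the analysis turned.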
The observation of shots to the east, across the altiplano, the high Andean plateau, brought a surprise. They were so severely attenuated that no arrivals were recorded beyond 230 km, yielding data from which the depth of the Moho could not be extracted, although arrivals observed on the western flank of the Andes disclosed both crustal and mantle layers (Fig. 20.3). The expedition to the Andes was the last for Howard Tatel, whose untimely death in November 1957 left a hard-felt gap in the Department's seismology program, which until then had consisted of Tatel and Tuve with the able technical assistance of Ecklund and Paul A. Johnson. His death also marked the entry of Thomas Aldrich into seismology, thereby expanding his research interests beyond isotope dating. A replacement for Tatel was appointed three years later in John S. Steinhart, who had worked with Robert P. Meyer in the long-standing University of Wisconsin collaboration with the Department. As an early assignment he undertook the writing, with Meyer, of a summary of the explosion work.2
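The mass-and-spring seismometer described at the opening of this chapter is, in modern terms, a damped harmonic oscillator, and the band-widening effect of resistive damping can be seen from the standard response formula. The resonant frequency and damping ratios below are illustrative choices, not constants of the DTM instruments.

```python
import math

# Displacement response of a seismometer mass to ground motion at
# frequency f, modeled as a damped harmonic oscillator.  A larger damping
# ratio zeta (obtained physically by lowering the load resistance across
# the pickup coil) trades the sharp resonance peak for a wider flat band.
f0 = 4.0  # resonant frequency in Hz, within the 2-8 Hz range of the text

def response(f, zeta):
    """Relative response at frequency f (Hz) for damping ratio zeta."""
    r = f / f0
    return r**2 / math.sqrt((1 - r**2)**2 + (2 * zeta * r)**2)

for zeta in (0.1, 0.7):  # lightly damped vs. near-critically damped
    peak = max(response(f / 10, zeta) for f in range(1, 200))
    print(f"zeta = {zeta}: peak relative response {peak:.2f}")
```

The lightly damped case rings sharply at resonance; the heavily damped case responds nearly uniformly over a broad band, which is what the variable load resistance provided.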
Figure 20.2 A Peruvian mine shot used in the seismological studies of the Andes during the International Geophysical Year, 1957.
Out of this expedition grew a new branch of the Department's seismic interests. The rich source of earthquakes in the region presented the opportunity for a much improved use of their waves to study structure there. A number of local scientists were encountered who were willing and able to oversee a network of continuously operating stations, for which DTM fabricated battery-operated seismometers with pen recorders and the Office of Naval Research loaned chronometers that were calibrated against the time signals of the Bureau of Standards station WWV. To improve future capabilities for South American collaboration, Tuve arranged for the support of pre-doctoral students at US universities. The anomalous results from the altiplano caused the investigators to try again, owing to improvements in instrumentation, the existence of a sensitive, semi-permanent seismic station in the region, and the growing competence of South American colleagues. In 1968 another series of lake shots was organized
Figure 20.3 A truck with seismic equipment deployed for explosion studies in Peru as a part of the International Geophysical Year. Note the radio antenna. Radio communication was vital because the station recorder had to know the instant that the shot was fired and note the absolute time from National Bureau of Standards station WWV. 1957.
with six other organizations participating, but little knowledge was gained beyond that obtained in 1957. The simple, layered model of the crust, whose principal proponent had been Beno Gutenberg, failed completely to satisfy any of the explosion data, although Gutenberg held fast to it for a curiously long time. The layer model was not the only theory to fail as a consequence of the explosion work. Isostasy is the theory that all large portions of the Earth's crust balance as if they were floating on a dense underlying layer, such that the pressure at some fixed depth, of the order of 100 km, is the same everywhere, even though the elevations of the surface differ significantly from place to place. At
the onset of these studies, two hypotheses had been proposed to account for the irregular heights of the Earth's surface: the Airy hypothesis assumed that the crust had a uniform density, resulting in the thicker parts sinking deeper into the substratum to compensate for the higher elevations; the Pratt hypothesis assumed a depth of compensation well below the crust, perhaps at 100 km, and that the elevated portions had a low density. The seismic data clearly showed the Moho to be deeper beneath the continents than beneath the oceans, 30–40 km compared with 8–10 km respectively, but although this certainly contradicted the Pratt model, it did not fit the Airy model quantitatively. The crust did not have a uniform seismic velocity and had been found to be significantly less homogeneous than thought. One failure irritated the group: they were unable to observe nearly vertical reflections from the Moho, an achievement reserved for others years later. Joint expeditions to Montana and Wyoming with the University of Wisconsin occupied the summer of 1959, with a return the following year to clear up questions that arose out of the data. These were smaller projects than before, requiring only three people from DTM, but in 1961 there was a major expedition in which the Department's people were joined by those of six other organizations to study the continental crust from explosions in the Gulf of Maine. In seismology a model of the crust means the wave velocity expressed as a function of depth, and the test of a model is the degree to which it reproduces arrival times. Attempts to reproduce the Maine data made use for the first time of an electronic computer, an IBM 7090. Tatel had pointed out in earlier work that reproduction with a particular model did not prove its validity, as a successful model was not necessarily unique. The newly found computing power allowed many models to be tried, and the crust disclosed itself to be even more troubling than just inhomogeneous.
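The quantitative content of the Airy hypothesis mentioned above is a one-line flotation balance: the root beneath elevated terrain must displace enough dense substratum to support the topography. The densities here are generic textbook values, not numbers from the expedition reports.

```python
# Airy isostasy: a crust of uniform density rho_c floats on a denser
# substratum rho_m, so topography of height h requires a crustal root r
# satisfying rho_c * h = (rho_m - rho_c) * r.  Densities in g/cm^3 are
# generic textbook values, not figures from the source.
rho_c, rho_m = 2.8, 3.3

def airy_root(h_km):
    """Root thickness (km) needed to compensate topography of height h_km."""
    return rho_c * h_km / (rho_m - rho_c)

# A plateau 4 km high would demand on the order of 22 km of extra crust,
# which is the scale of Moho deepening the seismic work could test.
print(f"{airy_root(4.0):.1f} km")
```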
Models were found that had velocity as a smooth function of depth. The happy part was the stability of the location of the Moho and the velocity just below it. Another multiorganizational expedition used shots off the coast of North Carolina in 1962 with results that were not encouraging in the search for a simple understanding of the crust. The last expedition for which DTM assumed a major organizational role took place the following year at Lake Superior with 14 groups from five nations participating. Here a long line of explosions was fired from a Coast Guard vessel in the lake, with the shot locations fixed by judiciously placed hydrophones and the seismic data recorded at 46 arrays of portable seismometers. If evidence for complicated crustal structure were desired, Lake Superior supplied it. The Superior basin is not topographically impressive. The maximum depth is typically 400 m and the heights of the surrounding landforms are seldom over 600 m, yet it disclosed both the thickest and thinnest crusts
so far observed in North America. The results disclosed the clear boundary of the Moho and a relatively constant velocity of propagation just below it, but also showed a crustal structure that was unremittingly inhomogeneous and complicated. In addition to seismic studies the region was later given extensive scrutiny through heat flow and gravimetry measurements. The complications of the Lake Superior data troubled the group, and they returned to them in 1965 using a mathematical technique known as time-term analysis, which provided an alternative to trying out families of plane-layered models to reproduce travel times. Crucial to the method are a few cases in which the positions of shot and seismometer are interchanged. The method proved so successful that it stimulated a return to the Atlantic coast in a 100-shot seismic experiment organized by Anton Hales of the Graduate Research Center of the Southwest, called the East Coast Onshore-Offshore Experiment, conducted in summer 1965 and planned to maximize the number of shot-seismometer reversals. A crustal thickness of 30 km under the coastal plains was found to increase to 60 km beneath the Appalachians, but clarity about the structure of the crust remained elusive. Except for the reexamination of the altiplano already mentioned, this ended the Department's work in explosion seismology. It also saw the departure of Steinhart, who had become increasingly interested in environmental studies, as well as of T. Jefferson Smith, who had joined the group after Steinhart but who wished to return to teaching. There was an increasing belief that the explosion work had been drained of what understanding could be extracted with the instruments and techniques at hand and that it was time to examine the structure beneath the Moho using earthquakes as sources.
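Time-term analysis, mentioned above, writes each head-wave travel time as a delay at the shot, a delay at the receiver and the range divided by the refractor velocity, then solves the whole data set by least squares; shot-seismometer reversals tie the delay terms together. A minimal sketch on synthetic data (not the Lake Superior observations):

```python
import numpy as np

# Time-term model: t_ij = a_i + b_j + dist_ij / v, solved by least squares
# for shot terms a_i, station terms b_j and the refractor slowness 1/v.
# All numbers below are invented for illustration.
rng = np.random.default_rng(0)
a_true = [1.2, 0.8]          # shot time-terms (s), invented
b_true = [0.9, 1.5, 0.6]     # station time-terms (s), invented
v_true = 8.1                 # refractor velocity (km/s)

rows, times, dists = [], [], []
for i in range(2):
    for j in range(3):
        d = rng.uniform(100.0, 400.0)   # shot-station range (km)
        row = [0.0] * 5
        row[i] = 1.0                    # indicator selecting a_i
        row[2 + j] = 1.0                # indicator selecting b_j
        rows.append(row)
        dists.append(d)
        times.append(a_true[i] + b_true[j] + d / v_true)

# Unknowns: [a_0, a_1, b_0, b_1, b_2, slowness].  The a/b split carries a
# constant ambiguity unless fixed by a reversal; the velocity does not.
A = np.column_stack([np.array(rows), np.array(dists)])
x, *_ = np.linalg.lstsq(A, np.array(times), rcond=None)
print(f"recovered refractor velocity: {1 / x[-1]:.2f} km/s")
```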
21 ISOTOPE GEOLOGY
There can be no satisfactory knowledge of when humankind began to wonder about the age of the Earth, but given the multitude of answers proffered in the world's religions and myths, one must assume it was very early. Answers supported by some kind of objective questioning of the evidence observed in the Earth and the solar system are, however, relatively recent, dating from the Age of the Enlightenment. In 1748 Benoît de Maillet, reasoning from (wrong) interpretations of fossil evidence, contradicted the biblical periods, suggesting an age of 2.4 billion years. Numerous attempts were made during the following century and a half based on the cooling of the Earth and Sun, solar orbital physics, ocean chemistry, erosion and sedimentation. The only common element of these attempts was that all yielded ages orders of magnitude greater than what was found in Genesis. The discovery of radioactivity at the end of the nineteenth century altered things substantially in the minds of the investigators of the time by providing a method for determining the ages of rocks and by disposing of Kelvin's age estimates, which had been derived from erroneous assumptions calling for much shorter ages than geology required. Radioactivity provided a heat source within the Earth, and presumably within the Sun, that evaded the heat flow problem. The recognition that radioactivity was the key to terrestrial age determinations did not circumvent the problems of using it, problems that would not be surmounted for half a century. It was not until the end of the 1930s that the decay patterns of uranium and thorium had been tortuously mapped; indeed, the isotopic composition of lead was not known completely until 1939. In 1906 Rutherford used measurements of the ratio of uranium to helium in a uranium ore to derive an age of 500 million years, and a few years later A. Holmes determined the age of the Earth to be 1600 million years using isotopic data.
The mass spectrometer was the instrument capable of reading the radioactive clocks that nature provided, but it was not an easy device to master. Ionization proved difficult for some elements and often yielded interfering ionic species having the same mass as the isotope sought. Obtaining data from stains on glass slides at the spectrometer focus was only marginally quantitative. Francis W. Aston of Cambridge discovered the lead isotopes 206, 207 and 208 in 1929 using tetramethyl lead obtained for him by Charles
Figure 21.1 DTM’s first mass spectrometer. This instrument was a copy of Alfred O. Nier’s, purchased from his technician. It was set up and modified by L. Thomas Aldrich, who had been one of Nier’s students. Circa 1951.
Figure 21.2 First generation of DTM-designed mass spectrometers. The magnet coils are shown prominently with the ion source and diffusion pump extending in front of them. The detector is seen at the upper right. The operator is Bernard Burke, generally remembered as a radio astronomer but here serving as photographer’s model.
Isotope geology
159
S. Piggot of the Geophysical Laboratory, but it was a series of papers by Alfred Nier in 1938 and 1939 that brought together the requirements of the modern instrument, in particular measuring the ion currents electronically. Of particular importance was his discovery of 204 Pb, which provided a hint that there was a primeval lead component in addition to the radiogenic one. Mass spectrometry has since seen an unbroken series of technical advances to the present day, with machines five or ten years old requiring significant renovation or replacement. Tuve entered DTM into this kind of research in September 1950 by adding to the staff L. Thomas Aldrich, a student of Nier's, followed a few months later by George R. Tilton (Figs. 21.1 and 21.2). The project was a collaboration with the Geophysical Laboratory, represented by Gordon L. Davis. Aldrich and Davis saw in the decay of 87 Rb into 87 Sr a good pair for study because there was but a single parent and daughter. Furthermore, both elements are easily ionized thermally, a simple technique that worked by depositing microscopic samples on a filament that was subsequently heated to serve as the source of ions in the spectrometer. Only a few of the chemical elements can be so ionized efficiently, although special techniques, whose inscrutability has led them to be called "witchcraft," have extended the method beyond what was possible in 1951. Thermal ionization has the important laboratory virtue of emitting ions with a distribution of energies characterized by the temperature of the filament, a spread that is negligible compared with the energies to which the ions are accelerated. These excellent experimental properties nevertheless left an experimental problem: 87 Rb and 87 Sr could not be resolved from one another by the spectrometer, and the standard chemical extraction techniques could not separate the two elements satisfactorily. The solution to this came from an unexpected source, the Department's biophysics group.
They had begun to use ion-exchange resin for separating a variety of chemical species, and Aldrich tried the technique on his inorganic materials, achieving a complete separation of rubidium and strontium. In order to use the measurement of 87 Sr for the age determination of a mineral crystal, one had to measure accurately the amounts of elemental rubidium and strontium present. Given the very small sample sizes this was impossible using normal chemical methods, but another method, called isotope dilution, came forward at about the same time and made the determination relatively simple. It became possible to obtain from the Oak Ridge laboratories highly enriched isotopic samples, the product of the huge mass spectrometers originally used to separate the isotopes of uranium for the atomic bomb. By adding a measured "spike" of isotopically enriched rubidium and strontium to the rock sample at the time of dissolution, the mass spectrometer was presented with rubidium that was a mixture of the natural element plus the spike. From the measured spectrum it was simple to calculate the ratio of the
natural to the spike, and consequently the amount of rubidium in the rock, or even in a crystal of the rock. This combination of techniques carried with it an internal check on its validity. The field geologist can recognize that a given rock must have solidified from a common melt. The individual crystal grains held varying amounts of rubidium, a minor component of the mineral structure. A plot of the decay product 87 Sr (normalized to 86 Sr) obtained from a group of mineral separates against the amount of the parent 87 Rb (also normalized to 86 Sr) in each mineral must be a straight line, if the determinations are correct, and the slope of the line establishes the age. Failure to form a straight line can have various causes, such as the rock having suffered alteration or metamorphism, but all mean the age cannot be trusted. Thus within a couple of years a laboratory method for determining accurately the age of rocks had succeeded, although the use of these skills required more than ordinary attention to the physics and chemistry of rocks. The Rb–Sr method is easily disturbed by geologic processes, and the U–Pb method proved not only to be much more robust but also to be the "gold standard" of isotope dating, to which all other systems are referred. There was an initial laboratory problem of freeing the samples from the ubiquitous lead contamination of the laboratory and the reagents in order to be able to run such small samples, but here again the mass spectrometer prevailed by allowing the chemical procedures to be checked by ultrasensitive isotope dilution. It was known by then that the eventual decay products of 238 U, 235 U and 232 Th were the lead isotopes 206, 207 and 208 respectively. Tilton had determined uranium–lead ages of granites for his dissertation before coming to DTM and followed this technique through his career. A troubling problem was the evidence that radiogenic lead was not being completely retained.
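The two laboratory steps described above chain together naturally: isotope dilution fixes the amount of rubidium, and the isochron slope fixes the age. In the sketch below the decay constant is the modern value, while the spike composition and mineral ratios are invented for illustration; none of the numbers are DTM data.

```python
import math
import numpy as np

# Step 1, isotope dilution (invented numbers): a spike highly enriched in
# 87Rb is mixed into the dissolved sample, and the measured 85Rb/87Rb
# ratio of the mixture fixes the amount of natural rubidium present.
r_nat = 2.593        # 85Rb/87Rb of natural rubidium
r_spike = 0.005      # 85Rb/87Rb of the enriched spike (invented)
n_spike = 1.0e-9     # moles of spike 87Rb added (invented)
r_mix = 0.80         # measured 85Rb/87Rb of the mixture (invented)

# From r_mix = (r_nat*n + r_spike*n_spike) / (n + n_spike), solve for n:
n87_natural = n_spike * (r_mix - r_spike) / (r_nat - r_mix)
print(f"natural 87Rb in sample: {n87_natural:.2e} mol")

# Step 2, the isochron: cogenetic minerals plot on a line in
# (87Rb/86Sr, 87Sr/86Sr) whose slope is exp(lambda * t) - 1.
lam = 1.42e-11                            # 87Rb decay constant per year
t_true = 2.7e9                            # invented age (years)
rb_sr = np.array([0.5, 1.0, 2.0, 4.0])    # 87Rb/86Sr of four minerals
sr_sr = 0.703 + rb_sr * math.expm1(lam * t_true)

slope, intercept = np.polyfit(rb_sr, sr_sr, 1)
age = math.log1p(slope) / lam
print(f"isochron age: {age / 1e9:.2f} Ga, initial ratio {intercept:.4f}")
```

A scattered, non-linear plot in step 2 is exactly the failure mode the text describes: an altered or metamorphosed rock whose age cannot be trusted.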
This had two obvious causes: direct loss of lead from the crystal and indirect loss resulting from the transport of the intermediate products between the original uranium and thorium and the final lead, all having varied chemical properties and half-lives. Both kinds of loss were related to the integrated radiation damage to the crystal structure, although the use of zircons got around some of the problem because they are resistant to chemical change even at high temperature. Furthermore, when crystallizing they are receptive to having the zirconium replaced by uranium and are highly resistant to the incorporation of lead, thus beginning with very little primeval lead. (A curiosity, indeed a kind of paradox, is that zircons maintained at higher temperatures lose less lead than those near surface temperatures, contrary to expectation, because high temperatures heal the continuing radiation damage of the uranium decays.) Despite these attributes, lead is lost from zircons. The problem of lead determinations attracted the attention of George W. Wetherill, who had joined the Department in 1953. His approach was to
Figure 21.3 The U–Pb concordia diagram. The decay of the two naturally occurring isotopes of uranium gives rise to two independent chronometers. If the mineral sample remains closed to gain or loss of U and Pb, the plot of 206 Pb/238 U against 207 Pb/235 U forms the smooth curve marked “concordia.” Ages are indicated on this curve by numbers in millions of years: 200, 500, etc. If the mineral undergoes an event that causes it to lose Pb at 500 Ma the data will lie off the concordia and form a straight line, the discordia. Wetherill invented this manner of plotting data to determine mineral ages in 1955.
make a universal plot of 206 Pb/238 U against 207 Pb/235 U as functions of time. If no lead was lost from a given crystal, any measured data would plot on this curve, which he called "concordia." Discordant data, where lead has been lost, would lie below the curve, and because the two isotopes of lead would be lost in proportional amounts, the discordant data would form a straight line intercepting concordia at the crystal age. The combination of the zircon's characteristics and the two quite dissimilar decay rates of uranium has given uranium–lead dating by the concordia method the highest accuracy, when done by artisans trained in selecting samples and performing the analyses (Fig. 21.3). The solution of the problem of making accurate uranium–lead ages brought another problem to the fore. Some rocks were more suited to 87 Rb/87 Sr dating and to the 40 K/40 Ar method that had been added to the group's techniques, the latter possible owing to the availability of the isotope 38 Ar, but the decay rates of both 87 Rb and 40 K were poorly known compared with the decay rates of uranium. To circumvent these problems the group
obtained a suite of rocks of varying ages from which lead, strontium and argon determinations were made and determined the decay rates for rubidium and potassium using the much better known values for uranium derived from single-crystal pegmatitic uraninite. This was followed by a counting experiment to confirm the rate of potassium decay because there was suspicion of loss of argon in the rock samples. 40 K decays through two branches, 89% by beta emission to 40 Ca and the remainder by electron capture to 40 Ar, the latter accompanied by an easily counted gamma ray. This experiment indicated that for mica samples the decay rate came out about 5% higher than that determined by comparison with uranium, confirming the suspicion of argon loss. Unfortunately, the accuracy of this determination reflected the uncertainty in the knowledge of the branching ratio, something not satisfactorily known to this day. The next step for the group was the obvious one of making a global assessment of the Pre-Cambrian rocks, which they initiated in 1957, briefly interrupted by the group's participation in the Andes seismic expedition. By then they had techniques that gave reproducible values to a few percent; they had surveyed the various minerals that had shown themselves to be most suitable for dating and had settled on zircons for U–Pb, biotites for K–Ar and Rb–Sr and feldspars for Rb–Sr. For reasons of convenience, concentration was initially centered on North American rocks. Similar work was carried out by the Lamont Geological Observatory, the University of Minnesota, the University of California and MIT, all of which were doing isotope dating. A large area was found with ages exceeding 2500 Ma from western Quebec through Saskatchewan and northern Minnesota. Another large area in the western United States contained rocks of about 1350 Ma. Rocks from the Appalachians showed significantly younger ages, ranging from 310 to 1150 Ma.
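The branching discussed above enters the K–Ar age equation directly, since only the branch producing 40 Ar is recorded by the mineral, which is why the poorly known branching ratio mattered so much. A sketch using the modern conventional constants and an invented argon/potassium ratio (not data from the source):

```python
import math

# K-Ar age: only part of the 40K decays yield 40Ar, so the partial decay
# constant for that branch appears alongside the total.  Constants are
# the modern conventional values (per year); the ratio below is invented.
lam_total = 5.543e-10     # total 40K decay constant
lam_ar = 0.581e-10        # branch producing 40Ar (electron capture)

def k_ar_age(ar40_over_k40):
    """Age in years from the measured radiogenic 40Ar/40K ratio."""
    return math.log(1 + (lam_total / lam_ar) * ar40_over_k40) / lam_total

print(f"{k_ar_age(0.01) / 1e6:.0f} Ma")  # invented ratio of 0.01
```

An error in the branching ratio shifts lam_ar, and with it every age computed this way, which is the uncertainty the counting experiment was meant to pin down.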
The work expanded into an ever-widening global study. Wetherill accepted a faculty position at UCLA in September 1960, the same month that Stanley R. Hart joined the group as a Carnegie Fellow. By that time Tilton had transferred to the staff of the Geophysical Laboratory, continuing with Davis the close collaboration of the two departments. As the global surveys began pointing directions for geological understanding, Hart began to introduce use of the group’s techniques for studying the chemistry of early rock formation. The entry in Year Book 63 was entitled “Isotope Geology” with no reference to geochronology. The section was soon to call itself the “Geochemistry Section.”
22 RADIO ASTRONOMY
In 1932 Karl Jansky was studying the noise in a 20 MHz radio receiver at the Bell Telephone Laboratories and observed a source that could not be ascribed to the amplifier, atmospheric discharges or man-made interference. He then constructed a directional antenna that could be rotated about its vertical axis and, after a long series of observations, showed the noise to be extraterrestrial and to come from the constellation Sagittarius. Astronomers were perplexed as to what to do with this, and no one continued such study except Grote Reber, a radio amateur, who was able to have his sky survey published in the Astrophysical Journal in 1944. During the war radar engineers working with steerable meter-wave equipment encountered Jansky's "cosmic noise" among other sources. The paths of meteors entering the atmosphere left ion trails that could be tracked by radar, and Bernard Lovell used radar signals reflected from them to show that the meteors were from the solar system, a question open since the days of Newton. The radar engineers also rediscovered strong radio emissions from the Sun during sunspot activity, something observed before the war by amateurs whose work was not appreciated by the learned. Dutch astronomers, inspired by Reber's article, pointed out the theoretical possibility of observing a hyperfine transition of atomic hydrogen in space and saw their predictions verified after the end of hostilities. Given the Department's interest in radio studies of the ionosphere, it is rather surprising that Tuve did not enter the field until 1952. The impetus came from the Mount Wilson Observatory, where the reports from England and Australia of "radio stars" awakened serious interest, but for which they had no observational capabilities. It was evident that radio was disclosing strange and wonderful events in the cosmos, and Carnegie's DTM was the obvious candidate for their investigation.
The beginning may have been a bit delayed, but the field was entered with characteristic zest, with studies of active radio sources on the Sun, radio star scintillations and the 21 cm hydrogen line all undertaken within 18 months; the cataloguing of radio point sources was added within a year. The personnel involved were Tuve, Tatel, Wells and new staff member Roy Britten, who quickly left to join the biophysics group, his place being taken by Bernard F. Burke and John Firor. Francis Graham-Smith, from
Figure 22.1 A 7.5 m German Würzburg radar antenna mounted equatorially for observing 21 cm radiation from neutral hydrogen in our own galaxy. Also shown are the Experiment Building (center), its Annex and at the far right the Atomic Physics Observatory, the pressure-tank Van de Graaff accelerator. At the left is the house for protecting a searchlight that was used to study dust in the atmosphere. Paul Johnson designed and built the structure holding the old radar dish. 27 July 1953.
Cambridge University (later Astronomer Royal) and a distinguished pioneer in the new field, provided guidance for the tyros and was also a welcome visitor with the Carnegie people in Pasadena. The services of Ecklund and Paul Johnson, both with excellent knowledge of mechanics and electronics, weighed heavily in the scales. The Kensington field station had had to be vacated, so land was purchased near Derwood, Maryland in April 1946 for the ionosphere and cosmic-ray work, and it became the home for much of DTM's radio astronomy. The first experiments were exciting. A simple interferometer for 207 MHz (1.45 m) with two pairs of dipole antennas spaced 50 wavelengths apart, suitable for locating active regions on the Sun to within 1 minute of arc, was quickly put into operation. Impressive records showed clean interference fringes, sometimes with large temporal bursts. The following year they repeated a study of the extent of the solar corona by observing the passage of the radio source from the Crab nebula through the corona of the quiet Sun. The refractive scattering of the source caused its normal size of 5 minutes of
Radio astronomy
165
Figure 22.2 Data showing that an intense point source of radio noise originated from Jupiter. Before this remarkable discovery, it had been assumed that the planets would not be sources of radio emission. Bernard Burke and Kenneth L. Franklin had been checking the reproducibility of their data and found a wandering source. 1955.
arc to expand to over a degree even when as far removed on the sky as 25 solar radii. These observations of point or quasi-point sources caused a dipole array to be laid out at a site along River Road near Seneca, Maryland in a form known as the Mills Cross, built under the supervision of its designer, B. Y. Mills from the Radio Physics Laboratory, Sydney, Australia. It drew on standard radar design. One line of dipoles defined a fan-shaped beam that could be traversed by altering the lengths of the coaxial cables connecting the individual dipoles. With two orthogonal dipole arrays the two fans overlapped and fixed the direction of a pencil beam. The Mills Cross was the instrument of a major discovery in early 1955. Burke and visitor Kenneth L. Franklin were looking for objects at 22.2 MHz in the vicinity of the Crab nebula and noted one with intense bursts of energy, some driving the recorder off scale. Its curiously extreme temporal variation caused a repetition of the measurements, which disclosed the movement of its position on the sky, identified to be that of the planet Jupiter (Fig. 22.2). Searches for similar emission from Venus, whose thick atmosphere it was thought might produce electric discharges of the kind presumed to be the cause of the Jupiter emissions, yielded nothing.
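The one-arcminute positional accuracy quoted earlier for the 207 MHz solar interferometer can be checked with elementary diffraction arithmetic; this calculation is an editorial illustration, not one from the source.

```python
import math

# Fringe geometry of a two-element interferometer: the fringe period on
# the sky is roughly one wavelength over the baseline.
c = 2.998e8                  # speed of light, m/s
f = 207e6                    # Hz
wavelength = c / f           # about 1.45 m, as quoted in the text
baseline = 50 * wavelength   # the 50-wavelength dipole spacing

fringe_rad = wavelength / baseline           # 1/50 radian fringe period
fringe_arcmin = math.degrees(fringe_rad) * 60
print(f"fringe period ≈ {fringe_arcmin:.0f} arcmin")
# Locating a source to 1 minute of arc thus means reading the fringe
# phase to a percent or two of a period, plausible for clean fringes.
```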
The technique of making accurate position measurements with the Mills Cross proved difficult to master, and it was not until 1960 that they generated a table of the right ascensions of 19 radio stars from the Cambridge catalogue with agreement to a few seconds for strong sources. Although an interesting accomplishment, it could hardly be compared with the few hundred sources located in right ascension and declination by the huge Jodrell Bank 75 m diameter steerable dish that went on line in 1957 with incomparably higher antenna gain than a Mills Cross, so DTM suspended observations of radio stars. Similarly, solar observations that resulted in much data but little clarity were abandoned at the same time. In November 1954 a meeting of the National Science Foundation’s Advisory Panel on Radio Astronomy was held at the Institution’s headquarters on P Street. This was the culmination of a series of meetings during the year to discuss government support for radio astronomy with particular emphasis on a large steerable dish of the kind then being built in England at Jodrell Bank. Central to the discussions was a proposal by Associated Universities, Inc. to organize what was to become the National Radio Astronomy Observatory (NRAO) in Green Bank, West Virginia. Associated Universities had come into being as a corporation to operate a laboratory for high-energy physics for universities in the northeast, and its dynamic President was Lloyd V. Berkner, a former DTM staff member active in ionosphere work. Berkner’s proposal, which requested $105 000 for planning, took an approach to which Tuve was strongly opposed. Tuve saw the need for a shared observatory with the best equipment but did not like it being organized from the top down as a corporation. 
He did not like the idea of the equipment being designed by engineers who would not face the realities of its use, and suggested that one of the interested parties, possibly Harvard, design the observatory and make it available to others. Berkner’s approach prevailed, and NRAO resulted.1 This dispute between two former colleagues had begun in 1948 when Berkner, on resuming his position as leader of the Department’s ionosphere research, wanted to make use of government funds to expand the scope of his program. The funds would have been obtained easily, given Berkner’s scientific status and the interest in the subject by military and intelligence organizations, but Tuve refused to allow him to tap such sources and was backed by Bush. Tuve saw that “under any system of Government grants there is necessarily a sub-structure of guidance and broad direction of emphasis by financial controls.” Berkner saw important scientific needs for large amounts of money and had experienced first hand the success of such organized technical efforts. Tuve, although of proven competence in controlling a large technical organization from his proximity-fuze work, saw science as the result of individual curiosity and initiative and participated at a close level with his associates in the laboratory. Berkner saw immense opportunities for organizational efforts for which he had developed proven ability, and he had
Radio astronomy
167
found administration productive and satisfying. He soon left the Department as a consequence of Tuve's position. They repeated their debate on other occasions in the councils of the scientific mighty over the next decade, a grand debate in which they set forth their opposing philosophies of big and small science.2

During the war German forces had fielded an excellent air-interception radar that used a paraboloid of 7.5 m diameter, the Würzburg Riese (giant), several of which had come into the hands of the Army Signal Corps. These superbly engineered dishes were quickly placed into service in the Netherlands, Britain and the United States for radio astronomy, and Tuve obtained one through the Bureau of Standards and had it mounted equatorially on the Department grounds in 1952.

Emission with a wavelength of 21 cm from galactic clouds of atomic hydrogen had been discovered in 1951 by Edward Purcell and Harold Ewen and by a Dutch group inspired by Hendrik van de Hulst, who had predicted it. Observation provides maps of its intensity over the sky, but as the clouds are in motion their speed relative to the observer produces a Doppler shift in the frequency that is easily measured with radio heterodyne techniques. Owing to the dispersed nature of galactic hydrogen, the signals received have a spread of frequencies, so the usual heterodyne method would be very tedious to use, especially considering that the sky had to be covered a square degree at a time. Thus the first order of business was the construction of a receiver to observe several different frequencies simultaneously. Here entered a problem that the group never succeeded in overcoming: every step they took to improve hydrogen-line instrumentation seemed to lead to the immediate need for yet another step because the science for which the previous apparatus had been designed had been accomplished elsewhere.
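The frequency spread that such a multichannel receiver had to cover can be illustrated with a short calculation (the cloud velocities used here are assumed for illustration and are not taken from the text):

```python
# Illustrative: the Doppler shift of the 21 cm hydrogen line for
# typical galactic cloud velocities, showing why a receiver covering
# several frequencies at once was needed.

C = 2.99792458e5        # speed of light, km/s
F0 = 1420.405751768e6   # rest frequency of the 21 cm line, Hz

def doppler_shift_hz(v_kms):
    """Frequency shift (Hz) for a cloud receding at v_kms (km/s)."""
    return F0 * v_kms / C

# Clouds approaching or receding at 30 km/s span roughly +/- 142 kHz
# about the rest frequency.
for v in (-30.0, 0.0, 30.0):
    print(f"v = {v:+6.1f} km/s -> shift = {doppler_shift_hz(v)/1e3:+8.1f} kHz")
```

A single narrow-band heterodyne receiver would have had to be stepped across this whole band at every pointing, which is what made the multichannel approach attractive.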
The sky surveys of radio hydrogen with a Würzburg were completed by the Dutch and Australian groups first, so the DTM dish soon served as a test bed for new electronics and as a guide for the design of an 18 m antenna to be completed at Derwood in 1960. This splendid dish and the 54 channel receiver for Doppler spectra were in turn outpaced by the nearby University of Maryland, whose people used equipment at Green Bank (Fig. 22.3). As soon as the 18 m dish was completed, construction began for a 30 m equatorially mounted dish of a clever design to be erected at La Plata, Argentina in cooperation with the University of Buenos Aires, the costs of which Tuve covered in great part with NSF funds (Fig. 22.4). (Tuve was not averse to using NSF funds for large projects that involved DTM as a collaborator, so long as the funds did not alter life at the Broad Branch Road location.) This project fitted into Tuve's efforts, already begun in geophysics, of helping South American scientists develop by utilizing local advantages, in this case the southern sky, as earlier it had been the Andes. Two further 30 m dishes, capable only of transit motion, were made for Derwood and Argentina so that each site could operate a wide-based interferometer. Little science came from the La Plata observatory, in no small
Figure 22.3 The Derwood 18 m equatorially mounted dish and the DTM “portable radio receiver” housed in a truck trailer. This receiver was used with the 90 m transit dish at the National Radio Astronomy Observatory to observe the rotational structure of the galaxy M31. Seen inside are Charles Little and Bernard Burke. 1964.
Figure 22.4 The 30 m equatorially mounted dish constructed for a collaboration with Argentine radio astronomers at La Plata. Everett Ecklund designed and built this highly innovative antenna. Two more identical dishes were built but only for transit motion. Each was intended as the second element of an interferometer, one for La Plata, the other for a site near the 18 m at Derwood. 1965.
part because of the unsettled political conditions that prevailed in Argentina. Indeed, little science came from the 18 m dish at Derwood either. During its 25-year operation only a single paper in a refereed archival journal resulted from its use, but it was a paper of exceptional significance. It was the long-deferred all-sky hydrogen-line survey made by Tuve in retirement.3 His huge amount of data, provided through almost unlimited observation time, had the virtue of having been recorded with the same antenna and receiver (and by himself). The conclusion of his study was a strong statement that a survey of Milky Way hydrogen did not lead to an understanding of the structure of the galaxy, in contradiction to prevailing thought. No amount of ingenuity could relate the intensities and the Doppler spectra along the lines of sight to a unique map of the hydrogen. Such data, however carefully taken, were sorely lacking in information.

In anticipation of the completion of a 90 m dish at Green Bank in October 1962, the 54 channel Doppler receiver was mounted in a truck trailer to be ready for connection to the new antenna. In order to bring the largest dish possible within monetary and temporal constraints, this dish, generally
referred to as the “300 foot,” was mounted for transit motion only, which meant that the sky was continually moving through its beam, but the group adapted to this in an ingenious manner. Their object was to measure radio hydrogen in the nearby galaxy M31 in Andromeda. The large dish provided good, if fleeting, angular resolution of the galaxy, and Burke and Tuve, joined now by Kenneth C. Turner, mounted a feed at the focus of the dish that moved so that its 0.13° beam tracked the radio image in right ascension at fixed declination. This provided them with 150 seconds of observation time before the feed was quickly returned to its starting position for another point. This allowed an excellent map of the motion of the hydrogen clouds in this distant galaxy to be made, from which a detailed picture of the rotation of the galaxy, and thence its mass, was derived. (It should be noted that the problem of distance for the clouds along the line of sight, so troublesome for the Milky Way hydrogen, was not present here, as all of M31 is located at approximately the same distance. Furthermore, its velocity relative to the Milky Way was sufficiently different for its hydrogen to be easily identified.) This differential movement of the feed became standard for the dish.

Another use of the 90 m dish came to the group, one that proved to be rather spectacular in the means of data acquisition. There were conflicting interpretations of the continuum radiation from the sky that impelled the group to use the giant antenna to scan the entire sky available to it. The feed was fixed and equipped for 234 MHz (128 cm) giving a beam width of 1° between the half-power points; observation time was compressed by scanning the transit motion at a rate of 10°/min over a range of 50° in back-and-forth motion. Hour after hour the giant dish nodded to the north, then to the south.
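The benefit of the moving feed described above can be estimated with a short calculation (the declination and drift-rate figures here are assumptions for illustration, not values from the text):

```python
# Illustrative: dwell time of a FIXED feed on one point of M31 with a
# 0.13-degree beam on a transit telescope, for comparison with the
# ~150 s the moving feed provided.
import math

BEAM = 0.13                     # beam width, degrees
DEC_M31 = math.radians(41.3)    # declination of M31 (assumed value)
DRIFT = 15.0 / 3600.0           # sidereal drift, degrees of hour angle per second

# The drift rate projected on the sky shrinks by cos(declination).
rate = DRIFT * math.cos(DEC_M31)    # degrees of sky per second
fixed_dwell = BEAM / rate           # seconds a fixed feed holds one point
print(f"fixed feed: {fixed_dwell:.0f} s; moving feed: 150 s "
      f"(~{150.0/fixed_dwell:.1f}x more integration per point)")
```

Under these assumptions a fixed feed would hold a point for well under a minute, so the tracking feed bought several times more integration per point before resetting.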
By the end of November 1964 almost the whole sky visible to the instrument had been observed twice every half-beamwidth. Owing to interference the data proved difficult to reduce, but did show that the continuum came primarily from the galactic disc with only a hint of a large spherical halo.

The existence of NRAO began to exert an unfortunate pressure on the Department's radio astronomy. The Green Bank engineers put together excellent up-to-date equipment that became easy to use by outsiders, just as Berkner had wanted. Indeed, their only concerns were providing the “customers” with good service; and their monetary preeminence in instrumentation made competition by anyone difficult. These problems began to affect the staff. Burke left in October 1965 for a faculty position at MIT. John Firor had left in September 1961 for the High Altitude Observatory in Colorado. Heydenburg had resigned in 1962 rather than switch to radio astronomy half time. Wells, who did not get on with Tuve, went on leave in January 1960 to serve as Scientific Attaché at the Embassy in Rio de Janeiro and resigned in July 1962.
When it became evident that there was no future in using the 18 m dish for hydrogen, a major effort was made at the La Plata observatory. When operations there seemed stagnant, Bolton, who was by then Director, sent Kenneth Turner to act as observatory director in 1971–73, but without any rejuvenating effect. The 18 m dish with the 54 channel receiver was used by graduate students from the University of Maryland and Johns Hopkins University for instruction. The accuracy of its paraboloid led to an attempt to look at water-line emission around 1.35 cm. In 1971 Norbert Thonnard set about building the electronics for this purpose. His efforts were successful, but the project came to what was by then a characteristic end – it was outclassed by wealthier competitors.

Radio astronomy has remained at the Department to the present day, but its nature changed markedly when George Wetherill became Director in 1974. In shutting down the operation of DTM's radio telescopes he was acting in accord with President Abelson and most of the Department staff members directly concerned. The decision did not remove radio astronomy as a discipline for the Department but required that observational work be done elsewhere, a circumstance long the practice for optical astronomers. The 18 m and 30 m dishes operated in Washington were given away. The 18 m went to the National Geodetic Survey where it was mounted at Richmond, Florida; its superb structural stability made it highly prized as one of the antennas used for observing extremely small terrestrial motions from signals received from quasars,4 eventually measuring the motion of the Earth's tectonic plates. Unfortunately, its structural stability was powerless to prevent its destruction by a hurricane. The 30 m was given to the State University of New York at Albany. The construction of a much improved receiver and feed for the La Plata observatory was continued to completion at significant NSF expense.
The Würzburg dish, long a fixture of the grounds, was offered without success to radio amateurs and then cut up for scrap.

The course of radio astronomy at DTM presents a thoughtful person with a variety of puzzles and lessons. Tuve's entrance into the field six years later than the British and Australian radar veterans was caused by the discovery by those pioneers of discrete radio sources – radio stars as they were called at the time. This discovery obviously interested astronomers more than wide distributions of “cosmic noise” or even the possibility of a spectral line from hydrogen clouds, and the group at Pasadena naturally turned to Washington for possible observers. Tuve's response was prompt and enthusiastic, reinforced by Tatel, whose instincts for science paralleled Tuve's. Their entrance was a perfect example of exploratory science. No one knew what the field really contained, and their actions during the early years were perfect for mapping this new intellectual continent, but the establishment of NRAO marked the end of exploration. The general nature of what was to be
Figure 22.5 A comparison of rotational velocity measurements of galaxy M31 using optical and 21 cm Doppler shifts. The optical data (points) were taken from regions known to have strong hydrogen Balmer line emission; the radio data (curve) use all of the 21 cm hydrogen emission that is incident in the antenna lobe. The radio telescope used was the 90 m transit dish at the National Radio Astronomy Observatory with the DTM portable multichannel 21 cm Doppler receiver.
observed – discrete sources, continuum, hydrogen clouds, solar phenomena – had been established and their various equipment needs determined for at least a decade or so.

Unfortunately, Tuve saw NRAO as just the way he did not want to do science. It was rooted in his nature that a scientist had to have intimate contact with his instruments, and the idea that the instruments were to be designed by engineers hired by a corporation and operated by its technicians was abhorrent to him. As it turned out they were not faceless engineers with no serious connection to the science, as Tuve had feared, but devoted and skilled men such as John Findlay. Tuve was determined that his group were to be “hands-on” radio astronomers, and it was this determination that lay behind the decision to build from Carnegie funds the 18 m dish despite knowledge that it would be duplicated at Green Bank. (Its Green Bank counterpart was named in Tatel's memory.)

Viewed from the security of hindsight, the decision to build the 18 m was a costly mistake, one that marked the beginning of a downward spiral. The group's subsequent significant work was done at Green Bank, and the Derwood observatory became an ever-growing effort sink, one not compensated by its function in supporting the La Plata operations and one that
became a serious drain on the energies of young staff members. Yet the Department’s radio astronomy yielded three major achievements: the discovery of Jupiter as a radio source, the detailed mapping of the rotation of the large galaxy in Andromeda (Fig. 22.5), and an unanswered challenge to the use of radio hydrogen as a method of mapping the Milky Way. Furthermore, it provided fertile soil for the growth of optical astronomy from the seed of image intensification, bringing forth rich results later. It would have been a mistake for the Department not to have entered radio astronomy.
23 IMAGE TUBES
When Galileo pointed his telescope at the sky in 1609 he initiated an optical evolution that continues to the present and shows no evidence of termination. First came bigger lenses with reduced aberrations, then large reflectors that opened the discovery and investigation of objects other than stars and planets, and by the 1890s photographic emulsions had increased sensitivity by allowing the light to be integrated over long periods. The pace of invention has grown geometrically so that the passage of a generation brings startling changes to practitioners of the art. There was a decade and a half when a single device completely transformed observational astronomy, providing improvement that effectively increased the apertures of all telescopes by a factor of three. This device was the image tube, the development of which owes much to DTM.

By 1950 improving the sensitivity of photographic emulsions gave every impression of having reached a condition of diminishing returns, but hope seemed to reside in the ratio of two numbers: whereas it required about 1000 photons to produce a single blackened grain in an emulsion, a photoelectric surface will emit an electron for about every five incident photons. Inferior sensitivity was only one of the emulsion's flaws. Its response to light was non-linear; saturation limited its dynamic range; heavily blackened regions overwhelmed faint images nearby; but worst of all was an affliction called reciprocity failure, the failure of a long exposure to record a proportionately fainter image. In the emulsion the blackened grain was the end of the process, but with photoelectrons the process had just begun; surely ingenuity could so manipulate the electrons as to provide a superior image.

Television had begun as experiments for radio amateurs during the 1920s and had reached the stage of all-electronic, high-definition (405 interlaced lines) in 1936 when broadcast television was introduced in Britain.
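The ratio of the two numbers quoted above, which inspired the whole effort, amounts to simple arithmetic:

```python
# Arithmetic on the figures quoted in the text: an emulsion needs
# ~1000 photons per blackened grain, while a photocathode emits one
# electron per ~5 incident photons.
PHOTONS_PER_GRAIN = 1000
PHOTONS_PER_ELECTRON = 5

potential_gain = PHOTONS_PER_GRAIN / PHOTONS_PER_ELECTRON
print(f"potential sensitivity advantage: {potential_gain:.0f}x")  # 200x
```

A factor of order two hundred was the prize, provided the photoelectrons could be turned back into a recordable image without squandering it.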
By 1950 television was beginning to grasp the attention of the world, and it was based on the manipulation of electronic images. Astronomers took note of A. Lallemand (Observatoire de Paris), who had begun experimenting with an image tube before the war and had demonstrated that the technique could produce impressive plates. Some had been using the photomultiplier 1P21 to measure the intensity of faint point sources and wished they could do more.
In his system the image was focused onto a semi-transparent photocathode from which the photoelectrons were electrostatically focused onto a photographic plate placed in the vacuum system. The plate had to be loaded and the vessel pumped to high vacuum before the photocathode could be evaporated, and since the emulsion emitted gases at room temperature that destroyed the photocathode, the plate had to be cooled to −100 °C. When the exposure was over the vacuum was broken, the plate extracted and developed, and the whole thing begun again for the next exposure. Later versions had provision for loading more than one plate into the vacuum. These were conditions that no working astronomer could tolerate, but nonetheless it offered a starting point.

With time, techniques derived from the development of the present television camera, which can transfer data digitally to a computer, came to dominate astronomy, but the television camera of 1950, the image orthicon, suffered from noise and an imperfect image memory that excluded it from consideration. Had those faults not eliminated it, its potential strength as the input to a computer would have been ineffectual, owing to the absence of a suitable computer. Thus the two working electronic-image techniques of 1950 – the Lallemand tube and the orthicon – were inadequate.

Serious efforts toward exploiting electronic imagery took the form of a committee called into being in February 1954 by Vannevar Bush, President of the Institution; it had four members representing four organizations: William A. Baum (Mount Wilson and Palomar Observatories), John S. Hall (Naval Observatory, but soon of Lowell Observatory), Ladislaus L. Marton (Bureau of Standards) and Merle A. Tuve, Chairman (DTM).
The committee decided that an astronomer could not be expected to operate a highly technical physics experiment at the telescope, so Lallemand's approach was ruled out from the start; the image tube, as the device was soon called, must be no more difficult to employ than an additional lens system and must be manufactured commercially. It was also realized that electronic imagery was becoming an advanced industrial art and that competition between companies offered obvious advantages over having a laboratory associated with the committee, such as DTM or the Bureau of Standards, design a prototype to be fabricated commercially. So the committee decided to contract the development to companies with electron-optical capabilities and test their experimental models in laboratory and on telescope. Such a course of action was a new way of doing science and an expensive one at that; furthermore, the expense was open-ended and had no guarantee of success. That neither Bush nor the committee hesitated shows the influence of their recent wartime experiences. The committee arranged at the outset for a grant of $50 000 from the Carnegie Corporation to start an exploratory program, but these funds were quickly used up and the National Science Foundation (NSF) accepted the long-term funding of the project. NSF funds
were not used to pay salaries or to cover work done in the Department's shops nor was overhead charged.

Four design approaches were undertaken, referred to as (1) the cascaded tube; (2) the mica-window tube; (3) the Lenard-window tube; and (4) the barrier-film tube. The first three used magnetic focusing, the fourth electrostatic. Electrostatic focusing was natural to try because it had had success during the war for night vision equipment, something with evident similarity to astronomy's needs. It was simple to construct and use but suffered from spherical aberration that limited sharp focus to a small circle. Magnetic focusing was simple and had neither inherent aberrations nor limitation on image size but was clumsier to implement. (The magnetic lenses used in electron microscopy are different and saw no use with image tubes.)

Consider a planar photocathode in uniform magnetic and electric fields aligned perpendicular to its surface. Photoelectrons from a point on the cathode leave it in all directions but come together a distance downstream that is easily calculated. Thus an electronic image on the photocathode is transferred to another plane but with the electrons having received significant amounts of energy from the electric field. The region of sharp focus is almost the same size as the cathode.

In the cascaded tube a thin mica sheet was placed at this image plane with a fine-grained phosphor on one side and another photocathode on the other. An energetic electron generated an intense scintillation in the phosphor that was then amplified into a large number of new electrons, which were in turn focused onto another phosphor. This yielded a highly amplified light image that could be transferred to an emulsion with optical lenses. The phosphors were coated with aluminum to prevent light feeding back to the cathodes.
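The “easily calculated” focal distance can be sketched as follows; this is the standard uniform-field result, reconstructed here rather than taken from the text. The transverse motion of an electron of mass m and charge e in the axial field B is a cyclotron circle whose period does not depend on the transverse velocity, so all electrons leaving a given point return to the same field line after each period, while the axial motion is uniformly accelerated by the electric field E:

```latex
% Cyclotron period, independent of transverse velocity:
T = \frac{2\pi m}{eB}
% Axial motion under uniform acceleration eE/m places the first sharp
% image (for photoelectrons with small initial axial velocity v_{z0}) at
z_f = v_{z0}\,T + \frac{eE}{2m}\,T^{2}
    \;\approx\; \frac{2\pi^{2} m E}{e B^{2}}
```

The focus is thus independent of the direction in which the photoelectron left the cathode, which is what makes the scheme free of the aberrations that plagued electrostatic focusing.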
The mica-window tube substituted for the phosphor-cathode sandwich a thin mica sheet, strong enough to support atmospheric pressure and with a phosphor coating, against which a photographic film was pressed. The shape of the window was rectangular as it was intended for spectra. The Lenard-window tube was much like the mica-window tube except that the thin mica window had no phosphor but was thin enough to pass the electrons into an emulsion pressed against it. This provided excellent resolution resulting from the direct recording of the electrons in the emulsion, electronography as it was called. Its window was also rectangular. The barrier-film tube sought to make the Lallemand technique practical by introducing the photographic plate into the vacuum through an air lock. A thin film kept the low-grade vacuum of the air lock from poisoning the photocathode, although it was not strong enough to hold atmospheric pressure. Experiments with this mode were conducted by W. Kent Ford at the University of Virginia in cooperation with Hall and Baum. In September 1957, Ford became a DTM staff member and began to assume responsibility for testing
tubes. By then five electron-tube manufacturers were constructing various kinds of prototypes: RCA, ITT (formerly Farnsworth Electronics), General Electric, Westinghouse and Midway Laboratories. Tests on telescopes were done primarily with the Naval Observatory's 40 inch at Flagstaff, Arizona and the nearby Lowell Observatory; in 1957 a 24 inch telescope that could be devoted to testing image tubes was given to Lowell by Ben O. Morgan of Odessa, Texas.

By 1957 the barrier-film tube had been rejected as too difficult to use for routine measurement, and evidence was growing that magnetic focusing was superior to electrostatic. Richard G. Stoudenheimer and associates at RCA had produced a cascaded tube with magnetic focusing that showed promise, having a 30 fold gain in speed over a good emulsion and a resolution of 15 line pairs per millimeter. It was a sealed-off tube that was very handy. It had problems with non-uniform phosphors, but impressed everyone that these and other difficulties could be overcome. Professor J. D. McGee, who had been given responsibility in 1933 for cathode-ray tube design in Marconi–EMI television, made remarkable advances at the Imperial College of the University of London with a Lenard-window tube, and the committee supported McGee's work by providing a fellowship for one of his students. So a race was on.

By 1959 the 24 inch Morgan telescope became busy examining various tube designs. Of special interest was a tube with good response to infrared around 1 μm made by ITT Laboratories. It had a phosphor-coated mica window for contact with film and was successful when used with a spectrograph. The Smithsonian Astrophysical Observatory reintroduced the image orthicon in a form devised by General Electric that had improved image integration with readout scans taken at intervals ranging from a small fraction of a second to a few seconds. The Army Engineer Research and Development Laboratory entered a similar orthicon system, and General Precision, Inc.
entered a vidicon (photo-conductive) type television camera. All were tested by Baum on the 20 inch at Mount Palomar.

Confidence in the cascaded tube caused Ford to examine the best form of the optics needed to transfer the phosphor output to a photographic plate, which took the form of two f/1.3 lenses mounted front to front to give unity magnification. He also built a spectrograph in the DTM shop, made extraordinarily strong because of the weight of the magnetically focused tubes anticipated. It was used with a 6000 line per cm grating with first-order blaze at 1 μm, suitable for stellar spectra and exploration of the infrared (Fig. 23.1).

The years 1961 and 1962 were years of decision. ITT and RCA laboratories had advanced magnetically focused two-stage cascaded and mica-window tubes, and McGee produced impressive resolution with the Lenard-window tube. The problems remaining with the cascade tubes were spurious emission,
Figure 23.1 W. Kent Ford testing image tubes. Prototype tubes were manufactured by various companies and sent to Ford at DTM for evaluation. One of these, which was not adopted, is seen on top of the electronics chassis. Ford examines the photograph of a pattern designed to test resolution and fidelity of the tube. 1962.
phosphors that were neither sufficiently fine grained nor homogeneous, poisoning of the phosphors from the materials of the photocathodes, and deterioration of photocathodes. These defects were to be reduced to acceptable limits within a year. An unexpected problem appeared when image tubes were used for long exposures on telescopes: the movement of the tube through the Earth's field altered the focusing field, causing measurable shifts in the image. This was corrected by shielding the focusing magnet.

A great advantage of the cascaded tube for the astronomer using it was the similarity to normal telescope operation; plates were handled just as if there had been no image tube. The stability of the glass plate over film worked against the mica-window and the Lenard-window tubes that required the flexible medium. Given this, the final system design was adjusted so that on average a single blackened grain was created for each photon, as obviously more than one was redundant and failure to form a grain meant loss of information. Despite the best lenses, the transfer optics from the output phosphor to the recording plate was inefficient. The magnification of the tube was fixed by magnetic focusing to be unity, so in attaining the photon-blackened-grain condition the tube had to have enough light gain to compensate for the loss in the transfer optics.

By 1963 RCA had fabricated more than 100 of its C70056 cascade tube, continually making modifications, and number 107 met the demands of the committee. With new transfer optics, using two f/1.8 lenses, resolution of 30 line pairs per mm was achieved at a “speed gain” of over 10, which meant the same information could be recorded ten times as fast as without the tube. This in effect increased the aperture of a telescope using the tube by slightly more than a factor of three. In 1964 the committee decided to use the RCA cascade tube type C33011 for distribution to observatories.
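The equivalence between speed gain and aperture quoted above follows from the fact that light gathered scales as the square of the aperture diameter:

```python
# Light gathered scales as the square of the aperture diameter, so a
# "speed gain" of g is equivalent to enlarging the aperture by sqrt(g).
import math

def aperture_factor(speed_gain):
    """Equivalent aperture enlargement for a given speed gain."""
    return math.sqrt(speed_gain)

print(aperture_factor(10))  # ~3.16, "slightly more than a factor of three"
```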
Support for research on other tube types, specifically the Lenard-window tube, was not curtailed, but the C33011 had a level of performance that would be difficult to exceed for a number of years. RCA delivered twenty of them, and Ford prepared a tested, operating system that included magnetic focusing, a high-voltage supply, transfer optics and associated hardware for each observatory, installing the first system in February 1965 on the Yerkes Observatory's 40 inch refractor. Installation followed at Kitt Peak, Lick, Lowell and Mount Wilson. A Carnegie–NSF Allocations Committee was formed to decide on the distribution of the remaining tubes. The complete image-tube systems were furnished through NSF funds and installed by Ford. When he departed an observatory the local astronomers were using the apparatus routinely.

By this time Tuve decided that future development at the Department should be done with the guidance of an astronomer and added Vera C. Rubin as a Staff Associate in April 1965. Rubin approached the project by planning observations of galaxies, objects of sufficiently low brightness to challenge
Figure 23.2 Vera Rubin adjusting the image tube focus on a DTM spectrograph that is mounted on the Kitt Peak 84 inch telescope. The clothing indicates the observing conditions of the time. Circa 1965.
image-tube capabilities. Her appointment also marks the beginning of optical astronomy as a research discipline for the Department (Fig. 23.2). Preceding her, Ford had been helped by Alois Purgathofer, who had assisted him in the early installations and who returned from time to time as a visitor to design the system for the infrared tube that RCA supplied. By the end of the NSF funding 34 systems had been supplied to observatories around the world, in the process transforming Ford into a seasoned traveler.

The robustness of the Carnegie image tube, as the C33011 was called, is revealed by the low incidence of failure and breakage. Of the 45 standard tubes tested at DTM, six were known to have failed by 1970 owing to loss of cathode sensitivity or intolerable increase in background, and two were damaged by accident.

In addition to improving the light gain of telescopes, the image tube also improved the quality of the data obtained. In extracting spectrometric data from plates exposed directly to light, one has the problems that the response of the blackening is dependent on wavelength and that calibration to light of varying wavelength is difficult. When used with an image tube the plate is blackened with the same phosphor light regardless of the wavelength, and this
darkening can be related to light intensity through the more easily measured response of the photocathode. The non-linearity of the emulsion to phosphor light was calibrated by a phosphor made uniformly luminous with the beta particles from a film of material impregnated with 14C, furnished through the kind offices of the biophysics section. Although this improved matters, it was inferior to counting the photons in the spectral lines, and accomplishing this was the last part of the image tube development done at the Department.

Ford approached the problem of digital recording in 1969 by noting that individual scintillations persisted on the phosphor for several milliseconds. He replaced the plate holder of the system with an image dissector, a device that functioned as a scanning photomultiplier. By selecting the sweep speed of the dissector so that there was a high probability of it responding to a scintillation before it had died away, one electronic pulse was obtained on the average for each scintillation. These pulses were manipulated electronically so that their height was made proportional to the amplitude of the scanning voltage and fed into a pulse-height analyzer, furnished through the kind offices of the nuclear physics section. The result was counts proportional to the number of photoelectrons generated in each spectral line and was the first instance of astronomical spectra being recorded digitally. Ford transferred the technique to Kitt Peak and it found further development at Lick.1

By the early 1980s a new television technique, the charge-coupled device, universally known as the CCD, swept all other imaging techniques in astronomy aside.
The product of the tremendous push in the market for better television and home movies, the CCD became the ideal detector for astronomy, as the design could be modified for integrating long exposures and as sensors for spacecraft.2 Virtually all observations are now recorded in digital form that allows unparalleled possibilities of data analysis with the computer, and the image tube has joined the large array of scientific instruments that have solved old mysteries and disclosed new ones, only to be surpassed by something better.
24 COMPUTERS
Computers were a vital component of the Department from the moment of its formation, but they were human beings, not machines, usage of the term having changed over the years (Fig. 24.1). They spent their days at mechanical devices capable of the four arithmetic operations, surrounded with a generous supply of mathematical tables. But it is only the electronic computer that comes to mind now when that word is used, and it is the electronic computer that has done more to change the way in which the Department goes about its daily tasks than any other single thing during the past century. From the vacuum-tube circuits of the 1946 ENIAC came the first commercial computer, UNIVAC, its first model sold to the Census Bureau in summer 1951. The cost of these and the rivals that soon appeared was beyond the means of the Department, which had no research at the time that demanded such unheard-of computing power, but the 1950s established that there was a strong market for computers, and IBM, after a successful try with their IBM 650, decided to exploit the transistor and announced a small machine for businesses in 1959, followed shortly by the IBM 1620, which had stored-program architecture and was intended for scientific work. Pressures to buy a computer during the early 1960s grew in the Department among the seismologists, who had experienced the value of such a machine in 1961 when computing travel times from models of the Earth's crust for the Gulf of Maine data. In fall 1965 the advantage of the enhanced computing power and the price of a machine capable of supplying it came together in the IBM 1130. It had a core memory (RAM for modern readers) of 16 kbyte (soon increased to 32), punched-card and keyboard inputs, a line printer and a 1 Mbyte disc, which was removable so as to allow each user to store his own data and programs independently of others. A plotting device and 9-track magnetic tape units were added later (Fig. 24.2). 
The effect of this machine on the Department was extraordinary. As expected, the seismologists made good use of it in calculating travel times from models and for locating earthquake sources, but the rest of the staff, who had demonstrated little enthusiasm before, found a remarkable number of uses for it. Liselotte Beach transferred Forbush's statistical analyses from laborious sheets of hand calculations to decks of cards and disc memory.
The Department of Terrestrial Magnetism
Figure 24.1 Data reduction was from the beginning an important part of DTM science. Above is a display of the forms used for recording and reducing data by hand calculation around 1920.
Geochemists found the calculation of isochrons significantly easier. Rubin, who had acquired a two-dimensional measuring engine for determining the exact position of the spectral lines from the 50 mm square plates that the image tube was providing, found the extraction of Doppler-shifted galactic velocities very much easier. Britten analyzed the data supporting the existence of repeated sequences on the 1130, and the nuclear physics group became a
Figure 24.2 The Department’s first digital computer, an IBM 1130. This unit was purchased in 1965 and remained in operation for a decade. It went through a number of upgrades: the core memory was increased from 16 to 32 kbytes, two 9-track tape units followed as did a plotter and assorted units to feed data directly into the machine. Its effect on all branches of science at DTM was great and unexpected.
major user for phase-shift analysis. Not a single section of the Department was untouched by the arrival of this device. Before any of these things came to pass there was a curious exhibition of the staff passing an examination on their home study of the computer language Fortran, followed by instruction in the operating system at the IBM offices. (An exception to those taking the examination was Britten, who announced that he had taken enough examinations and simply ignored IBM’s requirements, to the admiration of his colleagues.) The next step in computer usage was taken in 1971 by the geochemists for the operation of a mass spectrometer. The application of isotopes to the study of geochronology and geochemistry utilizes measurement of the ratios of their respective signals. The signal produced by the daughter isotope of a radioactive decay is compared with that of the parent, and the ratio determines the age. This ratio must be obtained from a repetitive series of ratio measurements because the ion beam is not constant with time. A second error is introduced because of the time required for the accumulation of data, which causes a mass fractionation of the sample on the hot filament. This
latter error can be corrected by measuring in the same series the ratio of two primeval isotopes, whose ratio is unaffected. This approach is capable of determining some ratios to a precision of a few parts in 100 000, but requires a very large number of repetitions. Initially this was done by recording the output graphically, with obvious limitations. Next came electronic circuits that converted the analogue voltages of the detector into digital signals, and these were printed in tabular form for data reduction, giving much improved results. In 1965 the Digital Equipment Corporation (DEC) began manufacturing a small, low-priced computer called the PDP-8. It was limited in its capabilities but could be made to operate certain kinds of machines. Out of this design came in 1970 an improved but still relatively inexpensive machine, the PDP-11, which quickly propelled DEC to the forefront of the trade and which was seized upon by mass spectrometrists for automatic recording and reduction of measurement runs. A PDP-11 was operating one of the Department spectrometers in 1971. It required tedious coding in machine language, as its structure was too limited for a high-level language. Advances in computer technology have resulted in mass spectrometers having had their computers replaced about every five years since then, making use of all possible methods of improving performance. This use was noted by the seismologists, who obtained one for extracting seismometer data from magnetic tapes. As DEC expanded the capabilities of their machines, the seismologists procured a more advanced model, the PDP-11/34, which allowed a high-level programming language and was shared by others in the Department through a number of terminals and inputs. It thus replaced the IBM 1130 after a decade of service. From this moment computers began to assume ever greater importance in the work of the staff, with terminals soon found in nearly every office. 
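The fractionation correction described above — normalizing a measured ratio against a primeval pair whose true value is fixed — can be sketched for the strontium system, where 86Sr/88Sr has the conventional value 0.1194. The linear law used here is a simplification (exponential and power laws are also used, and sign conventions vary), and the input numbers are purely illustrative:

```python
def fractionation_corrected_ratio(r87_86_meas, r86_88_meas,
                                  r86_88_true=0.1194):
    """Correct a measured 87Sr/86Sr ratio for mass fractionation.

    The deviation of the measured 86Sr/88Sr pair from its fixed
    natural value gives the fractionation per atomic mass unit
    (86 and 88 differ by 2 u); that factor is then applied to the
    ratio of interest (87 and 86 differ by 1 u).  Linear law.
    """
    eps_per_amu = (r86_88_true / r86_88_meas - 1.0) / 2.0
    return r87_86_meas * (1.0 + eps_per_amu)
```

Averaging such corrected ratios over a long repetitive series of measurement cycles is what yields the parts-in-100 000 precision mentioned above.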
Programs for editing made the production of manuscripts increasingly easy and reduced the need for secretaries to transform hand-written text into magnificently formatted pages. The PDP-11/34 was replaced by the VAX, the DEC machine based on the PDP-11 that reached for mainframe performance, and a large computer room was filled with tape units and large hard-drive discs, along with all manner of inputs and outputs except cards. In the 1990s the computer became the primary mode of communication for scientists over the internet, an unforeseen development that is transforming the world. To follow all this in detail through work stations and personal computers provides little of historical merit and would only serve to oppress the memory and perplex the attention of the reader.
25 EARTHQUAKE SEISMOLOGY
Two sources of seismic waves are used to study the Earth's structure: explosions and earthquakes. The magnitudes of the former are generally restricted to what can be obtained from one to several tons of chemical explosive, when the cooperation of mining and quarrying companies can be secured. Nuclear explosions can increase the energy release, but they are relatively rare and cannot form a reliable basis for an extended research program. Earthquakes provide waves capable of exciting seismometers located around the world, but the simple, straightforward results obtained from the explosion studies initiated in 1947 contrast with the difficulties faced in making use of the natural sources. The explosion work allowed the source to be accurately located in space and time and the recording instruments to be positioned to take advantage of this knowledge. Earthquakes are distributed primarily – and fortunately – in restricted regions, and seismometers must be located where stations can be operated and maintained continuously, awaiting the arrival of unpredicted waves. Such a network was beyond the capability of the Department, but it could make use of the World-Wide Standardized Seismograph Network (WWSSN) operated by the US Coast and Geodetic Survey, whose data were available to all. This network, a consequence of the Department of Defense's need to monitor nuclear explosions, proved to have enduring scientific value. The Department's earthquake research had been initiated by the creation of the small network of stations in the Andes that resulted from the 1957 participation in the International Geophysical Year. The explosion studies of the northern Andes had brought more questions than answers, so the idea of a network that made use of the many local earthquakes to provide an abundance of data had obvious merit. 
Such a distribution of stations required personnel for continuous operation and maintenance, and they were found among scientists from the region who embraced the project with alacrity. Very quickly 18 stations were set up in Peru, Bolivia and Chile. The principal scientists were A. Rodriguez (Universidad San Agustin, Peru), R. Salguerio and Father Ramon Cabre (La Paz, Bolivia) and Father D. German Frick (Antofagasta, Chile). These men and their students demonstrated serious interest and ability, heightened by the need for a better understanding of the deadly
phenomena they encountered in their daily lives. Important instruction in technique using earthquakes as sources came in 1959 from Toshi Asada, the first of many Japanese seismology postdocs. Although located physically near one another, the South American collaborators proved to be separated by national boundaries that constrained their activities and communication. In addition, the technical qualifications of some of those participating were insufficient for their duties. To offset this Tuve organized a seminar during January and February 1963 at the Department where instruction in the details of instruments and analysis was given, as well as the chance for those in attendance to work with one another (Fig. 25.1). The Department established a seismic analysis center at Lima with facilities for copying, archiving and redistributing data from the various stations and set up a radio network that allowed them to remain in communication with one another. In 1962 the interest of I. Selwyn Sacks reinforced earthquake studies, and the boundary of the core and mantle became his first object of investigation. Rays observed at stations on a great circle from the earthquake offer a simple method of observing this boundary. Pressure waves (P-waves) are diffracted by the core, traveling along its periphery until proceeding again to the surface. P-waves can be identified by the amplitudes of their arrivals, which decrease as the well-defined function of the separation of source and receiver that had been derived by Harold Jeffreys in the 1930s. Selecting arrivals at two stations so located that the ray had traveled a significant distance along the core for both allows the velocity at the base of the mantle to be determined. The Alaska earthquake of 28 March 1964 was received not only by the WWSSN but also on three temporary but suitably located stations operated in South America by DTM fellow Roger Sumner; the data yielded a velocity and a core radius. 
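The geometry described above lends itself to a simple differential calculation: once both rays travel along the core boundary, the extra travel time between two stations on the same great circle comes only from the extra arc length along the core, giving the velocity at the base of the mantle directly. The station distances and times below are hypothetical; the core radius is an approximate modern value:

```python
import math

R_CORE_KM = 3480.0  # approximate radius of the core (km)

def basal_mantle_velocity(delta1_deg, t1_s, delta2_deg, t2_s):
    """Velocity at the base of the mantle from two diffracted P arrivals.

    delta1_deg, delta2_deg : epicentral distances of the two stations
    t1_s, t2_s             : corresponding arrival times (s)
    The differential path length is the arc along the core boundary,
    so v = (arc2 - arc1) / (t2 - t1).
    """
    arc1 = math.radians(delta1_deg) * R_CORE_KM
    arc2 = math.radians(delta2_deg) * R_CORE_KM
    return (arc2 - arc1) / (t2_s - t1_s)
```

The same differential form explains why the method needs stations so placed that both rays run a significant distance along the core: only then does the extra path lie entirely at the depth being measured.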
The size of the core could also be estimated from rays other than diffracted P-waves (reflected shear waves, S-waves, and rays converted from P to S and S to P) and from the manner in which these data were analyzed. The results did not give consistent values, so research was directed to discover the nature of the layer at the boundary, a goal Sacks kept in mind. A number of scientists in the nineteenth century had remarked on the striking similarity of the coastlines of eastern South America and western Africa, ascribing it to the sinking of a large portion of a supercontinent to form the Atlantic Ocean. In 1912 Alfred Lothar Wegener proposed that at one time a supercontinent had separated into the present continents, which had drifted apart. Geologists found fossil evidence for this, but physicists rejected the idea out of hand, as there were no forces capable of moving the continents over the "ocean" of underlying rock, whose viscosity was far too high, and this point of view was taken as "revealed truth" by the physics-dominated DTM.
Figure 25.1 Participants of a seminar held at the Department, January and February 1963, for the South American collaborators. These men had participated in the explosion studies of the altiplano and operated seismic stations in various locations. From left to right: A. Rodriguez, R. Cabre, I. S. Sacks, G. Saa and F. Volponi. Their continued efforts provided much excellent data from earthquakes. The table had elements that formed an analogue computer for determining the hypocenters of local earthquakes.
Postwar science began to disclose observations that reopened this question. First, paleomagnetism was uncovering evidence that the Earth’s magnetic field had changed its direction several times. Second, modern magnetometers pulled behind research vessels brought up records of spatial variations that made little sense; they were the kind of records that would have left Bauer, Ault, Vestine and their colleagues with no rest until enough data had been accumulated to see the phenomenon in detail, but the magnetic data were only a minor component of measurements that were swamping the new and very rich field of oceanography, and the underlying simplicity had to wait more than a decade for its startling interpretation. The reversal of the geomagnetic field was imprinting a paleomagnetic signature on the rock spreading from mid-ocean ridges that were located at the sides of enormous mantle convection cells carrying the continents apart – as rafts floating on the surface, not ships plowing through a mantle ocean. The various clues to this grand theory, the greatest discovery of the century in geophysics, culminated in a paper in the 16 December 1966 issue of Science by Fred Vine, following which opposition began to crumble. A consequence of the spreading of tectonic plates from mid-ocean rises was that these same plates had to be consumed, and locations for this process were quickly identified at the deep, off-shore trenches that rim the Pacific Ocean. The manner in which these plates are subducted at such locations was to engage the attention of the Department seismologists and their visitors for years to come, and located right on top of such a subduction zone was the South American network. The network and its analysis center was already locating the many small earthquakes to be found under it, and the pattern followed what was correctly presumed to be a subducting plate. It had the power to delineate crustal features that had marked the explosion method. 
Plate tectonics immediately opened a path for explaining the origin of mountains, something that had plagued geologists and geophysicists for a century. Mountains had to be forming continually because erosion carried so much material to the sea that the continents would otherwise be robbed of their highlands. The trick was to explain how they grew. In 1970 David James approached the specifics of reconciling the geology of the Andes with the manifestations of tectonic motion. Some problems in geology get solved. The Department expanded the study of the Andes to include their comparison with Japan, extending collaborative work begun with Toshi Asada and later cemented with the long-term cooperation with Shigeji Suyehiro and many others. The two groups had set up elaborate establishments for their local studies. There were obvious similarities and differences in the two tectonic structures to make comparison worthwhile. Both resulted from the collision of tectonic plates marked by ocean trenches, dipping planes of seismic activity and volcanoes, but South America is a continent whereas Japan
is an island arc, and there were significant differences in seismic behavior. In Japan seismicity is continuous down the subducting plate with shallow activity to the west. In the Andes the pattern of earthquakes is less well defined, with many in the wedge above the plate. Of particular interest in these comparisons was the degree of attenuation of waves by the various structures, which gave evidence of the mineral conditions. Neither Japan nor the Andes presented investigators with simple rock configurations, and the DTM group in 1973 organized an explosion study of the northern half-continent that brought in people from Colombia and Ecuador in addition to their older partners from Bolivia and Peru. Geophysics institutes from Germany, Texas, Wisconsin, Hawaii and Washington (state) also participated, with Colombia and Hawaii each furnishing a ship for ocean shots. The project deployed four linear seismometer arrays to observe shots released along two ocean profiles and fired by a DTM crew from a fixed location in a lake in southwestern Colombia. Despite the large endeavor, Project Nariño disclosed little about the crustal structure and evolutionary history of the region other than it was complex, which was already known.1 A similar expedition to southern Peru in 1976 with numerous off-shore and mine shots had a similar outcome. Studies of complicated structures in 1973 and 1976 were severely handicapped by the instrumentation available. The attenuation of seismic waves can be a valuable means of determining the type and state of the rock but presents the seismologist with phenomena for which analysis resting on arrival times and amplitudes is not sufficient. Attenuation results both from intrinsic energy loss as the rays pass undeflected through the medium and from scattering by reflection, refraction and conversion to other ray types by the irregularities of the medium. 
Insight into these mechanisms requires knowledge of the frequency spectrum of the received waves, which seismographs depending on photographic records and having limited dynamic range were unable to provide. The scientific interests of the group were tending toward determining the anelastic structure of the Earth and understanding the earthquake source mechanism. The former required a large frequency range and the latter a large, undistorted dynamic range. Tests on commercially available seismometers indicated false low-frequency response because of spring and suspension resonances, particularly when excited by local earthquakes, so Sacks designed a radically new kind of instrument.2 This overcame these problems by shifting spurious resonances above 50 Hz or dampening them by enclosing some of the parts in oil. The mechanical elements of the device were mounted in a vacuum that reduced noise from convective movement of the air (Fig. 25.2). The electrical output used chopper-stabilized, low-noise amplifiers. Because the instrument had to operate at remote sites, data were recorded on slow-moving analogue magnetic tape having three months' storage capacity and a band width of 0.01 to
Figure 25.2 A broadband horizontal-component seismometer with the vacuum housing removed. This instrument differs from previous designs in having the mass mounted on an inverted pendulum, so the spring does not have to support the mass; it only has to apply a restoring force. This allows the spring tension to be set very low, thereby making the natural frequency correspondingly low, usually with periods of 15–100 seconds. Violin-mode vibrations of the spring are suppressed by immersing it partially in oil. The hinge is located near the bottom from which a plate extends with lead weights. The spring is behind this assembly and has a screw adjustment at the top. At the upper right is the assembly with the coils and magnet for generating an output signal. Movement of air is a serious source of noise, so it is housed in a vacuum tank. This was the most advanced seismometer design until the electrically active instruments came two decades later. The designer was Selwyn Sacks. Circa 1966.
Figure 25.3 Alan Linde, Selwyn Sacks and Shigeji Suyehiro replaying tapes from Sacks’s broadband seismometers. A tape recorder at each station ran continually, recording seismometer output in analogue form, which resulted in large lengths of tapes with essentially nothing of use. In the operation pictured, events would be located on the tapes and their seismograms digitized for analysis by computer. 1971.
20 Hz. Seven sets were built and deployed at nine stations. Signals for the events were digitized and analyzed using the IBM 1130 computer (Fig. 25.3). Some details of the structure under the Andes became clearer in the late 1970s, a consequence of many earthquakes recorded by many seismic stations. Notable was the manner in which the central portion of the subducting slab descends at an angle of about 30° until it reaches a depth of 125 km, at which point it turns almost horizontal and extends under a relatively thick continental crust, leading to an understanding of the role of the buoyant slab. These observations also demonstrated that oceanic slabs could contort and did not necessarily tear, as had been assumed. Surface-wave data from a broad-band seismometer placed in Iceland, in conjunction with laboratory measurements at the Geophysical Laboratory of velocity and anelasticity under controlled conditions, enabled the temperature and degree of partial melting in the vicinity of the sea floor north of the island to be determined.
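The intrinsic attenuation that so interested the group is conventionally described by the quality factor Q: a wave of frequency f traveling for time t loses amplitude as exp(−πft/Q). Given broadband spectra at two stations, Q can be inferred from the spectral ratio of the same phase. The following is a minimal sketch of that standard relation; the numbers in the usage are hypothetical:

```python
import math

def attenuated_amplitude(a0, freq_hz, travel_time_s, q):
    """Amplitude after intrinsic attenuation: A = A0 * exp(-pi*f*t/Q)."""
    return a0 * math.exp(-math.pi * freq_hz * travel_time_s / q)

def q_from_spectral_ratio(a1, a2, freq_hz, dt_s):
    """Infer Q from amplitudes a1 (nearer) and a2 (farther station)
    of the same phase at one frequency, separated by differential
    travel time dt_s through the attenuating region."""
    return math.pi * freq_hz * dt_s / math.log(a1 / a2)
```

Because the decay is exponential in frequency, the measurement demands exactly what the photographic instruments lacked: a broad, undistorted frequency and dynamic range.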
We have seen how the Department seismologists made the transition from using explosions as the sources of the waves they observed to using earthquakes. This came about to a great extent because of the South American array that they had organized. Although earthquake seismology was the older method, they were to bring to it significant changes through radically new instrumental and analytical techniques, but that story must wait. In April 1965, David E. James came to the Department as a student for the research leading to his Ph.D. degree at Stanford University. He remained and was appointed to the staff in 1969.
26 STRAINMETERS
In 1967 Dale W. Evertson of the Applied Physics Laboratory, University of Texas, visited Sacks to demonstrate a curious yet remarkable instrument, called a solion. It was capable of detecting the flow of extremely small ion currents in solution between two electrodes and had been developed as an acoustical detector for very low-frequency waves, the outgrowth of antisubmarine research that had found it of no use. Impressed by its sensitivity, Evertson searched for an application and constructed a seismometer with it, hence the visit. Sacks found it unimpressive as a seismometer but saw in the solion another application and initiated a field of study that was to occupy both men for the remainder of their careers. Ever since earthquakes had been associated with faulting there had been a general belief that the key to understanding them, even predicting them, lay in determining the strain that built up in crustal structure. This naturally led to the invention of a variety of devices for measuring strain. These were generally based on the accurate measurement of the distance between two piers in a cave or tunnel having little temperature variation. From these efforts beginning in 1900 little of substance had come, for reasons soon to be explained. Sacks saw in Evertson's device a new, and as it proved, successful approach to this old problem. The action of the solion came about from the motion of ion solution caused by a change in the cell volume, and Sacks envisioned it connected to a large volume of liquid in a stainless-steel cylinder encapsulated in a hole drilled into the bedrock. Changes in the rock's strain would be reflected in small changes in volume of the liquid, which could be measured with the solion. The first model of such a strainmeter was constructed in the DTM shop and mounted in a hole drilled on the grounds in July 1968. 
The expanding concrete required to seal the cylinder to the rock needed about three months to cure, during which temperature reached equilibrium. When observations were made, the strain resulting from the Earth tide, 4 × 10⁻⁸, showed up with remarkable amplitude. Microseisms having strains of 10⁻¹¹ were clearly visible at frequencies of 0.1 Hz. At frequencies around 0.001 Hz noise appeared that proved after a moderate puzzlement to be mimicked by a microbariograph at the top of the hole! The new instrument was a remarkably good seismometer as well.
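The sensitivity of the volumetric design comes from hydraulic amplification: a strain ε of the encapsulated cylinder expels a liquid volume εV, and forcing that liquid through a much smaller cross-section a converts it into a displacement εV/a. The sensing volume and cross-section below are hypothetical round numbers, chosen only to show the order of magnitude involved:

```python
def bellows_displacement(strain, sensing_volume_cm3=3000.0,
                         bellows_area_cm2=0.1):
    """Displacement (cm) produced by a volumetric strain (sketch).

    A strain of the rock changes the encapsulated liquid volume by
    strain * V; pushing that liquid into a sensing element of small
    cross-section a yields a displacement d = strain * V / a,
    an amplification of V / a over the raw volume change.
    """
    return strain * sensing_volume_cm3 / bellows_area_cm2
```

With these illustrative numbers the Earth-tide strain of 4 × 10⁻⁸ becomes a displacement of about 12 micrometers, comfortably within reach of a differential transformer, which is why so small a strain "showed up with remarkable amplitude."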
Figure 26.1 Schematic diagram of a Sacks–Evertson strainmeter. The entire cylinder is encapsulated in a borehole, typically hundreds of meters below the surface. Strain connection to the rock is assured by seating it in an expanding concrete. Strain causes tiny changes in the volume of the oil in the sensing volume that alters the length of the bellows. The bellows movement displaces one coil of a differential transformer that generates a signal transmitted to the surface. In the event that the bellows becomes too distended, a valve allows pressures to equalize.
The solion in principle measured ion and hence liquid velocities, causing the sensitivity to drop with decreasing frequency, just the region where hopes of long-term strain measurements lay, and it was thought that another detection method was needed for this inherently low-noise and broad-band device. A new design for the next instrument eliminated the solion and replaced it with a tiny stainless steel bellows, whose volume change could be measured in two different ways. Very long-period (possibly of a year or more) response was measured by the displacement of the bellows by a lever arm that moved a differential transformer, which proved to be a stable sensor. Short-period response was measured by the deformation of a quartz crystal. There was the clear need to test the device against the best existing strainmeters and in a region where earthquakes were occurring and strain must be changing. Such a region was available in Japan near Matsushiro, a city being shaken by dozens of small quakes daily and where a wide variety of
Figure 26.2 Selwyn Sacks, Shigeji Suyehiro and Michael Seemann assembling the instruments for three borehole strainmeters to be installed at Matsushiro, Japan. 1971.
instruments of every kind had been set up as a consequence. The Department built three borehole strainmeters of the new design for emplacement in this interesting zone in cooperation with Suyehiro (Fig. 26.2). The equipment, engineered and to a great extent manufactured by Michael Seemann of the DTM shop, lived up to the fond expectations. Two of the instruments were planted in January 1971 only 9 meters apart and in the vicinity of two extensometers, quartz rods of 15 and 100 meter length so suspended as to allow the change in the distance between two piers, firmly embedded in the rock, to be accurately measured (Fig. 26.3). The two borehole instruments gave the same results and showed the extensometers frequently furnished random data, which measured motion of the badly fractured upper levels of rock, although the 100 meter rod gave passable long-term comparisons. A critical test for the three strainmeters was their record of an earthquake of magnitude 3.9 located midway between the installations. All three gave identical records, but more exciting was the measurable change in the dc component of the strain (the component that varied very slowly with time) after the event, proving that the device could monitor strain, perhaps at an absolute level. Long-period noise originated in ground water movement and
Figure 26.3 The first installation of a borehole strainmeter outside the United States. The location picked was Matsushiro, Japan, a tectonically very active area. On the ladder is Selwyn Sacks; standing and holding the strainmeter is Shigeji Suyehiro; standing to the far right is Dale Evertson, co-inventor with Sacks of the instrument. 1971.
rain, a serious matter if one wished to examine strain growth in anticipation of an earthquake (Fig. 26.4). The success of the tests with three strainmeters near Matsushiro was justification for the Japanese to install a network of strainmeters, operated by the Japanese Meteorological Agency under Suyehiro’s direction, along the seismically active Pacific coast of Honshu, south of Tokyo. These instruments were manufactured by the Japanese under Seemann’s supervision, and their outputs were connected to a central office where anomalous signals could receive attention, possibly even provoking an earthquake alarm. By 1989 more than 30 stations were in operation. This array recorded a remarkable event of a nature that was to fix the attention of Sacks and Alan Linde, who became increasingly active in strainmeter research. It had previously been believed that the energy stored in strain was released by the easily recognizable earthquake, but the magnitude 7 Izu-Oshima quake in January 1978 showed large strain releases in episodic events both before and after the main shock. These events took place over
Figure 26.4 Response of a Sacks–Evertson borehole strainmeter installed at Matsushiro, Japan, to a magnitude-4 earthquake. The measured strain is plotted as a function of time. The instrument was able to follow the rapid fluctuations because of the instrument’s inherently broad pass band. The strain in the rock surrounding the instrument was altered by the earthquake, dramatically shown by the “dc” offset. 1971.
periods of the order of 10 seconds and showed up clearly on strainmeters but not on seismometers. The energies released by the normal (fast) and silent (slow) quakes were comparable. Sacks and Linde had encountered similar phenomena two years earlier in Iceland in examining data from a Carnegie broadband analogue seismometer that recorded events following a volcanic eruption and that showed significant low-frequency activity. These happenings stressed the value of installing strainmeters in Iceland, which had already been planned because of Iceland’s unique tectonic condition – it sits on both a hot-spot volcano and a mid-ocean rise. The latter property causes the island to spread apart during earthquakes, and it was important to have strainmeters on site for the next spreading. Iceland’s use of hydrothermal power had resulted in a number of holes having been drilled into hard rock but for which no heat had been tapped. These holes, the most expensive part of a strainmeter installation, were given to the project, and eight instruments were installed during August and September 1979 through the cooperation of R. Stefansson of the Iceland Meteorological Office. Unfortunately, these holes had significant amounts of aquifer noise. Studying earthquakes with borehole strainmeters can be a maddening affair for researchers with an inadequate reserve of patience. They must be installed at considerable effort and expense (at least to the extent sums are allotted to this kind of science) and then produce little or nothing of scientific value until a seismic event takes place that allows critical interpretation. In apparent disregard of the patience precept three strainmeters were installed in northern Honshu in late 1982 in time to observe strain conditions before and after an off-shore earthquake of magnitude 7.7 a few months later. One of the three recorded slow deformations before the quake but on time scales much longer than had been experienced at Izu-Oshima in 1978. 
A dramatic increase in slow quakes took place during the month preceding and the month following the normal quake. After this there were no slow quakes for six months, in contrast to the usual two or three that had characterized a similar period before. Evidence continues to accumulate indicating that slow earthquakes precede main shocks and that they are part of a redistribution of strain in a manner that causes the main shock. These redistributions are easily seen with the borehole instruments and do not generate seismic waves. Here indeed is hope for earthquake prediction, but there are large earthquakes that show no such precursors. The reader can assume without prompting that the interest in earthquakes in California would call for the installation of strainmeters there, and a program for placing them in three different regions was set up with the Geological Survey and University of California, San Diego. The first instruments were installed in spring 1982 near Piñon Flat where a variety
Figure 26.5 Alan Linde and Selwyn Sacks install a borehole strainmeter 3.2 km below the surface in a South African gold mine. Strains in the rock resulting from the overburden compress the rock and cause many small, nearby earthquakes that the instrument records, allowing the study of these relief mechanisms. The location presented the two scientists with such extremes of temperature and humidity that the installation had to be accomplished in a minimum of time, as their ability to function under these conditions was limited. 1978.
of other geophysical instruments were grouped for comparison, this being a region of uncommonly low levels of seismic activity. A well-sited 100 meter quartz extensometer located there had proved to be a good measure of Earth tides, and the comparison of the strainmeters with it was gratifying. Others have been placed along the San Andreas fault along with every other kind of diagnostic device. Patience brought an unexpected result from the Iceland array when the volcano Hekla erupted in January 1991. Although the closest instrument was 15 km away, strain changes caused by the rising magma were very clear and had excellent signal-to-noise ratios. A definite signal was perceived 30 minutes before any other evidence of an eruption was noted. The data allowed the speed of magma ascent to be calculated. In 2000 a similar event provided a half-hour warning of the volcano's eruption; as a result air authorities kept aircraft away, avoiding the danger of jet engines ingesting ash. In 1986 Sacks, Linde, Seemann and Poe installed an array of eight DTM strainmeters in the People's Republic of China. This came about as the result of an earlier visit to China by Sacks when he learned that seismologists of the State Seismological Bureau had constructed strainmeters based on their reading of the DTM publications. They had encountered problems that were quickly solved through discussion with the instrument's inventor.
Subsequently they added 20 more of their own construction. The result has been a very profitable collaboration. Because the borehole strainmeter requires electrical connection to the encapsulated cylinder, it could be rendered useless – and irreparable – by a strong electrical surge propagated down the conductors into it, a surge such as might happen from a lightning strike. To prevent this it was desirable to place circuits at the surface that would block the passage of a surge down the hole, even at the price of destroying the surface equipment, which could be replaced. To test various designs a week-long series of long-remembered experiments was conducted around 1970 using the Carnegie pressure-tank Van de Graaff to generate copious numbers of 400 kV pulses at the end of a high-voltage coaxial cable to which protection circuits were attached and destroyed. The end result was a satisfactory design, but at the cost of scores of destroyed components of various kinds, not to mention a damaged oscilloscope, later repaired.
27 THE BOLTON AND WETHERILL YEARS
With the agreement of President Haskins, Tuve selected Ellis Bolton to follow him as Director and to this end named him Associate Director in July 1964. Bolton, known for his contribution to the agar column, a crucial preliminary to the discovery of repeated DNA sequences, felt insecure in dealing with so many physicists and recommended Aldrich to be his associate; he was named Assistant Director the following year. Thus arrangements were made for a smooth transition. When Tuve stepped down on 30 June 1966 he left the most diverse department of the Institution. In numbers of staff and staff associates the bio group had five, seismology four, isotope geology two, radio astronomy two, image tubes and optical astronomy two, cosmic rays one, and nuclear physics one. Administratively it lacked cohesion. Seismology, isotopes and cosmic rays fitted well with the new directions selected in 1946 for expansion in the study of the Earth using the methods of physics. They were productive of ideas and results, and of the 14 Fellows listed in Bolton’s first year, 10 were in this group. Radio astronomy had come into being to explore remarkable postwar discoveries, a task that had been well handled, but it now had only Turner (staff associate) and Tuve (emeritus), others having left the Department during the preceding few years. A large investment had been made in an 18 m steerable dish, the operation and maintenance of which was substantial. The future of radio astronomy was now tied to the National Radio Astronomy Observatory (NRAO). Image tubes and optical astronomy were following productive paths. Ford was equipping observatories all over the world with the new device, and Rubin was making good use of it in examining diverse properties of galaxies. In an excursion into a completely different field, they even mounted an image-tube spectrograph on the Van de Graaff to examine beams of various atoms that had been accelerated and excited by passing them through thin carbon foils. 
These experiments determined the lifetimes of excited states of atoms and ions, a spectroscopic quantity otherwise very difficult to determine. Biophysics certainly did not fit into the “return to geophysics” goal of 1946, but it had had some brilliant moments, and the Director had been selected from that productive group. During Bolton’s tenure bio began to
find itself replicated at many laboratories, removing the unique status it had enjoyed in the 1950s. Nuclear physics existed only through a fluke and indeed had never fitted into the Department’s goals in geoscience. The prewar efforts had been defended because of the unique and highly successful science that came from the energies of Tuve, Hafstad, Heydenburg and Roberts. (Justification resting on magnetic nuclear properties had stretched credibility beyond reality.) After some remarkable studies the immediate postwar work had come to an end because of the limitations of equipment, self-imposed limitations resulting from the refusal of the Department to accept Federal funding. In 1961 it had returned from the grave to which Tuve had hoped he had assigned it because a source of polarized protons, developed at the University of Basel, fitted into the high-voltage terminal of the pressure-tank machine as if the two had been designed for one another. No other accelerator could accept the source, so DTM had no competition for polarized beams of energies below 3.5 MeV for nearly 15 years. The earlier nuclear connection with bio no longer existed, as the radioisotopes they required were industrial products. The polarized-beam work was supported in part by the Swiss and was not a serious drain on resources. Other than grouping the various sections into astrophysics, geophysics and biophysics, Bolton did not make administrative changes, and the consequence was drift, a striking change from when the activist Tuve was in charge. Radio astronomy continued a downward spiral with various persons entering and leaving the section without measurable effect. On becoming President, Abelson, a founder of the bio section, began to examine it critically, pointing out that they had become one of many and needed a new and original line of work, which they failed to find. 
In the summer of 1974 the President and the Trustees decided to terminate the bio section over the succeeding two years; as a result Bolton resigned on 23 September, and Aldrich was named Acting Director. The selection of a new Director took a non-traditional path. Abelson said he would consider candidates proposed by the staff, which elected a committee of Hart, Rubin, Sacks and Brown that maintained good communication with their colleagues. The committee and the President found they had common criteria and settled on former staff member George Wetherill, then a professor at the University of California at Los Angeles. Wetherill accepted and entered office on 1 April 1975. The reorganization that the Department required followed quite naturally. The decision affecting the biophysics section had already been made. The decision to terminate the operation of radio telescopes while continuing the science using the equipment at NRAO found general staff agreement. The nuclear physics program had by that time reached a logical conclusion, as those experiments that could be made with the original polarized-beam
apparatus would soon be completed. To remain in the field required extensive investment in equipment, which was inappropriate because over the years a few laboratories had built polarized sources for accelerators of higher energy, and development was expected to cover the DTM range of 3.5 MeV. The work had been supported because the Van de Graaff was unique and the experiments could not be done elsewhere. The collaboration with the University of Basel, which had brought five Swiss fellows into residence, was ended, and Brown tried his hand at mass spectrometry. At UCLA Wetherill had followed his interests from meteorites to theoretical studies of the dynamical processes by which meteorites and near-Earth asteroids maintained a steady-state population in the vicinity of the Earth. After coming to DTM, he extended this to developing quantitative models of terrestrial planet formation, thus establishing a branch of science connecting astronomy and the isotope work, which in turn linked with seismology. Within a couple of years the Department had attained a form it retained until Wetherill retired as Director in 1991. When Tuve left office the Department’s use of government funds for research was quite restricted and generally involved collaboration with other laboratories. The largest of these was the channeling of NSF money into the industrial laboratories that were developing and later manufacturing image tubes. NSF money also supported the large field expeditions for seismology as well as the construction of an observatory for radio astronomy at La Plata, Argentina. But in administering these funds none of the money was used at Broad Branch Road. None went to salaries or the construction of equipment, nor was overhead charged. The rule had been, “if we accept outside money, it must hurt.” Individual scientists were not allowed to submit proposals for grants. Bolton relaxed the rule to allow grant proposals for equipment but without salary or overhead. 
When James D. Ebert replaced Abelson on the latter’s retirement in 1978, the Institution policy on grants changed abruptly. Staff members were strongly encouraged to apply for grants in the same manner as did investigators at universities, including partial salary support and overhead. Not all staff members complied. The latter half of Wetherill’s tenure was marked by a long-running dispute with the President and the Trustees concerning the locations and physical plants for DTM and the Geophysical Laboratory, then located about a mile to the south on Upton Street. The buildings of both departments were old and in need of renovation, and this opened discussion not only about improved housing but also about changes in the location of one or both departments. For reasons that have escaped the author, it was held that the Upton Street location was no longer appropriate for the expanded laboratory space required. For reasons that are easier to understand even though they were not appreciated by everyone at the time, it was decided that the
two departments should occupy the same location, either on the extensive grounds of the Department or adjacent to some university. The circulation of proposals and associated rumors caused significant tension among scientific and support staff. Neither the Director nor the staff disputed the need for renovation of the buildings but they were strongly opposed to moving from the present site, and most of the staff were not desirous of having their sister department move onto their grounds, fearing that it would reduce the collegial atmosphere that came about from a relatively small number of employees. An improvement in the scientific interaction between the two departments was not seen to outweigh the advantages of small size. During these months of tension the DTM Lunch Club served as a unifying force, as information was shared and discussed with the result that the staff did not become divided and stood with the Director on important issues. The Director had serious misgivings concerning the financing of the project and even more regarding plans to reduce staff size and significantly increase the number of postdocs. Despite initial assurances that outside funds would be available for the work, indeed even for an equal enhancement of the endowment, it soon became clear that the necessary funds would have to be provided by the Institution, and Wetherill correctly foresaw that this would place additional pressure to secure soft money for operating expenses. A final decision was made to co-locate (a word selected after careful thought) the two departments at the Broad Branch Road site. The Department’s Main and Cyclotron Buildings would be renovated and a large new building would be constructed. The Standardizing Magnetic Observatory, of wood construction and home of the Lunch Club, was demolished to make room for the new Research Building. In summer 1990 the new campus was finished and the Geophysical Laboratory moved into their new quarters. 
The old main building housed the administration for both departments, a seminar room and a new kitchen and dining hall for the Lunch Club and other culinary affairs, but the greatest space was given over to the combined libraries of both departments. A new librarian, Shaun Hardy, had been added to the DTM supporting staff only months before the varied activities associated with construction and renovation were to begin. He was selected to administer the combined libraries, which placed him in the advantageous though not enviable position of being able to work with the collections as they were being moved and stored. Both collections were culled, with care taken to identify rare books. Duplicates were eliminated and sold to second-hand dealers where suitable. In combining the two, the old DTM classification, devised for the Department by J. D. Thompson of the Library of Congress, was replaced by the Library of Congress system, and the catalogue was computerized. Hardy became the foremost authority on using computers and the internet to locate
research materials. He also accepted with enthusiasm the tasks of archivist and curator. By the end of 1991 the new Broad Branch Road Campus was functioning. The two groups got along well, and having them nearby brought gains that made up for disadvantages, the significance of which lessened with time. The Lunch Club was opened to all inhabitants, and a few from the Geophysical Laboratory did join. The shakedown of the new physical plant had rough moments. Fire engines raced to the scene for four false alarms set off by the new safety installations. Air conditioning became a curse for directors and support staff, causing utility bills to soar to heights scarcely imagined when compared with those of the two departments separately. Economizing on this cost left little comfort – indeed, often great discomfort – for those working outside conventional hours, which for many was more the norm than the exception. The new campus was much more complicated to run, with substantially more machinery for its operation and numerous functions controlled by computer, resulting in the need for an engineering staff of six, larger than both departments together had needed before. The disruptions of the move and the years of uncertainty that preceded it soon receded into the past.
28 ASTRONOMY
Astronomy was not supposed to take root at DTM, yet it has existed there longer than the discipline for which the Department was originally founded. The astronomy section grew out of the amiable partnership of Vera C. Rubin and W. Kent Ford and was nurtured by a unique environment formed by shared offices around a central basement library meeting room that naturally encouraged discussion. Rubin, an Assistant Professor at Georgetown University, encountered the Department through her thesis advisor, George Gamow, who used its library for conferences with her, convenient for him because he was visiting the biophysics group. After obtaining the Georgetown position she visited Burke from time to time to discuss her work on the rotation of galaxies and inquired about the possibility of some kind of position. The query came at an opportune moment because an optical astronomer, Alois Purgathofer, who had been working with Ford on image tubes, had given notice that he was returning to Austria, and Tuve wanted an astronomer to provide guidance, so she was immediately hired. It was the first that Rubin had heard about image tubes. The two made an immediate and excellent combination. Rubin’s compelling passion for astronomy matched perfectly with Ford’s instrumental skills and soon transformed the physicist into an astronomer. The atmosphere that they created began to attract increasing numbers of students, postdocs, visitors and staff members. The first use of the image-tube spectrograph that was not imposed by engineering tests was the determination of the red shifts of 14 quasi-stellar objects. These were the optical sources generally identified with very intense radio emission and were the dominant puzzle in astronomy. It was not clear whether they were extragalactic, galactic or even within the solar system, and their red shifts, indicating velocities of recession, were of extreme importance. 
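The conversion from a measured red shift to a recession velocity, central to the quasar work described here, can be sketched as follows. This is a minimal illustration using the standard special-relativistic Doppler relation from textbooks, not a formula quoted from the Department's papers:

```python
def recession_speed(z):
    """Fraction of the speed of light implied by red shift z,
    using the special-relativistic Doppler relation
    v/c = ((1 + z)**2 - 1) / ((1 + z)**2 + 1)."""
    s = (1.0 + z) ** 2
    return (s - 1.0) / (s + 1.0)

# The red shift measured for the quasar 3C9 with the image-tube
# spectrograph implies recession at roughly 80 percent of light speed.
print(f"3C9 (z = 2.012): v = {recession_speed(2.012):.2f} c")
```

For small shifts the relation reduces to the familiar v ≈ cz used in the galactic rotation work that follows.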
The enhanced sensitivity of the image-tube spectrograph allowed a red shift as large as 2.012 for 3C9 to be measured, presumably indicating a great distance in an expanding universe (Figs. 28.1 and 28.2). Rubin found the intense competition for quasar red shifts distasteful and determined to find an uncrowded field in need of study where she could work at her own pace. Research settled on a study begun in 1966 of the rotation of the galaxy M31 in Andromeda, the object being to compare the velocity structure obtained optically with what had been obtained three years earlier
Figure 28.1 Vera Rubin using a two-dimensional measuring engine. Spectra obtained with the DTM image-tube spectrograph were recorded on 5 cm by 5 cm glass plates. The stellar or galactic spectrum would be located along a center axis of the plate with a calibration spectrum from a discharge lamp on each side. The measuring engine had two orthogonally mounted micrometers that moved a microscope. A spectral line was located in two coordinates by the operator with the values entered onto a punched card for a computer. A deck of cards then provided the data for a given plate, and the IBM 1130 computer reduced the data according to a code that yielded the wavelengths of the plate’s spectral lines. 1972.
by the DTM group with the NRAO 90 m dish using 21 cm emission. The basis was a group of 67 emission regions of small size that had been identified and that yielded sharp spectral lines from H, He, N, O and S in various intensities and degrees of ionization. Observation of these regions yielded Doppler shifts from which velocities could be extracted, the total providing a rotation map. The task required more than two years for completion and made use of the 2 m reflector at Kitt Peak and the 1.9 m at Lowell. The agreement with the 21 cm data was remarkably good and certainly expected. The center of M31 had no identifiable emission regions, but some emission lines on top of stellar continua were observed, although the velocities so extracted from 16 positions were not compatible with a simple dynamical model. The puzzle of the nucleus was to remain (Fig. 28.3). As a student Rubin had made use of published red shifts to question the uniformity of their distribution over the sky, but evidence based on inadequate data did nothing to alter belief in the uniformity of expansion.
Figure 28.2 Optical spectrographic data disclosing the presence of dark matter in the galaxy NGC 801. The spectrograph slit was placed across the galaxy shown below. The shifts in the wavelengths of a few spectral lines are evident. Also evident is their maintaining a constant displacement, hence velocity, for positions far from the center. This is contrary to what was expected if the mass of the galaxy is concentrated at the center. 1979.
The success of measuring red shifts with the image-tube spectrograph for distant galaxies opened the possibility of an investigation with a statistically significant number of galaxies. To this end a list of 208 ScI galaxies having magnitudes between 14.0 and 15.0 was compiled from the Palomar photographic sky survey. Such galaxies, well-defined spirals, were taken to be “standard candles,” as there was evidence that their dispersion in absolute magnitude is small, and a program was begun in 1971 to determine their red shifts. If the expansion was uniform, the sample of galaxies within the shells defined by two magnitudes would show no systematic velocity distribution over the sky (Fig. 28.4). Within two years red shifts had been measured for 75 of the sample, and a striking non-random distribution was apparent. One third of the sky, containing 28 of the galaxies, had an average velocity of 4966 ± 122 km/sec; another third, containing 22, had an average of 6431 ± 160 km/sec, and there was almost no overlap of the two groups.
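As a quick consistency check on the two quoted region averages, the difference between them can be compared with its combined standard error. The arithmetic below uses only the numbers given in the text:

```python
import math

# Mean velocities and standard errors (km/sec) for two thirds of the sky,
# as quoted above: 28 galaxies in one region, 22 in the other.
v1, se1 = 4966.0, 122.0
v2, se2 = 6431.0, 160.0

diff = v2 - v1
# Standard errors of independent means combine in quadrature.
se_diff = math.hypot(se1, se2)
print(f"difference = {diff:.0f} +/- {se_diff:.0f} km/sec, "
      f"or {diff / se_diff:.1f} standard errors")
```

A difference of more than seven standard errors is far outside what sampling fluctuation alone would produce, which is why the subsequent criticism focused on selection effects rather than on the statistics.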
Figure 28.3 The Andromeda galaxy (M31) copied from the Palomar Sky Survey to illustrate the observational evidence for dark matter. The optical velocities from ionized gas clouds, measured in 1970 by Rubin and Ford, are shown as open and filled circles. Velocities from neutral hydrogen radio observations, measured by M. S. Roberts and R. N. Whitehurst in 1975, are shown as filled triangles and remain high far beyond the limits of the optical disc. The failure of these velocities to decrease in a Keplerian manner is interpreted to mean that M31 has a large, non-luminescent coronal mass. Figure composed by Janice Dunlap.
Figure 28.4 Evidence for an anisotropic expansion of the universe. Rubin and Ford selected a large sample of ScI galaxies having magnitudes lying between 14.0 and 15.0 and measured their red shifts. They assumed the galaxies were “standard candles” lying at approximately the same distance. Were the universe expanding homogeneously, the red shifts would be the same, but the results showed this not to be the case. The filled circles indicate the position on the sky (in equatorial coordinates) of galaxies with speeds between 5400 and 6100, the open circles speeds of 4000 to 5400, and the filled squares speeds 6100 to 7500 km/sec. The two regions of the sky indicate the red shifts are preferentially faster in Region II than in Region I. 1972.
Publication of this anomalous velocity distribution, for which no cause was proposed, brought criticism that the results came about through sampling effects and other observational matters. All of these were examined, and all caused the anisotropy to be enhanced rather than diminished. To increase the amount of data, some of the 208 were observed with the 90 m radio dish at NRAO for hydrogen emission, observations made together with Norbert Thonnard. This not only increased the number of red-shift measurements but also established the radio-hydrogen luminosity as a second distance indicator, and one that is not affected by absorption by the dust of the Milky Way. After including data for all 208 galaxies and making all corrections, it was concluded that our galaxy was moving relative to this group of galaxies at a speed of 450 ± 125 km/sec, directed toward galactic coordinates l = 164° and b = −3°. Various other explanations were considered and rejected. Elements of a cosmological puzzle presented themselves when at about this time very preliminary measurements of the cosmic 3°K radiation, the residue from the Big Bang, were found to be isotropic to an accuracy sufficient to exclude this motion. It was the beginning of a study of the large-scale motions in the universe that has continued to the present. When the isotropy study was completed in 1976, Rubin had returned to galactic dynamics and found surprising results in the study of the nearby
galaxy NGC 3115 for which a flattened disc had recently been found in a generally spheroidal system. The disc held out the possibility of examining the dynamics of what had been classified as an elliptical galaxy, stellar assemblages with problematical velocity structures. The angular size of NGC 3115 was sufficiently small to allow two positions of the slit of the image-tube spectrograph to cover the major axis. The resulting plates showed that excellent Doppler shifts could be extracted from a number of absorption lines, and a superb rotation curve resulted. (This differed from the earlier measurement of the rotation of M31, which relied on previously identified emission regions, and was, needless to say, a much faster method of acquiring data.) The rotation curve was not what had been expected; it showed the usual rapid changes in velocity near the galactic center but was followed by constant velocities as the data extended to the edge. Such a “flat” rotation curve was clear evidence for substantial amounts of mass outside of the center, although this was not too surprising for the quasi-elliptical. Such an easy method of determining rotation curves almost demanded its application to a large sample of spirals for which the relationship between rotation and mass was straightforward and thought to be understood. Rubin entered this new program with the eager collaboration of Ford and Thonnard using the 4 m telescopes at Kitt Peak and Cerro Tololo (Chile), securing thereby coverage for northern and southern hemispheres. Within a couple of years data had been acquired on 21 spirals with a range of sizes. If the principal mass of a galaxy were concentrated in its nucleus, as was theretofore assumed, the speed of rotation would fall off in inverse proportion to the square root of the radius, in the Keplerian manner. A significant distribution of mass with radius would counter this or even cause increased rotational speeds. 
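The expectation and its violation can be made concrete with the point-mass relation v = sqrt(GM/r), rearranged as M(r) = v²r/G: a rotation speed that stays constant with radius forces the enclosed mass to grow linearly. The sketch below uses a flat-curve speed of 200 km/sec as an illustrative value, not a figure from the text:

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
KPC = 3.086e19     # metres per kiloparsec
M_SUN = 1.989e30   # solar mass, kg

def enclosed_mass(v_kms, r_kpc):
    """Mass (kg) within radius r implied by circular speed v: M = v^2 r / G."""
    v = v_kms * 1.0e3
    r = r_kpc * KPC
    return v * v * r / G

# With a flat curve, doubling the radius doubles the implied mass:
m10 = enclosed_mass(200.0, 10.0)
m20 = enclosed_mass(200.0, 20.0)
print(f"M(<10 kpc) = {m10 / M_SUN:.1e} solar masses")
print(f"M(<20 kpc) / M(<10 kpc) = {m20 / m10:.1f}")
```

Were the mass instead concentrated at the center, the enclosed mass would be constant and v would fall as 1/sqrt(r); the measured flatness is what pointed to non-luminous matter at large radius.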
The results of 21 spirals indeed showed this surprising behavior without exception: none of them showed decreasing speed with radius and most even showed it increasing. The conclusions are succinctly stated in Yearbook 78: there is now no doubt that the integrated mass in a high-luminosity spiral increases linearly with nuclear distance and is not approaching a limiting mass at the edge of the optical image. Because it is unlikely that the mass distribution drops precipitously to zero at the edge of the optical image, this result suggests that high-luminosity galaxies contain a component of nonluminous matter which is distributed to an even larger radius than the luminous mass. . . . Our results strengthen the view that the optical galaxy is just a luminous disk within a more massive agglomeration.1
In 1984 Rubin and Ford made observations continuing their study of the rotation of spirals with an examination of the small, isolated UGC 12591, which provided the fastest rotational speed yet observed, 500 km/sec. The rotation curve was striking in the nearly constant and symmetrical velocities of the two sides, but what catches the eye of the historian is the observers’
use not of the image tube but of the new CCD, the charge-coupled device then sweeping television imaging. The image tube had served brilliantly for a generation, but its time was past. Astronomers began returning from observing trips not with packages of photographic plates but with reels of computer tape, and interpretation of data took place on the rapidly changing kinds of computers, with images being manipulated on cathode-ray-tube screens. This new study of galactic rotation examined the effects on the rotation of individuals in a cluster of galaxies. The characteristic rotation curves with constant velocity branches had been explained by the presence of a massive dark halo within which the spiral structure was embedded. Such haloes worked to explain galaxies at the edge of the cluster, but failed for those near the center, implying that their haloes had been stripped away, at least in part. A nice piece of work during the early 1970s, although somewhat slow to be appreciated at the time, was a contribution to the understanding of star formation. It had generally been believed that some kind of external perturbation was required to initiate cloud collapse leading to star formation, and the shock fronts resulting from supernovae had been suggested in 1953 by E. J. Öpik, but observational evidence to support this hypothesis was wanting. The difficulty lay in the relatively short period during which the filamentary optical nebulosity persisted compared with the much longer periods during which star formation is observed. The link to star formation came about through studies by George Assousa, a new staff member by way of nuclear physics, who identified the remnants of two supernova explosions from their expanding radio-hydrogen shells, evident long after the optical evidence had vanished. Associated with the shells were a number of stars in very early stages of evolution. 
Whenever there had been a report of a supernova at the time Rubin and Ford were making observations, they recorded the spectra, and by the time of these star-formation studies they had recorded three. The spectra showed broad structure but no individual lines. Assousa approached the problem by calculating synthetic spectra made up of spectral lines suspected of being present but shifted in wavelength by their emission from an expanding spherical shell. The conclusion was that transitions from singly ionized iron dominated, an interpretation that fitted with the models of such explosions. In September 1981 François Schweizer joined the staff, bringing with him an observational program that challenged the views held until that time about galactic formation. Galaxies are observed to be of two general classifications by appearance, with others taking on peculiar forms. Spirals with well-defined discs that have ordered rotational movement are contrasted with ellipticals whose stars show random motions; other morphological types seem to be either intermediate or of some completely different classification. It was thought that galaxies formed shortly after the Big Bang, but how spirals and
The Department of Terrestrial Magnetism
ellipticals with their contrary angular momentum properties came into being at the same time was a puzzle. Schweizer sought an observational base for the theory of Alar Toomre that ellipticals resulted from the collision and merging of two spirals, which were assumed to be the original inhabitants of the universe. To this end he studied the velocity structures of the peculiar types, finding particular confirmation in the "long tails" that some displayed and that Toomre had predicted. Important aspects of his theory were that these collisions were dominated by tidal effects and by dynamical friction, caused by the stars of one galaxy exchanging energy with those of the other, which resulted in the random motions that characterize ellipticals. As the merged galaxy ages the tails either fall back into it or escape completely. Rubin and Schweizer collaborated to examine and explain a small group of galaxies that have polar rings, presenting the observer with two rotating systems approximately orthogonal to one another. This was thought to be the result of a spiral having captured a much smaller gas-rich galaxy, and the dynamics of the ring allowed the vertical gravitational potential to be probed. Studies of the merging of galaxies continued so that, by 1986, 72 high-quality CCD images had been obtained for the purpose of examining the formation of the bulges of spirals, which have all the characteristics of ellipticals, just on a smaller scale. As confidence grew that galactic collision was the mechanism for forming ellipticals, a program of study undertook to find, if possible, evidence in the ellipticals themselves that indicated their collisional origin. By examining the dynamics of galaxies where merging was still evident, Schweizer was able to define an index, the fine structure, which combined five visual properties. This index could be related to the incidence of starbursts, the enhanced formation of stars that came about during the collision.
A sample of 74 ellipticals was examined, and fine-structure indices could be assigned to them. This indicated that they had indeed been formed by merger and led through color variations to estimates of when that event had taken place (Fig. 28.5). John A. Graham joined the Department in July 1985. He had previously worked in several fields of observational astronomy, including collaboration with Rubin and Ford on extra-galactic programs, but his main interest at the time was in the formation of stars like the Sun, which overlapped the planetary studies of Wetherill and Boss. This was a field of research that had been given an important boost by new detectors that were vastly superior to unaided photographic plates for observing infrared radiation. Newly formed stars that condense from clouds are obscured, partially or completely, by the dust of the clouds, but the radiation from the star heats the dust sufficiently for it to emit infrared radiation at wavelengths of 1–2 μm, which can penetrate the enveloping dust.
Figure 28.5 A pair of colliding disc galaxies, NGC4038/4039, shown imaged from the ground (above and right) and with the Hubble Space Telescope (below and enlarged). The two long "tails" are tides extracted from the discs by the gravitational interaction. The Hubble image shows thousands of newborn stars and star clusters formed from gas clouds compressed during the interaction and essentially nothing of the "tails." Numerical simulations predict that the two disc galaxies will merge into one remnant galaxy within several hundred million years. Colliding galaxies were the object of François Schweizer's research. NGC4038/4039 had long been a favorite subject for DTM investigators.
There has been observational evidence backed by theory that the protostar goes through a phase, first observed in the star T Tauri, during which it emits a strong plasma wind that sweeps away the gas and dust still surrounding it and in so doing becomes visible. Graham examined this wind directly by means of small clouds called Herbig–Haro (HH) objects. These clouds have masses of the order of 10 Earth masses and have remained as identifiable units after much of the remaining gas and dust has been dissipated. They have the useful function of serving as test particles for tracing the stellar wind that drives them. Shocked into visible emission by the stellar wind, they furnish spectral lines from which Doppler shifts can be determined. Events happen so fast around these systems that the observer can watch changes simply by procuring images at different times and comparing them. Some of these stars show night-to-night variation in apparent brightness, presumably the consequence of the motion of obscuring clouds.
A matter that needed investigation was the conjectured two-step formation of stars from large clouds. In the early phase very large stars are thought to form, stars sufficiently massive that they consume their nuclear fuel in a few million years. Once they have taken their share of the cloud, stars of lower mass, such as the Sun, can proceed relatively unhindered. In the southern hemisphere there is a nearby cloud that meets this criterion, the Gum nebula, which extends over 30° of the sky in Vela and Puppis. There are very massive and bright stars within it and the remnant of a supernova that exploded 10 000 years ago, but high-mass stars are no longer being formed there. The formation of low-mass stars is now taking place and the nebula is close enough for good spatial resolution. Using various southern hemisphere telescopes and both image tubes and CCDs, Graham studied a number of low-mass stars in varying degrees of formation, in some cases through their direct visible emission, in others by visible light reflected from or exciting nearby clouds, and by the infrared emission of the dust, both in clouds enclosing the star closely and in those extending from it. Using the HH objects, stellar wind velocities of 200 to 300 km/sec were recorded. These studies gave strong support to the assumption that low-mass, Sun-like stars are being formed in the Gum nebula very much along the lines that have been proposed by those studying star and planet formation theoretically. As well as pursuing his individual interests in star formation, Graham was also involved from the outset of planning in 1984 in a large team project to determine with great precision the Hubble expansion constant. The Hubble constant, as a measure of the expansion of the universe, presumably carries with it the all-important cosmological knowledge of the age of the universe and hence has been the subject of many eager but dispute-encumbered studies.
The speed of recession of galaxies is a simple and accurate measurement, but to determine the rate at which the universe is expanding requires their distances as well, and this is the hard part. Furthermore, the growing knowledge that the expansion is not isotropic requires that a statistically significant number of galaxies be observed over as wide a range of distances as possible. One approach for determining distance uses Cepheid variable stars to relate apparent magnitude to distance. Cepheids are so bright that they can be identified in very distant galaxies. In addition, the period of their light variation is related to their intrinsic luminosity. The identification of even one Cepheid variable and its period allows a distance to be estimated from its apparent brightness, but in practice 20 or 30 are required for each galaxy to obtain the required precision.2 Until the advent of the Hubble Space Telescope, their use was limited by the poor resolution imposed by the atmosphere on ground-based telescopes. The team initially had 14 international observers, although as the project progressed over the better part of a decade
some joined and others left. The results have been reported in 29 papers published in The Astrophysical Journal. The senior authorship of individual papers, each typically having more than 20 co-authors, varied and generally indicated who had led the investigation for a particular galaxy and actually did the writing. This was a way of doing astronomy that differed as much from the time when Ford and Rubin loaded the Carnegie image-tube spectrograph together with power supplies and associated equipment into a truck for transport to the Lowell Observatory as satellite methods of measuring the geomagnetic field differ from those of Bauer's ships and expeditions. The astronomers observed from their computer terminals – a rapidly growing method at all telescopes. It was clearly the way of the future, but it left some old astronomers unsatisfied.
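The Cepheid route to the Hubble constant reduces to two steps: a period-luminosity relation gives the star's intrinsic brightness, and the distance modulus converts apparent brightness into distance, after which the recession velocity yields the expansion rate. A minimal sketch, in which the calibration constants and the sample Cepheid are illustrative stand-ins for the carefully measured relations the team used:

```python
import math

def cepheid_distance_mpc(period_days, mean_apparent_mag):
    """Distance to a Cepheid from its pulsation period and mean
    apparent magnitude, via an approximate period-luminosity
    (Leavitt) relation.  Calibration constants are illustrative."""
    abs_mag = -2.43 * (math.log10(period_days) - 1.0) - 4.05
    distance_modulus = mean_apparent_mag - abs_mag      # m - M
    parsecs = 10.0 ** (distance_modulus / 5.0 + 1.0)
    return parsecs / 1.0e6                              # in megaparsecs

def hubble_constant(recession_velocity_kms, distance_mpc):
    """H0 in km/s/Mpc from a single galaxy; the team averaged many."""
    return recession_velocity_kms / distance_mpc

# A hypothetical Cepheid: 30-day period, mean apparent magnitude 26
d = cepheid_distance_mpc(30.0, 26.0)       # about 17 Mpc
h0 = hubble_constant(1200.0, d)            # about 69 km/s/Mpc
```

In practice each galaxy contributed 20 or 30 Cepheids, and systematic errors in the calibration, not the arithmetic, dominated the disputes.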
29 THE SOLAR SYSTEM
When George Wetherill became Director in 1975 he introduced to DTM the study of the solar system, a discipline lying between astronomy and geophysics that nicely linked the two. At UCLA he had developed an interest in the origin of the solar system, which soon centered on its theoretical underpinnings. Theory had recently received an important boost from the Russian physicist V. S. Safronov, whose work during the 1960s marked the most important advance since the conjectures, in 1795, of Pierre-Simon Laplace, who had proposed that the solar system consolidated from a rotating cloud of gas out of which the Sun formed. This subject had been investigated by Thomas C. Chamberlin, supported by one of the first grants awarded by the Institution.1 He tried to evaluate all observational evidence together with explanations proposed by various investigators of the nineteenth century, but these efforts were severely hampered by essentials that were not known and much that was known but wrong. He thought condensation directly from a gaseous Laplacian nebula dubious, as he saw it as unable to provide the thermal energy needed to melt rocks, and proposed that meteoritic material formed planetesimals that were subsequently fused into planetary nuclei through the kinetic energy of their impacts, a point of view similar in some respects to the later Safronov–Wetherill approach. Chamberlin had no knowledge of giant molecular, dust-filled clouds, the stuff from which planetary systems are now thought to form, and he was burdened by the continuing belief that spiral nebulae were planetary systems in formation. This troubled him, however, as he could not understand how the matter in these objects could be luminous, as they so obviously were, if planetary systems were forming; furthermore, the source of the matter for these nebulae was elusive. He suggested rather that a stellar body passing close to the Sun caused tidal disruption, as calculated by his Chicago colleague F.
R. Moulton, and that this distributed matter into nebular form with subsequent consolidation. Planetary origins was not a line of work followed at Carnegie's newly formed Mount Wilson Observatory. Indeed, Chamberlin's contributions were organized in the Year Books under geology, his principal field of study, and the subject had to await the new work at DTM for further advancement
at Carnegie. Chamberlin’s tidal disruption theory was recognized to have serious difficulties, most importantly how circular orbits would result from such stellar interaction. Between 1940 and 1955 Carl von Weiz¨acker, Gerard Kuiper and Harold Urey, by then possessing knowledge of interstellar clouds of dust and gas, had returned to Laplace’s model. The formation of planets from clouds of gas and dust presented two problems resulting from observation. First, mass spectrometry had allowed the discovery of the decay products of extinct radioisotopes in meteorites, indicating that the formation of the solar system had been faster than the models of dust condensation of the time could explain. Second, there was abundant evidence for extensive melting of the planets and the Moon for which a onestep coagulation could not account. Safronov could account for the melting by assuming the dust coalesced into particles that collided with one another to form objects, planetesimals, of the order of a few kilometers diameter, that in turn collided to form the present terrestrial planets. The details of this needed to be worked out by simulating the orbits of many planetesimals with a computer and learning if they could form bodies similar in some way to the terrestrial planets. Such calculations became Wetherill’s principal study and occupied nearly three decades of his research. The task, shared with investigators elsewhere, was to examine in detail how planetesimals could grow into the terrestrial planets. The first step was to determine the distribution and evolution of velocities of the swarm of planetesimals having initial distributions of semi-major axes and eccentricities that were random but reconcilable with the mechanics thought to rule the collapsing cloud of dust and gas. 
Out of this distribution emerged the larger “embryos” from the relatively soft collisions of the planetesimals, a process requiring about 100 000 years, a time compatible with the lifetimes of the extinct radioactivities. During the second and final stage, a few runaway embryos grew by collision and became planet-sized objects. As these studies progressed, ever-increasing computer power became available that allowed the calculations to follow the embryos in heliocentric Keplerian orbits and to take into account mutual gravitation and collisional damping. Many computer simulations showed the formation of bodies having orbital parameters and masses roughly typical of what is found for the Sun’s terrestrial planets. Although the result could not be interpreted as a proof that planetesimals did accumulate in this manner to form the four innermost planets of the solar system, it nevertheless indicated the approach had merit, especially in that it all took place within a span of the order of 100 million years, a time compatible with other evidence. This approach to the problem explained the origin of the energies needed to melt the planets on formation, the important objection to the hypothesis
of direct accumulation from dust. The kinetic energies of the impacting embryos sufficed to this end. A theoretical study of how the solar system might have been formed necessarily carries with it a charge to reflect on the nature and origin of meteorites. Meteorites contain radioisotopes generated by cosmic rays and having half-lives too short for them to be independent bits of matter that failed to coalesce into planetesimals in the solar system's early years. The orbital-calculation algorithm was used to examine this and supported the belief that most stone meteorites have their origin in the asteroid belt between 2.2 and 2.5 AU from the Sun, where they presumably resulted from collisions of the bodies there. This was hardly a new idea, but the calculations by Wetherill and others showed how such collisional debris could receive chaotic orbital changes from resonant gravitational perturbations by Jupiter and Saturn that accelerate it into highly eccentric orbits that could intersect the orbits of the inner planets or possibly eject it from the solar system. The distribution of asteroids as a function of their distance from the Sun shows gaps where few if any objects are found, gaps that have their origin in such resonant interactions. Known as the Kirkwood gaps, they are located where the orbital period for that distance coincides with a simple fraction of the period of Jupiter and thus provide observational evidence for the ejection mechanism. Such calculations have in recent years been placed on a sounder basis through the numerical integration allowed by modern computers. By the end of his term as Director in 1991, Wetherill had addressed the problem of why there was no planet in the region occupied by the asteroids. His calculations continued to demonstrate the reasonableness of the formation of the four terrestrial planets by the rapid agglomeration of planetesimals into embryos, with a few becoming runaways.
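The locations of the Kirkwood gaps follow directly from Kepler's third law, P² ∝ a³: an asteroid in a p:q resonance has an orbital period q/p times Jupiter's, and hence a predictable semi-major axis. A quick numerical check against the observed gaps:

```python
def resonance_semi_major_axis(p, q, a_jupiter=5.203):
    """Semi-major axis (AU) at which an asteroid completes p orbits
    for every q of Jupiter's.  Kepler's third law (P**2 is
    proportional to a**3) turns the period ratio q/p into a distance."""
    return a_jupiter * (q / p) ** (2.0 / 3.0)

# The major Kirkwood gaps fall close to these computed locations,
# e.g. the 3:1 gap observed near 2.5 AU:
for p, q in [(4, 1), (3, 1), (5, 2), (7, 3), (2, 1)]:
    print(f"{p}:{q} resonance near {resonance_semi_major_axis(p, q):.2f} AU")
```

The 3:1 case evaluates to about 2.50 AU, squarely inside the 2.2–2.5 AU zone from which most stone meteorites are thought to come.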
But the asteroids quite obviously had not followed such a course. There was every reason to assume that the radial region they occupied had had its fair share of matter from the solar nebula, yet these objects can account for only a tiny fraction of the expected mass. Calculations of the fate of embryonic planets in the asteroid region showed that their mutual perturbations eventually brought them to one of the resonance regions, whence they were either sent out of the solar system or on to an eventual collision with the Sun or one of the inner planets. The result was the sweeping from the asteroid belt of nearly all its building material. Wetherill's orbital calculations found an application highly relevant to the question of life on Earth. They explained the early period of high bombardment, of which the intense cratering of the Moon and Mercury gives abundant evidence. The projectiles of that stage were, of course, the numerous objects remaining after the formation of the terrestrial planets, objects that remain in various forms today. With time
the population of these objects was greatly reduced, either through collision or deflection out of the solar system. It is known that a store of comets lies at the extreme edge of the Sun's gravitational influence, in regions called the Kuiper belt and the Oort cloud. Perturbations of objects in these swarms, by a passing star or a giant molecular cloud, alter their trajectories so as to direct them toward the Sun and the terrestrial planets. By calculating the orbits of a statistically significant number of such objects, Wetherill examined the effect of the giant planets Jupiter and Saturn.2 One set of calculations took the composition of the solar system as it now stands; the other substituted "failed" versions of the two giant planets that had significantly less mass. The results were remarkable. These planets so altered the paths of comets as to eject many of them from the system, and as a consequence the number of Earth-crossing objects was reduced by a factor of about 1000 over the 3500 million years during which life has evolved on Earth. Were the Earth subject to the rate of bombardment from which the full-grown Jupiter and Saturn protect it, impacts of the kind that led to the extinction at the Cretaceous–Tertiary boundary would be expected on 100 000-year time scales. On the evening of Friday, 9 October 1992, a bright fireball appeared over West Virginia at a time when numerous high-school football games were being held. The fireball had a luminosity estimated by observers as greater than that of the full Moon and attracted the lenses of many video cameras among the sports fans of four different states. The object evidently fragmented, but a 12.4 kg meteorite struck a parked car in Peekskill, New York. Wetherill saw this as an opportunity to determine the trajectory, if the video tapes could be obtained from some of the observers.
(Only three good orbits had been obtained from the photographic networks deployed in the United States, Canada and Europe during previous decades specifically to record fireballs.) Videos had been shown on television news broadcasts, and tapes were eventually obtained from some of the stations. Wetherill soon encountered Peter Brown, an enthusiastic student from the University of Western Ontario, who had undertaken the task independently and who joined efforts with him. They occupied the locations of the video observers and determined the orientations of fixed objects in the fields of view with a transit. After substantial calculation and data reduction, a very good set of orbital parameters was determined.3 Alan P. Boss was interested in the formation of stars, modeling their condensation from clouds of gas and dust, the stage that precedes the formation of planetesimals, so his appointment to the staff in 1981 complemented the Director's studies. His computational work followed the contraction of the presolar nebula under the effects of gravity, which tends to form clumps that are resisted by thermal energy and rotation. His early work concentrated on various modes that resulted in multiple stars, including the first
three-dimensional study of a rotating cloud collapsing and heating up, work that included asymmetric starting conditions. His calculations provided a mechanism for the collapsing cloud to rid itself of excess angular momentum – an old and long-troubling problem in star-formation theory – by transferring it to the orbital motion of the stars formed from the cloud. The important intermediate step in some cases was the formation of a bar-like distribution at the center of the collapsing cloud, which fragmented into two or even more parts. With the steady, indeed spectacular, growth of computing power, the problem of star formation from collapsing clouds was continually reexamined. The formation of multiple stars was modeled, showing the reasonableness of a spectrum of possible star-formation outcomes, from single stars to various fragmentation possibilities. One main determinant proved to be the ratio of thermal to gravitational energy in the cloud: too high a ratio can prevent collapse, a low ratio leads to fragmentation, and intermediate ratios form single stars. The calculations of the thermal structure of the star-forming nebula indicated that cloud temperatures within 4 AU were above 1000 K, which was consistent with the formation of rocky objects there and possibly even explained the low abundance of volatile metals in the terrestrial planets. By 1995 the collapse initiated by various kinds of shock fronts could be modeled with moderate resolution, verifying with theory what had long been suspected from observation. Ever since it became clear that stars are other suns, there has been endless speculation as to whether any of them have planetary systems and, of course, whether such planets support some form of life, speculation that has formed a bedrock of science fiction.
Inasmuch as the direct observation of extrasolar planets lies outside the capabilities of all present-day telescopes, searches have been carried out through the effects they might produce on their host star: astrometric wobble or Doppler shift, photometric transits and microlensing. Early attempts using astrometry gave rise to claims that could not be substantiated, but in 1995 a technique based on the Doppler shift yielded a planet of 0.45 Jupiter mass orbiting a solar-sized star at a distance of 50 light years. Confirmation quickly followed because other groups had been working independently on the very intricate observational techniques, and within a short time eight more planetary systems had been discovered. A pair of observers who had spent years developing the needed observational skills at San Francisco State University, Geoffrey Marcy and R. Paul Butler, began a series of measurements using the 3 meter Lick and the 10 meter Keck telescopes. The critical instrumental elements are the insertion of a cell containing iodine vapor into the optical path leading to the spectrometer and the subsequent computer analysis of the spectrum obtained from the detecting CCD. Comparison of the spectral lines of the star with the iodine
absorption lines allowed the minuscule changes in wavelength to be determined. It was thus particularly gratifying that Butler accepted a staff position at the Department in 1999, thereby adding an observational component to the theory of Wetherill and Boss. Along with Butler came the knowledge that these planets were all large, of the order of Jupiter's mass. This was hardly surprising because the technique is currently useful only for very massive planets, but what was surprising was that many were very close to their parent stars, an arrangement significantly different from that of the solar system. Butler's observational skills were complemented two years later by the appointment to the staff of Alycia Weinberger, who had developed a technique by which planets can be detected through their perturbations of the discs of dust observed in the infrared around young stars. With the appointment in 2002 of Sara Seager came skills in understanding the atmospheres of these newly found planets by observing their effect on the stellar light that passes through them during transits of their stars.
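The size of the Doppler signal the iodine-cell technique must detect can be estimated from Kepler's laws. For a circular orbit seen edge-on, with the planet much lighter than the star, the stellar reflex velocity is K = (2πG/P)^(1/3) m_p / M_*^(2/3). A rough sketch, with approximate constants:

```python
import math

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30         # kg
M_JUPITER = 1.898e27     # kg

def rv_semi_amplitude(m_planet, period_s, m_star=M_SUN):
    """Stellar reflex velocity (m/s) for a circular, edge-on orbit,
    assuming the planet is much lighter than the star."""
    return (2.0 * math.pi * G / period_s) ** (1.0 / 3.0) \
        * m_planet / m_star ** (2.0 / 3.0)

# Jupiter tugs the Sun at roughly 12-13 m/s over its 11.86-year orbit,
k_jupiter = rv_semi_amplitude(M_JUPITER, 11.86 * 3.156e7)
# while a 0.45 Jupiter-mass planet on a 4.23-day orbit (like the
# 1995 discovery) produces a far larger signal, near 55-60 m/s.
k_hot = rv_semi_amplitude(0.45 * M_JUPITER, 4.23 * 86400.0)
```

The short-period giants are thus the easiest prey, which is one reason the first discoveries looked so unlike the solar system.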
30 GEOCHEMISTRY
As studies with mass spectrometers succeeded in determining the ages of rocks, investigators began to use them in searches for "isotopic signatures" that they hoped would indicate the sources of magmas. Problems had been encountered in dating wherein the data for the daughter isotope did not conform to the linear relationship with the parent that defined an isochron, clear evidence that the sample had undergone some kind of alteration between the initial solidification of the rock and its collection. Obviously, such alteration would preclude the use of isotopes to identify magma sources as well as falsify ages, so it became imperative to understand what the processes of alteration were and how they affected the data. As a secondary matter, the alterations provided some information about the history of the rock during this intermediate period. Because parent and daughter isotopes are different elements, heat, water and pressure can cause their differential diffusion into or out of the various crystals that compose a rock. The mobility of such atoms is enhanced by partial melting, the melting of those crystals in a rock assembly that have lower melting points than their neighbors. Thus research emphasis shifted from pure chronology to searching for signatures of magma sources and to understanding the processes that could confuse the observer. Year Book 62, the year after Stanley R. Hart joined the Department, described the group's efforts as "Geochronology and Isotope Geology," and in the following Year Book the reference to dating was omitted. In Year Book 68 it became "Geochemistry and Geochronology." The names reflected changing attitudes toward the group's scientific goals and the expanded use of petrology and geochemistry for interpreting results. This was a period during which instrumentation was improved.
In 1968 and 1970 two new mass spectrometers were built in the DTM shop, the first with a radius of 9 inches for use with heavy elements such as uranium and lead, the second with a radius of 6 inches for use with intermediate-weight elements such as strontium and rubidium (Fig. 30.1). These machines had metal vacuum systems and ion-getter pumps and were computer controlled, with output recorded on punched cards for analysis. They were capable of measuring Rb/Sr isotope ratios to accuracies of a few parts in 100 000. Mass spectrometers were also unexcelled for the chemical analysis of trace amounts of elements capable of being ionized with a hot filament, through the technique of isotope dilution. Another device was put into operation at the Geophysical Laboratory that allowed rapid and accurate chemical analysis, not only of the rock but of individual crystal grains. This was the electron microprobe. In it a tiny beam of electrons is directed toward a flat rock surface; the electrons excite X-rays in the sample whose wavelengths are determined using Bragg X-ray diffraction. Equally important was the creation of a laboratory of extraordinary cleanliness for the chemical extraction of the elements of interest from samples. This did much to reduce the contamination of samples by dust from the environment. To qualify for use in such a pure environment, chemical reagents often had to be distilled or otherwise purified. These instruments and associated techniques opened a new method for studying the history of the Earth before the fossil record that appeared during the Cambrian period, which began 570 million years ago.

Figure 30.1 A further evolution of DTM mass spectrometry. At the left is a 6 inch radius machine, at the right a 9 inch; the dimensions give the radius that the ion trajectory takes in the magnetic field. The two machines are computer controlled. Seated in front of the controls are Stan Hart, unidentified, and Thomas Krogh. 1968.

Hart's arrival coincided with the beginning of general acceptance of the theory of plate tectonics and with it a completely new view of the Earth as a dynamic system. It was obvious that there must be recycling of material, plates being formed at mid-ocean ridges and removed to fates unknown at
subduction zones. Superimposed on this was recent knowledge about the origins of the chemical elements and the solar system. Of particular importance were the collections of meteorites, which called for isotopic analysis, and the return of rocks from the Moon in 1969, which called for every form of analysis. The first task of this new field was to define its goals, and this was to prove difficult, something reached only through experimental observations for which guidance was poor. It is a difficult subject of study, one that presents a formidable vocabulary of petrologic, petrochemical and geologic terms with which the story must be told and with which few outside the discipline are conversant. It has an historical parallel in the difficult work of spectroscopists during the nineteenth century; they were confident that their studies would lead to fundamental knowledge of the constitution of matter, but they had to contend with the confusion of atomic and molecular emission, with impure samples and with theories that confused more than illuminated. Assumptions previously held about the homogeneous nature of the Earth began to fail without compensating clarifications. The principal revelation from the studies of the Earth's crust with explosion seismology had been the replacement of the expected simple layered structure with complicated structures varying from place to place. Studies of isotope variation in rocks taken from a wide range of locations quickly showed something similar. The isotopes of strontium were experimentally the easiest to investigate, the consequence of stable mass spectrometer runs, and were therefore subjected to extensive study. It was found that the ratios of 87Sr/86Sr, corrected for the effects of radioactive decay owing to age, were generally the same for a given locality but were significantly different for another.
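The isochron logic behind such strontium measurements can be shown in a few lines: minerals from one rock share an initial 87Sr/86Sr ratio but differ in Rb/Sr, so after a time t they fall on a line whose slope is e^(λt) − 1. A sketch with the standard 87Rb decay constant and invented sample values:

```python
import math

LAMBDA_RB87 = 1.42e-11   # decay constant of 87Rb, per year

def isochron_age(rb_sr, sr_sr):
    """Fit a line to measured 87Rb/86Sr (x) and 87Sr/86Sr (y) values
    from several minerals of one rock.  The slope gives the age and
    the intercept the initial 87Sr/86Sr of the magma source."""
    n = len(rb_sr)
    mx, my = sum(rb_sr) / n, sum(sr_sr) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(rb_sr, sr_sr))
             / sum((x - mx) ** 2 for x in rb_sr))
    intercept = my - slope * mx
    age_years = math.log(1.0 + slope) / LAMBDA_RB87
    return age_years, intercept

# Synthetic minerals from an imagined 1.0-billion-year-old rock:
true_slope = math.exp(LAMBDA_RB87 * 1.0e9) - 1.0
x = [0.1, 0.5, 1.0, 2.0]
y = [0.703 + true_slope * xi for xi in x]
age, initial = isochron_age(x, y)   # recovers 1.0 Gyr and 0.703
```

Alteration that moves rubidium or strontium between crystals scatters the points off the line, which is exactly why non-linear "isochrons" signaled a disturbed history, and why the intercept, the initial ratio, became the sought-after magma-source signature.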
Basalts from mid-ocean ridges, ocean islands and various continental locations attracted many investigators and were found not to be alike. Uranium–lead systematics showed similar effects. It was clear that the Earth's mantle was not isotopically homogeneous, so it became a major goal to understand the variability of the initial isotope ratios in order to explore old igneous processes. Hart began concentrating on understanding what was happening with the alkali elements potassium, rubidium, and cesium and the alkaline earths strontium and barium, all suitable for the mass spectrometry of the time. Alteration by sea water disclosed confusing trends, making it impossible to date the ocean floor from dredged samples with either Rb–Sr or K–Ar, quite disappointing given that plate tectonics made such unequivocal predictions. To follow in detail the group's studies of rocks from diverse locations would tax the patience of the reader. Suffice it to record some of the locations that furnished samples for examination: the Franciscan basement on the western margin of the United States, the Glenarm series of thick meta-sedimentary and meta-volcanic rocks in the US Piedmont, the Archaean
meta-volcanic belt near Wawa (Ontario), Iceland and the Reykjanes Ridge, and, not surprisingly, the Andes. Also included was a small increment of a new age of science, a lunar basalt. Archaean volcanics were found to occur in multiple discontinuous belts, not long continuous ones. Major differences in the chemistry of the Earth were found to exist on larger scales than ever proposed before. One observation especially puzzled the investigators. Mid-ocean ridges were found to have scatter in isotope ratios, but it was relatively narrow compared with what was found among ocean-island basalts. The volcanoes forming such islands are not associated with subducting tectonic plates or mid-ocean ridges and are found in large numbers in the Pacific. It was somehow expected that the origins of these lavas would be similar to the mid-ocean ridges, but this was quickly found not to be the case. Evidence points increasingly toward their origin being plumes of magma originating at the core–mantle boundary. A number of persons worked with Hart's group, which cooperated with Gordon Davis and Thomas Krogh of the Geophysical Laboratory; that laboratory also provided a new DTM staff member in 1972, Albrecht Hofmann. Three collaborators whose work left important results were Akhaury Krishna Sinha, Christopher Brooks and Nobumichi Shimizu. Added to these were many visitors and fellows. Controversy arose over the validity of using chemical and isotopic differences as indicators of compositional differences in the source mantle. Those who disagreed with this assumed that the source is homogeneous on a large scale and explained all chemical and isotopic variation found in volcanic rocks as the result of differences caused by alteration. All agreed that the subtraction of the melt from an initially homogeneous mantle and its subsequent intrusion leads to an inhomogeneous residue. 
It was therefore important to determine whether and on what scale mantle isotope signature can be altered by the combined process of convection and diffusion. Hofmann initiated a series of experiments carried out by himself and others by observing the diffusion of radioactive isotopes through melts of basaltic composition, an experimental method that resulted in the accumulation of a large inventory of radioactivity. (The government bureaucracy imposed on experimenters using radioisotopes increased steadily with time, and by 1990 the staff concluded that the experiments requiring them were not worth the trouble. The inventory was – with significant difficulty – disposed of and the nuclear regulatory license terminated.) A more elegant though less general method prepared a narrow zone of radioisotopes of sodium, calcium, manganese, and vanadium in situ at the center of a platinum-encased cylinder of basaltic glass that was irradiated with a 40 MeV beam of deuterons from the Berkeley 88-inch cyclotron. The diffusion of uranium, whose significance in isotopic work is extreme,
was studied by exposing the specimen to a flux of thermal neutrons that generated easily observed fission tracks. The rare earths occupy a peculiar slot in the periodic table between lanthanum (Z = 57) and hafnium (Z = 72) and form a suite of 14 elements having a gradient in mass yet similar chemical characteristics. They can be ionized by a hot filament and thus were found suitable for experimentation in mass spectrometers. They form the basis of a number of studies designed to examine the manner in which elements partitioned themselves into crystals growing from partial melts. This has remained a favorite instrumental tool for investigating such systems under varied conditions of heat and pressure. A new isotope system came into use in 1974 at La Jolla that opened up new possibilities both for geochronology and geochemistry, the alpha decay of 147Sm into 143Nd, both rare earth elements, and this radiometric system rapidly helped interpret the cause of variations in the abundances of these elements, whose geochemistry was already well known. A half-life almost 30 times the age of the solar system demanded precision that the 9-inch machine could not attain, if the resulting data were to contribute knowledge rather than noise. This led to the construction of a new machine with a 15-inch radius. The inference drawn from these diverse measurements of atomic mobility was that the inhomogeneities observed could not be accounted for by any of these processes working on homogeneous source material. The origin of the Andes was a question that had caught James's attention during his seismic studies, and the possibility of learning more about the geological process by studying the geochemistry of the volcanoes that form that mountain chain appealed to him. His complete lack of the required laboratory skills proved no hindrance in DTM's research atmosphere, for Hart's laboratory was down the hall from his office. 
After a few discussions and some basic instruction he began extracting rubidium and strontium from rocks and measuring their ratios on a mass spectrometer. Analysis of Mesozoic Andean rocks showed 87Sr/86Sr ratios that were essentially the same as found in island-arc volcanoes, which fit the subduction model that James had proposed from seismic observations, but younger Cenozoic rocks had higher ratios and did not fit. The matter became controversial because of its bearing on the evolution of continental crust. James then set about using the oxygen isotopes, 16O and 18O, in the hope they would present matters in a different light. This brought about the need for completely new laboratory skills. Oxygen cannot be ionized thermally, so the work required an electron-bombardment machine at the Geophysical Laboratory, where colleagues Thomas Hoering and Douglas Rumble provided instruction. The extraction of oxygen from rocks must employ reagents free of oxygen, which in practice required the use of bromine pentafluoride, a colorless liquid of such notorious properties that its procurement presented a problem in its own right. Comparisons of oxygen and strontium isotope
ratios disclosed isotopic patterns that helped identify possible sources for the Andes: the mantle, altered oceanic basalts, ophiolite basalts, geosyncline sediments, deep-ocean sediments and continental-derived sediments. An interpretation was found that fitted these puzzling rocks by comparing their patterns with those from the assortment, and it pointed strongly toward the ascending magmas having incorporated rather large quantities of eroded material found in continental platforms. The basic idea of the Andes having been formed by the same mechanism active in island-arc volcanism was retained with geochemical differences that could be attributed to the relative proximity of continental platforms. In 1980 Hofmann left the Department to become the director of the Max-Planck-Institut für Chemie in Mainz. Coincidentally Richard W. Carlson, who obtained his Ph.D. working with the developer of the Sm–Nd system, arrived as a postdoctoral fellow and was appointed to the staff the following year. A similar pattern was repeated four years later, leading to Steven Shirey's appointment to the staff.
31 ISLAND-ARC VOLCANOES
During the winter of 1977–78, seismologist Selwyn Sacks and geochemist Fouad Tera discussed over lunch a problem that troubled them about the subduction of lithospheric plates. It was clear that the lines of volcanoes that arranged themselves about 100 km behind a deep oceanic trench, which marked the subduction, had some kind of causal relationship with the plate, but did the connection go beyond geometry? More to the point, did the plate furnish material for the magma that surfaced in these volcanoes? It was not a new problem, having been posed as soon as the dynamic model of the Earth had gained acceptance. Needless to say, two parties had formed in the dispute, one seeing evidence for the subducted material in the lavas, the other seeing none. The difficulty was that all the chemical elements and isotopes were subducted, all the elements and isotopes were present in the mantle wedge that lay over the subducting plate, and all the elements and isotopes were present in the erupting lavas, so conclusions based on analyses of lava turned on an interpretation of relative proportions. Tera made the observation that the cosmogenic isotope 10Be, produced in the atmosphere by cosmic rays, had been measured in deep ocean sediments and that its half-life of 1.5 million years was sufficiently long for it to be present in the lavas, if they incorporated any of the sediment and if the sediment had been transported to the roots of the volcanoes at speeds attributed to plates. The isotope's half-life was so short that it was otherwise insignificant in the Earth's inventory. To settle the matter one need only extract the beryllium from the lava and examine it for radioactivity. In their enthusiasm they went to Brown, the resident nuclear physicist, with the idea that they might remove the beryllium from the rock, following which he would determine the level of radioactivity with a suitable radiation detector. 
The idea, so happily conceived, shattered on the arithmetic of counting, for it was quickly demonstrated that for the amounts of 10Be that might be expected in a sample, a ton of basalt would have to be processed in order to yield enough decays for a determination. In addition to that, 10Be decays with a weak beta particle and no accompanying gamma ray, making it easily confused with other radiation, and it would be very difficult to reduce background radiation to the point where one could obtain a believable signal. It was, in short, an impossible experiment.
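The arithmetic is easy to reproduce. Taking the roughly 10^6 atoms per gram later found for arc lavas (a figure quoted further on in the chapter) and the 1.5 Myr half-life, the activity follows from N ln(2)/t½:

```python
import math

# The decay-counting arithmetic that killed the idea: activity = N * ln(2) / t_half.
# Concentration of ~1e6 atoms 10Be per gram is the figure quoted later for arc lavas.
HALF_LIFE_S = 1.5e6 * 3.156e7        # 1.5 Myr half-life, in seconds
atoms_per_gram = 1.0e6

activity_per_gram = atoms_per_gram * math.log(2) / HALF_LIFE_S  # decays per second
per_gram_per_year = activity_per_gram * 3.156e7
per_ton_per_day = activity_per_gram * 1.0e6 * 86400.0

print(f"{per_gram_per_year:.2f} decays per gram per year")      # about 0.5
print(f"{per_ton_per_day:.0f} weak betas per day from a ton")   # on the order of a thousand
```

Half a decay per gram per year means that even a ton of processed basalt yields only a thousand or so weak betas a day, to be picked out against detector background.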
About a week after this discussion, Brown read of an experiment in which 14C had been measured using a nuclear particle accelerator as a mass spectrometer, and 10Be should be susceptible to the same procedure. Within a relatively short time this had been accomplished by Grant Raisbeck.1 With this the problem took on a new face. Use of the DTM Van de Graaff was quickly ruled out because its energy was much too low. The obvious kind of machine for the task was the tandem Van de Graaff, which not only could attain the energies needed but also had an ion source that was easily accessible for changing samples. One at Rutgers University came to mind because the director of the laboratory was Georges Temmer, a DTM veteran. He was interested, and Brown took samples for a preliminary experiment on 16–17 May 1978. The result was failure; the transmission of the accelerator was about 0.001 at best, and subsequent work was unable to increase this to where the experiment would be reasonable. The next months were spent inquiring among various laboratories that might be capable of collaborating and hoping the performance of the Rutgers machine might become acceptable, but without success. On 28 June 1979, Roy Middleton, Director of the Tandem Laboratory at the University of Pennsylvania, suggested a visit to discuss the problem; he became intrigued with the study of volcanoes and agreed to see what his machine could do, for its transmission was very nearly 100%. He introduced an assistant, Jeffrey Klein, and a lasting and very profitable working arrangement was formed between DTM and Penn. It was possibly the best laboratory for the work because Middleton was world renowned as the expert on the necessary negative ion source and an accomplished accelerator man. There were two experimental problems to be overcome: adapting the tandem to work as a mass spectrometer and extracting the microgram quantities of beryllium from basalts. 
The former was obviously the responsibility of the two in Philadelphia; for reasons dealing with local research priorities, responsibility for the latter fell to Brown (a physicist) with some instruction from chemist Tera. Samples of beryllium were to be formed into BeO, and this powder was loaded into the ion source whence it emerged as BeO−. On leaving the ion source the beam passed a mass analyzer that selected either the 9Be or 10Be component for the accelerator. The ions were accelerated to 8 MeV and passed through a combined carbon foil and gas stripper that produced a Be3+ beam, which had attained 27 MeV on leaving the machine. This energy allowed a detector specially designed by Middleton and Klein to count the 10Be ions in the presence of orders of magnitude more extraneous ions that exited the accelerator and entered the detector. A detector that so utilized high-energy ions was the secret of accelerator mass spectrometry. Atoms of the sample were literally counted.
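The energy figures quoted here are mutually consistent if one assumes an 8 MV terminal voltage, an inference from the numbers in the text rather than a stated specification: the singly charged BeO− molecule gains 8 MeV reaching the terminal, the beryllium fragment keeps its mass share of that energy when the stripper destroys the molecule, and the 3+ ion gains a further 24 MeV on the way out.

```python
# Tandem energy bookkeeping for the 10Be run, assuming an 8 MV terminal
# (inferred from the quoted 8 MeV and 27 MeV figures, not stated in the text).
TERMINAL_MV = 8.0
M_BE, M_O = 10.0, 16.0  # nucleon numbers of 10Be and 16O

e_molecule = TERMINAL_MV * 1.0                 # MeV gained by the charge -1 molecular ion
e_fragment = e_molecule * M_BE / (M_BE + M_O)  # Be keeps its mass share after stripping
e_final = e_fragment + 3 * TERMINAL_MV         # Be3+ gains 3 x 8 MeV down the second stage

print(f"Be energy leaving the machine: {e_final:.1f} MeV")  # close to the quoted 27 MeV
```

The mass-sharing step is why a molecular negative ion can be injected at all: the unwanted oxygen simply carries away its portion of the energy at the stripper.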
One problem common to both sample preparation and mass spectrometry was boron. Boron has two natural isotopes, 10B and 11B, and 10B forms BO− and hence follows the 10Be through the system. Worse still, it was an important contaminant in every sample investigated even after the most uncompromising chemistry. The special detector was made capable of eliminating it from the data, provided the boron current was kept below a certain limit, and it was up to the chemist to hold it below that limit, but the first samples delivered for trial seemed to be made of pure boron. In Philadelphia they worked on the detector and details of the ion optics; in Washington on the extraction chemistry. For both laboratories it was a trial and achievement procedure. Basalts proved to be the most difficult kind of sample to run, owing to the difficulty of the chemistry and the low level of 10Be in the rock, so the first science attempted was the examination of sediments and soils, which had concentrations of the isotope about a hundred times greater than the island-arc lavas. In March 1982 the first data were acquired from volcanoes, and the results were worth the three years expended developing the technique. Data were obtained from a suite of 15 samples from Central American and Aleutian island-arc volcanoes and one each from Japan and Taiwan. Of these, all of the Central American and Aleutian rocks showed concentrations significantly greater than a million atoms per gram. Four samples were run as a control: hot-spot and rift volcanoes and mid-ocean-ridge and flood basalts, all having no relationship to a subducting plate, and all with concentrations about a factor of 20 smaller than the island-arc rocks. The two samples from Japan and Taiwan also showed very low values, which gave the group pause; the explanation, which lay in the diverse nature of island-arc subduction, came later. 
There had been concern from the beginning that 10Be in rain could enter the samples and falsify the data, so considerable effort had gone into obtaining samples that were either very fresh or cut from deep within a rock, hence the control lavas for comparison. It was necessary to obtain a large database, and other lines of research involving 10Be were also being tried, so the four days a month generally allotted on the tandem (it was helpful that the laboratory director was an enthusiastic coworker) were used on a 24-hour schedule. The off time sufficed for Brown to have a new selection of samples run through chemistry, which he brought along to begin the next run. Owing to the large amount of accelerator time required, which few nuclear physics laboratories wished to devote to such work, there were few, though sufficient, confirmations, and nearly all of the data acquired, and consequently the science, came from the DTM–Penn collaboration. One unfortunate measurement from another laboratory, which was later retracted,
claimed the results did come from contamination by rain water. This happened about the time that Julie Morris came as a postdoc from MIT, which allowed her to apply the skills of thin-section examination to demonstrate that there was no water alteration. She soon improved the chemistry, lowering the boron to ultimate levels, and began to make the tandem runs. In addition to expanding the sample base, Morris also performed the experiment that had been done incorrectly and that seemed to invalidate the group's work. This involved breaking the sample rock into its characteristic mineral grains and doing the measurement on collections of the various types, but in addition to measuring the 10Be concentration, the 9Be was also measured. For a given rock the two isotopes should behave the same in the melt, so that a plot of one against the other should approximate a straight line, which proved to be the case. Measurement of the amount of 9Be present became standard and proved to be a useful interpretive parameter. A crude mathematical model was also developed that allowed the speed of the plate, the distance from trench to the volcano roots and the offshore sediment inventory to be related to the fraction of the sediment incorporated, which was found to range from 2% to 10%. By 1985 the group had measurements from 106 arc volcanic rocks and from 33 non-arc control basalts. Twelve different arcs were represented, of which three with a total of 49 rocks showed 10Be at levels much greater than the control group. These were from Central America, the Aleutians and Japan, although half the Japanese samples were indistinguishable from the control group. Concentrations in the control group were typically < 5 × 10^5 per gram; concentrations in the arc volcanoes were > 10^6 per gram and frequently > 10^7 per gram. (These numbers should be compared with 10Be concentrations in deep-ocean sediments of 5 × 10^9 per gram.) Nine arcs had concentrations indistinguishable from the control group. 
These arcs – and the investigators considered themselves lucky not to have used them for their first studies – proved useful in explaining the low levels, which were the result of inadequate inventories of 10Be in the offshore sediments or of a travel time of the plate that was too long, or both. Use of this isotope continued but in a more selective manner, as its main contribution had been made, but Morris, who became a staff member in 1987, used it to open a new line of research – one that changed the hated enemy boron into a venerated confederate. There were chemical reasons to believe that boron would track beryllium in many processes – indeed this had previously been the problem – and it was known to be concentrated in oceanic sediments. A series of measurements was undertaken with the cooperation of William P. Leeman and postdoc Jeff Ryan that determined the concentrations of 9Be, 10Be and B in rocks from arc volcanoes having 10Be. The results showed clearly that B tracked 10Be; plots of 10Be/Be against B/Be were straight lines with correlation coefficients greater than 0.85.
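The crude transport model described above can be sketched numerically: 10Be decays during the plate's transit from trench to volcano roots, so plate speed, path length, sediment inventory and incorporated fraction together set the expected lava concentration. All parameter values below are illustrative stand-ins, not the group's actual numbers.

```python
import math

# Sketch of the kind of crude model described in the text. 10Be decays in
# transit, so the surviving fraction links plate speed and path length to
# the lava concentration. Every number here is illustrative only.
HALF_LIFE_YR = 1.5e6

def surviving_fraction(path_km, speed_cm_per_yr):
    """Fraction of 10Be surviving the trench-to-volcano-roots transit."""
    transit_yr = (path_km * 1.0e5) / speed_cm_per_yr  # km -> cm
    return 0.5 ** (transit_yr / HALF_LIFE_YR)

def predicted_lava_10be(sediment_atoms_per_g, incorporated_fraction,
                        path_km, speed_cm_per_yr):
    return (sediment_atoms_per_g * incorporated_fraction
            * surviving_fraction(path_km, speed_cm_per_yr))

# A fast plate: 250 km path at 10 cm/yr is a 2.5 Myr transit, ~32% survival.
# With 5e9 atoms/g in sediment and 3% incorporation the lava holds ~5e7/g.
print(predicted_lava_10be(5.0e9, 0.03, 250.0, 10.0))
```

A slow plate makes the exponential punishing, which is the model's explanation for arcs with no detectable 10Be despite ample offshore sediment.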
A number of facts point strongly to the transfer of beryllium and boron in an aqueous phase rather than in the melting of the subducting slab. To the misfortune of the Department, Morris decided in 1993 to accept a position at Washington University in order to live with her husband, who had received a tenure-track position there. A few years thereafter the tandem in Philadelphia was shut down because Middleton and Penn declined to make it a service laboratory for accelerator mass spectrometry and the NSF nuclear physics program could no longer justify its operation. Morris was able to continue investigations with 10Be using an accelerator at Lawrence Livermore Laboratory. The study of arc volcanism did not disappear from DTM. For boron to track beryllium made it obvious that the isotopic ratio of boron should be measured, using a normal mass spectrometer. This proved very difficult both in the chemical extraction of the minute amounts of boron in the rock and in the technique of running the beam from a thermal ion source. In one of nature's jokes, small traces of contaminants seriously interfered with the Cs2BO2+, a molecular ion that ran on thermal machines, and only carefully tested chemistry and artistic handling of the spectrometer could produce the desired ratios. This was done not at DTM but in Japan by Tsuyoshi Ishikawa,2 who then spent a year as a visitor with Tera.3 The results were straightforward. Ratios of 11B to 10B had consistent values for a given arc, but differed from arc to arc and allowed of a simple explanation: isotopically lighter boron came from deeper volcanoes. The appearance of a radically new measurement technique calls for experimenters to think of applications unforeseen in its first use, and this was certainly true of accelerator mass spectrometry. It proved capable of analyzing other cosmogenic isotopes besides 14C and 10Be, specifically 26Al, 36Cl, 41Ca and 129I, which has led to research in various fields. 
Brown examined the use of 10Be for studying sediments and soil erosion as well as a number of “crazy” ideas, but none of these had application to the general lines of research of the Department and they were not followed up. And so ended a beautiful, if short-lived, epoch in DTM's history.
32 SEISMOLOGY REVISITED
When seismology was first studied at DTM the only data from the explosion-induced waves were the times of first arrivals and their amplitudes. This information sufficed to resolve questions about the structure of the crust and to locate earthquake sources as well as to examine some questions about interior structure. It was well understood, of course, that the seismometer signals following the first arrival contained valuable information, but the seismograms were compounds of the poor frequency response of the receiving instruments and the complexity of arriving signals. During the last quarter of the century a continual improvement of seismometers and the remarkable expansion of computer power allowed investigators to dispose of most of these restrictions. A memorable step was the acquisition in 1982 of the Digital Equipment Corporation VAX 11/780 computer that was located at the Geophysical Laboratory and connected to the local PDP-11/34 by a high-speed telephone link. The earthquake generation of seismic waves results from the motion of the rock surfaces on faulting. Important parameters are the total energy release, the area of slipping, the amount and direction of displacement and its duration. The seismogram is the record of the motion of the rock as received at the instrument location, but this record is a composite of the response of the instrument, which is known, and the attenuation and propagation for the region between source and receiver, quantities that are generally known within limits that allow meaningful adjustment. Paul Silver, a staff member appointed in 1981, used the new computing power to determine the seismic parameters for the magnitude 6.9 earthquake of 15 October 1979 in the Imperial Valley, California, and its aftershocks. 
The new computer allowed the calculation of the Fourier transform of the time-dependent signal received into a frequency-dependent function, a technique that had proved invaluable in analyzing a wide range of communications problems. Owing to the spatial extent of the faulting, time-dependent signals are received at the various instruments with durations determined by the time and location of the slipping and the orientations of the fault relative to the directions to the individual receivers. The pulse width cannot be read directly from the time-dependent signal, even after correction for instrument response, but can be extracted from the frequency-dependent transform.
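The recovery of a pulse duration from the spectrum can be illustrated with a toy calculation: a boxcar pulse of duration T has spectral nulls at multiples of 1/T, so locating the first null recovers the width that is hard to read off the time series directly. The synthetic signal below is purely illustrative and unrelated to the Imperial Valley data.

```python
import numpy as np

# Toy example of reading a pulse duration from the frequency domain: a
# boxcar of duration T has its first spectral null at f = 1/T.
dt = 0.01                       # s, sample interval
t = np.arange(0, 100, dt)
T_true = 4.0                    # s, pulse duration to recover
signal = ((t >= 10) & (t < 10 + T_true)).astype(float)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=dt)

# First local minimum above zero frequency marks f = 1/T.
mins = np.where((spectrum[1:-1] < spectrum[:-2]) & (spectrum[1:-1] < spectrum[2:]))[0] + 1
f_null = freqs[mins[0]]
print(f"recovered duration: {1.0 / f_null:.2f} s")
```

Real seismograms demand the full deconvolution of instrument response and path effects that the text describes, but the principle of working in the frequency domain is the same.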
Data for this study came from 15 stations of the NSF Global Digital Seismic Network, each having a long-period three-component seismometer. This study extracted information about the source event from seismic data, finding for the main quake a slip-length of 40 km, a width of 12 km and a rupture velocity of 2.5 km/sec; the direction of rupture so determined was found to be in good agreement with the known line of the fault. Seismometer design then underwent a pivotal change, evolving beyond the dynamic system in which the inertial mass was maintained in its neutral position by springs to one in which it was maintained there by amplifier-driven coils that kept the mass fixed relative to the instrument body. The currents in the coils provide the signals for the three components of motion. This freed the instrument from the mass–spring frequency resonance so hostile to a flat response and combined the three instruments into one unit. New seismometers were essentially flat from 0.008 Hz to 50 Hz, making the output signal a much closer approximation to the spectrum of a seismic wave. Not only that, the same instrument provided velocity data for three coordinates in the same package, one that was small and easily transportable. Advances in other technologies would allow the instrument to locate itself through observation of global positioning satellites, from which also came time signals of extreme accuracy; power was obtained from solar panels. Data could be culled by a computer at the station to eliminate signals not associated with events above some amplitude threshold and stored for later retrieval. The existence of such instruments opened possibilities for research that sprang to the minds of James and Sacks: deploy for periods of months a large array of such self-contained instruments, perhaps several hundred or even a thousand, over a limited area of the globe in order to examine a specific structure by observing with the array the many waves passing through it. 
The cost of providing the necessary instruments and operating them was substantial, at least when measured against the sums generally available for such research, even from government sources. This led in 1984 to the formation of PASSCAL (Program for Array Seismic Studies of the Continental Lithosphere), an organization of interested research groups that could request government grants. Carnegie furnished funds for organizational meetings between potential investigators and seismometer manufacturers. Sacks and Robert P. Meyer of Wisconsin, a long-time associate of the Department’s field activities, jointly chaired the organization. Specifications for the portable stations were worked out so as to allow operation for about a month without attendance, data to be collected at fixed intervals like syrup from maple trees. Government funds for hundreds of these portable seismic stations were not forthcoming, but Carnegie immediately purchased ten for DTM, allowing more modest observations to be made and experience to be gained with the new technique. Other groups were soon found who acquired or had compatible instruments, and collaborative expeditions became the rule, although
Figure 32.1 Marcelo Assumpção, Randy Kuehnel and David James service a portable broadband seismic station in Olímpia, São Paulo, Brazil. The station was part of an array installed to study deep lithospheric structure beneath the ancient continental shield of South America. 1993.
never quite fielding the hundreds that had filled their original hopes. The first such expedition, undertaken with the University of Wisconsin from June through October 1989, deployed 25 instruments in a 1500 km transect running generally from Wyoming to Ontario over three major geologic provinces.1 This was followed by positioning the ten Carnegie instruments over the northern coast of South America in collaboration with Intervep, the Venezuelan national oil company. The Carnegie array then went for an extended study from November 1992 to June 1995 in southeast Brazil in cooperation with the University of São Paulo2 (Fig. 32.1). The array was also deployed to study Iceland and Hawaii and has at this writing completed a large multi-institutional study of the Kaapvaal Craton in South Africa (Fig. 32.2).
Figure 32.2 The “huddle test” of 40 Streckeisen three-component broadband seismometers for use in studying the Kaapvaal Craton in South Africa. The instruments to be set out in the field were being tested with their recorders and running from batteries to ensure that performance would be satisfactory. The seismometers are on the floor, the recorders are on the table and the batteries are on the floor to the left. At the left is Rod Green of the University of Witwatersrand, in the center is Randy Kuehnel, the DTM seismometer technician, and to the right rear is Adriana Kuehnel. Circa 1997.
attenuation and propagation functions until the best fit had been obtained for the seismograms of all receivers. The result is a model that approximates what is thought to take place at the rupture; the attenuation and propagation terms provide information about the structures through which the waves pass. The three-component, broadband seismometers opened a fruitful line of research for Silver,3 which he exploited with the portable seismic array. Seismic waves are propagated as pressure (P) and shear waves (S), the latter having particle motions perpendicular to the direction of propagation. When S waves are incident on a three-component seismometer, components will be registered depending on the orientations of the incident propagation vector and the plane of polarization relative to the coordinate system of the receiver. If the wave is incident from directly below, only instruments responding to components in the plane of the Earth's surface will respond. (The instrument responding to vertical motion will record P waves.) The first experiments conducted with explosives by the Department in their study of the crust disclosed a complicated structure; not only were the expected uniform layers not present but there were strong anisotropic
effects that varied with bearing. The absence of isotropic behavior was found at greater depths as the decades proceeded, and Silver approached the matter in an engagingly simple manner by using the waves arriving at a station from nearly below. This allows straightforward use of the seismometer records and limits investigation to the crust and mantle just below the station. He selected particular arrivals, the SKS, shear waves that originate at the core–mantle boundary by conversion from core P waves. This provided not only waves directed almost radially out from the center of the Earth but also shear waves uncomplicated by the source characteristics. It was found with very few exceptions that these waves were split into two waves of orthogonal polarization and traveling at slightly different velocities. The resulting data were the orientation of the faster component and the time delay of the slower. The results of the first deployment of 25 instruments in 1989 over Wyoming to Ontario showed the orientation of the faster wave lay in a generally northeasterly direction for all stations. Similarly, the observations at the northern coast of South America and of south-central Brazil showed strong effects. Silver is extending these studies with portable arrays and data from network stations with the object of mapping the worldwide anisotropy of the mantle. The anisotropies observed have their mineral basis in the properties of two rock-forming minerals, olivine and orthopyroxene, as functions of strain. This crystalline anisotropy causes a splitting of S waves much as a calcite crystal splits optical waves. Deep in the mantle, high temperatures prevent the preservation of strain and hence transmit waves without splitting, but as the waves near the surface colder temperatures allow the retention of strains resulting from conflicting plate motions that produce mantle deformation, deformation from which gross features of the crust, such as mountains, are forced. 
The orientation of the fast component allows the direction of the strain to be determined, and the delay of the slow component gives a rough measure of the depth where the splitting begins. With this another chapter has been written in answer to the ancient question of how mountains are formed, plate tectonics having provided the first. Understanding the movements of the Earth's tectonic plates is the most fundamental problem facing geophysics, not only as a piece of basic knowledge but also for the practical reason that it underlies the understanding of two dangerous natural phenomena: earthquakes and volcanoes. Previous sections have described the Department's approach to these complicated matters, whose results have convinced Department scientists that much more elaborate methods of observation will be required, and two suitable projects have emerged, one recently initiated, the other being planned. Both require cooperation with other institutions, as they demand resources in personnel, equipment and funds beyond what DTM can provide even with generous grants.
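The splitting observables reduce to simple arithmetic: the delay accumulates because the slow wave crosses the anisotropic layer at a slightly lower velocity. The sketch below is illustrative only; the layer thickness, shear velocity and percent anisotropy are assumed values, not numbers from the deployments described above.

```python
# Rough delay between fast and slow split S waves crossing a single
# anisotropic layer; all numbers are assumed for illustration.

def splitting_delay(thickness_km, vs_km_s, anisotropy_pct):
    """Delay (seconds) accumulated between the slow and fast waves."""
    dv = vs_km_s * anisotropy_pct / 100.0
    v_fast = vs_km_s + dv / 2.0
    v_slow = vs_km_s - dv / 2.0
    return thickness_km / v_slow - thickness_km / v_fast

# A 200 km layer at 4.5 km/s with 4% anisotropy delays the slow wave
# by well under two seconds.
dt = splitting_delay(200.0, 4.5, 4.0)
print(f"delay = {dt:.2f} s")
```

Delays of about a second thus point to splitting confined to the uppermost few hundred kilometers, which is why the delay of the slow component gives a rough measure of the depth at which the splitting begins.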
The Department of Terrestrial Magnetism
Selwyn Sacks and Alan Linde formed a partnership with the Japan Marine Science and Technology Agency with Kiyoshi Suyehiro, former DTM fellow and son of longtime Department associate Shigeji Suyehiro, as coproject leader. Their object was to examine the seismicity at regions of plate subduction. In summer 1999 the Ocean Drilling Program ship was used to install two borehole geophysical observatories in 1100 m deep holes in the ocean bottom off Tohoku, Japan, where water depths are about 2.5 km. These sites are immediately above areas where great earthquakes have occurred but where aseismic slip also takes place. Each borehole is instrumented with a strainmeter, a tiltmeter and two seismometers. Initial data have shown that the newly developed installation techniques, using the most advanced drilling methods and robot-controlled submarines, succeeded in establishing the two observatories. The expectation is that these and future sites will allow new insights into the behavior of subduction and the generation of great earthquakes. On 3–5 October 1999 Paul Silver chaired a meeting attended by representatives of 17 institutions to devise a plan for an extensive national effort to study the behavior of tectonic plate boundaries. The outcome was a proposal for the Plate Boundary Observatory, a project that will require federal funding approved by Congress. The plan would install global positioning receivers at approximately 100 km intervals from Alaska to Mexico, with denser clusters in regions that have high tectonic activity or volcanism or both, where the receivers will be augmented by borehole strainmeters. A full complement of instruments will require 1275 continuously recording positioning receivers and 245 strainmeters. Add to this the communication and data handling requirements, and the need for a very large expenditure is understandable.
33 GEOCHEMISTRY AND COSMOCHEMISTRY

The periodic table has a number of radioactive parent–daughter isotopes with half-lives useful for geochronology or geochemistry studies, but for many years only those susceptible to thermal ionization were of use. All elements can be ionized and therefore given over to mass analysis, but constraints apply. For most of this kind of work the elements of interest are present in relatively small concentrations, requiring an efficient ionization process if reasonable amounts of sample are to be employed. The thermal method has much higher efficiency than electron bombardment and does not require the complication of electrostatic energy analysis needed for most other methods of ionization. For many years these observational restrictions had limited analysis to samples containing the radioactive elements potassium, rubidium, samarium, thorium and uranium; oxygen relied on electron bombardment but was present in abundant amounts, even in small samples. An isotope pair that had long intrigued those in the field was the beta decay of 187Re into 187Os, having a half-life about the same as that of 87Rb. Little thought was given to using the pair because both isotopes are refractory elements from which it is impossible to form positive ions thermally, as they are hungry to bind electrons, not to give them up. In 1986 a research effort at the National Bureau of Standards by John D. Fassett to develop a selective technique of laser ionization succeeded in ionizing the Re/Os pair; this work, done with DTM research associate Richard J. Walker, found it possible to obtain ratios to an accuracy of 1%, yielding useful geological data for samples rich in the parent isotope. The results were sufficiently impressive that Brown, who saw Morris taking on the responsibility for 10Be, began a part-time apprenticeship in resonance ionization, as it was called, at the Bureau.
The DTM group saw important research possibilities and began altering the old 9 inch mass spectrometer for the laser method. During the construction of the new ion source, an experimental arrangement more complicated than the mass spectrometer itself, Shirey and Carlson set about developing chemical extraction techniques for use on basalts having very low rhenium and osmium concentrations. When the local laser machine was ready, the chemical extraction was also ready, its path to success marked by ingenious and difficult chemical techniques. Especially troublesome was osmium's volatility and general chemical loathsomeness and the difficulty of
removing the two elements from laboratory and reagent contamination. It was the work of a year but left the group with an excellent extraction technique for concentrations of parts per billion, ready for the new mass spectrometer. For a number of years G. K. Heumann in Germany had been quietly studying thermal ionization for negative ions and had found that the negative molecular ions OsO3− and ReO4− functioned perfectly on a thermal machine, thereby making use of the refractory elements' affinity for electrons. News of this came by way of Caltech within a month of the beginning of routine measurements with the laser machine, which was never used again after word of the new method arrived. The 15 inch was altered overnight to run negative ions, and after a few weeks of learning the peculiarities of the "inverted" instrument, Re/Os was being measured from nanogram samples at precisions of 0.02%. Two properties of rhenium and osmium, besides being refractory, set them apart from the other isotope pairs used in geochemistry. First, both are siderophile elements, meaning that they have strong affinities for metals, as compared with lithophile elements, which have affinities for the components of rock. This property has led to a large fraction of both elements having been drawn from the mantle into the iron–nickel core, producing very low abundances in the rocky components of the Earth. Second, in partial melts rhenium preferentially goes with the melt, osmium with the solid. (Elements of the first kind are called "incompatible," those of the second "compatible.") These properties combine to give portions of the mantle distinctive isotopic signatures that make the pair useful as tracers.
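The chronometer use of the pair rests on ordinary decay arithmetic. The following sketch computes the "Re-depletion" model ages described later in this chapter; the decay constant and chondritic reference values are typical published figures, assumed here for illustration rather than taken from the DTM work, and the sample ratio is hypothetical.

```python
import math

# Illustrative "Re-depletion" model age: the time at which a sample's
# 187Os/188Os ratio last matched the chondritic evolution curve,
# assuming all rhenium was removed at that moment.
# All constants below are assumed, typical published values.

LAMBDA_RE187 = 1.666e-11   # 187Re decay constant, per year
CHONDRITE_OS = 0.127       # present-day chondritic 187Os/188Os
CHONDRITE_RE_OS = 0.402    # present-day chondritic 187Re/188Os

def re_depletion_age(sample_os):
    """Years since the sample ratio lay on the chondritic curve."""
    growth = (CHONDRITE_OS - sample_os) / CHONDRITE_RE_OS
    return math.log(1.0 + growth) / LAMBDA_RE187

# A hypothetical xenolith ratio of 0.109 gives a minimum age of
# roughly 2.6 billion years.
age = re_depletion_age(0.109)
print(f"T_RD = {age / 1e9:.2f} Gyr")
```

Because any rhenium left behind or added later raises the measured ratio toward the chondritic value, such model ages are minimum ages, as the text notes below.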
Evidence had accumulated over the previous decades pointing to chondritic meteorites as samples of the material that initially formed the Earth, and one of the first findings of this new line of research was the discovery by DTM and others of how closely mantle-derived rocks follow chondritic ratios of 187Os/188Os (188Os is not the product of radioactive decay and is used as a concentration reference). Particularly noteworthy was the similarity of 187Os/188Os, after correcting for age, of the basalts of oceanic islands and chondritic meteorites. Exceptions tested the rule, exceptions that proved to have been from sources for which there was evidence for modification by mantle melting or for which the magmas had interacted with the crust. Cratons are vast, long-stable parts of continental crust that formed a few billion years ago and contain the oldest rocks on Earth. They have mantle keels that extend hundreds of kilometers in depth, and as such had early attracted the attention of the Department's seismologists, so it was not surprising that the Re/Os system was applied to these structures. Geochemists sample them through xenoliths, rocks which have been carried unmelted to the surface by explosive volcanism. The lavas of this volcanism have relatively low concentrations of osmium, greatly reducing the contamination
Figure 33.1 Steven Shirey, F. R. Boyd and Richard Carlson pose behind this unusually large xenolith from the deep mantle found in the Premier kimberlite of South Africa. A xenolith is a piece of rock broken from its place and carried relatively unaltered by the magma of a volcanic eruption. Xenoliths from the great depths of kimberlites contain small diamonds useful for geochemical study. Circa 1993.
of the xenolithic samples of the craton that they carry up. This problem of overprinting original isotopic signatures had prevented the same studies from being made with other isotope systems. Ratios of 187Os/188Os from the Kaapvaal (South Africa) and Wyoming cratons, again corrected for age, had values significantly below those of the meteorites and the mid-ocean ridges, indicating a history of rhenium depletion. These xenoliths also proved to be highly depleted in rhenium, suggesting that the keels they sampled are residues of an early partial melting, which "froze" the osmium in the future xenolith. By comparing their 187Os/188Os ratios with mantle components not affected by partial melting, one can estimate the age at which this event took place. Such estimates, called "Re-depletion" ages, are minimum ages because the original rhenium extraction was incomplete and some small amount of rhenium was added later by the extracting magma. Analysis of numerous xenoliths taken from depths as great as 200 km (determined from the diamond content and mineral compositions) indicates the keel of the Kaapvaal Craton to be at least 2.6 billion years old, in rough agreement with the ages of rocks taken from the surface, indicating that the entire structure formed at about the same time (Fig. 33.1). The temperatures and pressures prevailing at 200 km formed diamonds from the elemental carbon found there, which were brought to the surface
in explosive kimberlite volcanism. In a remarkable demonstration of the sensitivity of the techniques developed at DTM, Carlson, Shirey and Graham Pearson, a Carnegie Fellow, extracted rhenium and osmium from sulfide inclusions in some of these diamonds and determined isochrons for two such craton keels from quantities as small as 100 million atoms, providing ages of 2900 and 990 million years. Rhenium and osmium were also applied to the study of tektites, glasses generally believed to have been formed by the impact of meteorites with the Earth's surface. It was observed that the tektites and the rocks about the crater associated (at least thought to be associated) with the impact carried the Re/Os isotopic signature of meteorites. The indication of an impact event followed lines established a decade earlier by studies with accelerator mass spectrometry, which had shown that tektites had the 10Be signature of terrestrial sediments, not of the Moon, as proposed by some. There appeared in the 1960s a new form of mass spectrometer that opened new research possibilities. This instrument, called the ion microprobe, used a beam of ions, typically of thermally ionized cesium or oxygen from a plasma ion source, to create ions from a sample placed in this beam, which could be given an extremely small diameter. Secondary ions of any element could be so produced with varying degrees of efficiency. There were problems with the method, such as interferences by elemental and molecular ions, poor beam stability and poor reproducibility of standards, that eliminated it from the uses to which thermal machines were put, uses characterized by the need for very high precision. There were, however, other uses for such universal ionization, so the machines underwent development, generally at commercial manufacturers that had found less demanding markets for their first products, and by 1990 many of the defects of the original devices had been set aside or greatly ameliorated.
This came at a time when the Department expanded its interests in isotope work into cosmochemistry. Studies beginning in the 1960s of the isotopic composition of meteorites had disclosed much about the origins of the elements making up the Earth as well as the nuclear processes that had synthesized them. The isotope ratios observed in certain meteorite components were in some cases significantly different from the homogenized values of terrestrial samples, oxygen being an important example, but in 1988 researchers at Washington University in St. Louis used the ion microprobe to examine tiny crystals of SiC in situ in a meteorite and found ratios of 12C/13C ranging from 2/1 to 7000/1, vastly different from any values theretofore observed. Ratios so extreme and so different from solar system values led, when combined with their singular crystalline nature, to the conjecture that they were interstellar dust having different stellar origins from the events that created the bulk of the elements making up our solar system. They are thought to be samples of interstellar dust that had its origin in the stellar wind of red giants, probably from many
different stars. This discovery took place when Sean Solomon was emphasizing new research in the solar system, and led to the appointment of Conel Alexander, one of the discoverers, to the Department staff. It initiated a cosmochemical component to the geochemistry, and the new microprobe soon began delivering examples of similar extreme isotope ratios from nitrogen, silicon and oxygen, and the task of relating these data to the various kinds of synthesizing stars began. The addition to the staff in 1999 of Larry R. Nittler introduced the method he had devised that allows large numbers of micron-sized particles to be rapidly identified with the ion microprobe as possible presolar grains, which has greatly increased their inventory. Magmas reach the Earth’s surface in three general classes of volcanism: island-arc volcanoes, mid-ocean ridges and hot-spot volcanoes. The last category has origins that are deep within the mantle, possibly at the core–mantle boundary. Most of the hot-spot volcanoes form ocean islands and are quite numerous, especially in the Pacific. Over the years five major magma reservoirs have been identified by their chemical and isotopic compositions, but these identifications have been troubled by doubts as to the purity of the basalts examined, doubts caused by the need for the ascending magmas to pass through about 100 km of tectonic plate. If they react with the material of the plate, as one can easily imagine, the analyses will not reflect the source material but an indeterminate mixture. To examine this matter, Erik Hauri, a staff member appointed in 1994, studied xenoliths carried to the surface by the most famous of hot-spot volcanoes, Hawaii. Here the situation is different from that described for studying the keels of continental cratons. For them the volcanism was explosive, with the lavas reaching the surface from depths of hundreds of kilometers in a matter of days or even hours. 
This allowed little time for any reaction of the lava with the bulk of the extracted xenolith, but hot-spot volcanoes are the result of lavas that take thousands of years to pass through the lithospheric plate, thereby offering time for interactions at the xenolith surface. Here osmium provided an answer. Osmium has very low concentrations in the lavas in question and is therefore a sensitive indicator of reaction with plate rocks having a higher concentration. If the ratio of osmium to some other element that would have a different reaction rate remains constant, one can conclude that very little mixing takes place. A number of Hawaiian rocks were so analyzed for osmium and strontium, and a near linear correlation was found, strongly indicating that the lavas reaching the surface were proper samples of the reservoir. The decades of Carlson’s and Shirey’s tenure at DTM marked a period in geochemistry during which instrumentation underwent significant improvement and thereby opened new lines of research. A high-frequency discharge coupled inductively to low-pressure gas in a glass bulb, eliminating thereby any metal electrodes that could react with the plasmas formed, led to the
Figure 33.2 Tests being made on a large mass analyzer for the geochemistry section's ion microprobe. The gift of a 40 inch radius electromagnet from the University of Pennsylvania's Tandem Laboratory initiated this development, continuing a long tradition of mass spectrometer construction by the Department. Seen here examining the beam at the mid-point of its travels are Jianhua Wang, Ben K. Pandit and Erik Hauri. June 2002.
discovery of the Balmer lines of hydrogen, the key to understanding atomic structure that had eluded spectroscopists until the end of the nineteenth century. Such a discharge allows the ionization and excitation of atoms with very limited formation of molecular species. By 1980 mastering the highly significant details of such discharges and associated techniques had resulted in an instrument for near-universal spectroscopic analysis of high sensitivity, called inductively coupled plasma spectroscopy. The best method was to produce a discharge of argon near atmospheric pressure and introduce the sample for analysis into it at low concentration, thereby isolating the sample atoms. A high-resolution grating analyzed the light emitted from the discharge, which was scanned photoelectrically, and the resulting spectra were stored in a computer. The spectra so acquired had to be normalized and compared with standard spectra stored in the computer memory. The result was rapid spectrographic analysis at accuracies otherwise difficult to attain. Such an instrument was purchased for combined use by both DTM and the Geophysical Laboratory from a commercial supplier, in part with funds furnished by the National Science Foundation. It provided general chemical analyses for routine laboratory work.
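The Balmer wavelengths mentioned above follow from the Rydberg formula; a short sketch, using the standard hydrogen Rydberg constant (a textbook value, assumed here):

```python
# The hydrogen Balmer series from the Rydberg formula:
# 1/lambda = R_H * (1/2^2 - 1/n^2), for upper level n > 2.

RYDBERG_H = 1.0967758e7  # hydrogen Rydberg constant, per metre (textbook value)

def balmer_wavelength_nm(n):
    """Wavelength in nanometres of the Balmer line from upper level n."""
    inv_wavelength = RYDBERG_H * (1.0 / 4.0 - 1.0 / n**2)
    return 1e9 / inv_wavelength

# H-alpha, H-beta, H-gamma: roughly 656, 486 and 434 nm.
print([round(balmer_wavelength_nm(n)) for n in range(3, 6)])
```

It was the simple integer regularity of these wavelengths that made the series so decisive a clue to atomic structure.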
Thermal mass spectrometers also underwent development by commercial suppliers, and in 1985 the Department bought an instrument that stored 16 sample filaments and operated automatically to bring them to emission, focus the ion optics, collect the data when the run was judged to be satisfactory, terminate the collection and advance to the next sample. The time required by the tedious operation of the machine by the investigator was replaced by increased time demanded in the chemical extraction of samples to satisfy such a greedy device. An argon discharge also ionizes gas and does so with no prejudice for one isotope over another, so it was not long before commercial manufacturers incorporated it as the ion source of their new mass spectrometers. Many technical problems had to be solved for this method of universal ionization to become useful for geochemical isotopic analysis. At their current state of development, plasma ion sources cannot match the ion transmission efficiencies of thermal ionization for those elements that ionize easily, but for elements with high ionization potentials plasma sources produce one to two orders of magnitude improvement in sensitivity. Perhaps the most important aspect of these instruments, however, is that, at least to first order, the way in which they fractionate isotopes during analysis is a function only of the mass of the isotope and not its chemical identity. In thermal ionization isotope fractionation occurs primarily during the evaporation and ionization by the hot filament. Evaporation easily fractionates isotopes from one another but does so in a chemically dependent way, since the evaporation temperature depends on the element. Essentially everything entering the plasma is ionized instantly, and fractionation results primarily when the electrically neutral mixture of ions and electrons enters the mass spectrometer and the ions are extracted from the electrons by the high voltage present.
Such rapid separation of positive from negative charges causes tremendous repulsion of the ions from one another. Electrodes attempt to recover all the positive ions, but some are simply too violently expelled to be recaptured. This loss of ions produces fractionation, but one dependent only on the mass of the ion, not its chemical species. Consequently, fractionation is approximately a function of mass and is nearly constant in time, since the source of the ions is a continual injection of sample into the plasma. This allows the precise determination of the isotopic composition of every solid element of the periodic table. Most of the light elements experience mass fractionation in the chemical reactions of their natural environment, and their isotopic composition is a sensitive tracer for a variety of processes, such as climate, weathering and biological activity. Thus plasma mass spectrometry can address a whole new class of geological and environmental problems that can broaden the future research of the geochemistry group.
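Because the fractionation depends only on mass, it can be removed with a simple correction law calibrated on an isotope ratio of known composition; an exponential law is one common choice. The sketch below uses invented masses and ratios purely for illustration and is not the Department's actual data-reduction procedure.

```python
import math

# Illustrative exponential-law mass-bias correction for a plasma-source
# mass spectrometer: R_true = R_measured * (m_num / m_den) ** beta.
# All numbers below are invented for the example.

def exp_law_correction(r_measured, m_num, m_den, beta):
    """True ratio from a measured one under an exponential mass-bias law."""
    return r_measured * (m_num / m_den) ** beta

def beta_from_reference(r_meas_ref, r_true_ref, m_num, m_den):
    """Solve for beta using a ratio of known ('true') composition."""
    return math.log(r_true_ref / r_meas_ref) / math.log(m_num / m_den)

# Calibrate beta on a reference isotope pair of known composition,
# then apply it to the ratio of interest at neighboring masses.
beta = beta_from_reference(r_meas_ref=0.7400, r_true_ref=0.7325,
                           m_num=88.0, m_den=86.0)
corrected = exp_law_correction(0.5100, 87.0, 86.0, beta)
print(beta, corrected)
```

Because the bias is purely mass dependent, one calibration serves every ratio measured in the same run, which is what makes the plasma source so broadly useful.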
34 THE SOLOMON TRANSITION
George Wetherill stepped down as Director on 30 June 1991, having attained the mandatory age of 65 for retirement from an administrative position, although he remained as a member of the scientific staff. He could look with satisfaction on the Department's scientific condition. The three categories of seismology, geochemistry and astronomy were independently strong and had healthy intellectual contacts with one another and the international scientific world. The two geologically oriented sections both sought to understand the functioning of the Earth and its most ancient history. Astronomy had a sufficiently strong component of star and planet formation to provide a healthy exchange of ideas, as enlivened discussions at seminars demonstrated, and the geochemists' concentration on terrestrial rocks resulted from the rarity of significantly different extraterrestrial samples, not the narrowness of their interests. He had found his relations with the President less satisfying. He had left the Institution during Tuve's tenure, when outside funding was rarely encountered and was never used to support routine Department research activities. This strict rule had been breached during the Bolton years, and on returning to the Institution to become Director, Wetherill had continued to allow individual investigators to accept grants. This form of financing was given strong emphasis when James Ebert became President, as he insisted that investigators submitting proposals apply not only for their immediate research needs, such as equipment, laboratory and field expenses, and support for visiting scientists, but also for part of their salaries and for substantial amounts of overhead. Wetherill opposed this approach to shoring up the Institution's finances but had to comply. A second disagreement with the Institution had been the decision, through all its preliminary variations, to colocate the Geophysical Laboratory at DTM's Broad Branch Road location.
A third cause of friction came to the fore when Maxine Singer became President in 1987. A strong component of the Department's astronomy program was the extragalactic observational work of Rubin, Schweizer and Graham, although Graham studied star formation in ways that were tied to planetary formation. The Trustees and the astronomers on the West Coast had long been of the opinion that for the Institution to maintain two astronomy groups was poor administration. As the DTM astronomy showed no signs of
fading away, Singer set the correction of this as a long-term goal. Although Wetherill recognized the administrative anomaly, he welcomed the vigorous intellectual contribution by the astronomers and defended their position successfully. Two sources of contention between the Director and the President had been resolved by the time of his retirement, but the third remained. The President began searching for a new Director well before Wetherill's retirement. Abelson had broken precedent by discussing candidates with a staff committee, which had proposed Wetherill. Rather than deal with a staff committee, Singer held regular meetings with the staff and accepted individual communications about candidates from those who wished to contribute. Three excellent candidates were examined and visited the Department, all declining after extended discussions, a situation that created tensions between the staff and the President. During these 14 months Brown served as Acting Director and became the first in living memory not to award cost-of-living salary increases. The selection and acceptance of Sean C. Solomon, a professor of geophysics at the Massachusetts Institute of Technology, sufficiently matched the man to the job as to make the painful delay worthwhile. He was a distinguished seismologist who had held responsibility for the scientific interpretation of the NASA mission to Venus that had mapped with radar the surface of the cloud-enclosed planet from an orbiting spacecraft, work that formed strong scientific ties to two of the DTM groups. From the President's point of view he removed one of the matters that had clouded her dealings with Wetherill, for he was not averse to using grant money to augment the Institution's finances, a view that came as a natural consequence of his experience in large-scale NASA projects and from the attitudes prevalent at a major research university such as MIT.
The anomalous position of extragalactic astronomy reached a climax in a meeting between the astronomy staff, the President and the Acting Director on the day before Solomon visited the Department. Singer announced that no more staff positions would be filled at DTM in that discipline. With Rubin declining to retire at 65, this set the event a few years into the future. To the approbation of the staff, Solomon took a sympathetic attitude toward astronomy, and the comradeship of the staff cannot be better illustrated than in the support for the astronomers by those in other disciplines, even though they might reasonably have expected their own fields to benefit from the vacancies created. The shift away from extragalactic astronomy became tangible in 1999 when Schweizer, the youngest of the group, decided to accept a long-extended invitation to join the Carnegie Observatories. This coincided, although without overt connection (the opportunity came as a surprise), with the appointment of R. Paul Butler to the staff. Butler's extraordinary instrumental and observational work in discovering planets in other solar systems1 had resulted
in his appointment being given unanimous approval by the staff when the Director requested their opinion, but the appointment was a clear alteration in the Department's direction. The most noticeable change in the Department identified with the new Director was the increase in pre- and postdoctoral appointments and visitors, whose numbers often exceeded those of the staff. He also supported a program for summer research interns in collaboration with the Geophysical Laboratory that began shortly after his arrival and was expanded into a National Science Foundation program of about a dozen high-school and undergraduate students for each department. Solomon's personal research interests continued to follow the seismology of the Mid-Atlantic Ridge and the geophysics of Venus; from the latter came evidence that the planet's tremendous atmosphere has influenced its tectonic deformation.2 Increasingly his energy engaged a new solar system exploration.3 In 1999 NASA approved a $286 million project to place a spacecraft in orbit around the planet Mercury in September 2009, a project for which he is the principal investigator. The planning and construction of the craft as well as the data acquisition and the myriad details of the project are being carried out by the Applied Physics Laboratory, DTM's scientific child from the time of the proximity fuze. The project was given the acronym name MESSENGER, derived from a collection of words intended to be forgotten. The planet had been viewed briefly in 1974–75 from three flybys of Mariner 10 but has not been visited since. Because of Mercury's nearness to the Sun, any spacecraft approaching it from the Earth will gain a large amount of energy, requiring a substantial payload of fuel for braking and an elaborate spatial ballet to place it in orbit. The instruments carried will be detectors for gamma rays, X-rays and neutrons. The planet's surface will be studied with a radar altimeter.
An important goal of the venture will be the accurate mapping of Mercury’s magnetic field, in the hope of answering questions that inspired Louis A. Bauer but at significantly greater cost.
35 THE SUPPORT STAFF
Behind Galileo's amazing discoveries with the telescope was a lens grinder, whose competence made the observations possible. This is a pattern that has repeated itself many times over, and only a few observational and experimental scientists have been able to function without the help of skilled craftsmen, their skills often promoting them into collaborators, acknowledged at the end of a paper rather than among the list of authors. As the industrial age progressed investigators found an ever-growing array of devices useful and even vital to their trade, such that with time the initiation of a new experimental study required an assortment of catalogues near the drawing boards of the designer. This dependence on the industrial base has in recent years even overtaken theory, where electronic computers have changed that pencil-and-paper field in a way that is revolutionary, to use correctly this overworked adjective. Those who are conversant with the various aspects of these new technologies are often more important to the DTM staff than many of their colleagues, and are easily overlooked in a history, which all too often rings with statements such as: "Caesar conquered Gaul." But didn't he even have a cook with him? Craftsmen were indeed among the first persons mentioned in the early Year Books and thus were clearly appreciated. As noted in the Preface, the number of persons who made active and important contributions to the Department's functions, both scientific and supporting, is so large as to make mention of more than a small fraction impossible in these pages, if the book is to be kept readable and of reasonable length. The principal scientists are covered in other chapters, and indeed some members of the support staff are named there because of unique contributions, but the names of this group that made operations successful have not generally found their way into that text.
To have overlooked them in this history would be a slight bordering on arrogance and certainly present striking evidence of a lack of good manners. The reader is reminded that there were incomplete listings of personnel until Year Book 21. When a small instrument shop was opened in the Ontario Apartments, Adolf Widmer, a Zürich-born instrument-maker with extensive experience gained in four nations in the construction of optical instruments, was hired as chief in 1907; a Swede, Erik Knut Skonberg, joined him in 1910 (Fig. 35.1). These two men evidently left DTM employ around the time of the
Figure 35.1 The instrument shop in the new Broad Branch Road building. The identities of the two machinists are not recorded, but they are probably Adolf Widmer and Erik Skonberg. 1914.
The support staff
Figure 35.2 William Steiner and John Lorz at the celebration of Steiner’s retirement. Steiner came to the shop as a new journeyman machinist at the same time as Lorz came as an apprentice. Lorz worked for 50 years in the DTM shop and Steiner only a couple of years fewer. Their length of employment has never been surpassed. 1963.
World War I, when Christian Huff was appointed as chief along with journeyman William Steiner and apprentice John G. Lorz. These three were to define the shop for many years. At Steiner’s retirement party in 1963 a glimpse of what service under Huff was like emerged when Lorz reminded him of the alliance the two boys had made: if Huff were to attack one of them, the other was obliged to come to his aid (Fig. 35.2). Huff, who was also notorious for his opposition to tobacco, died in 1936, and Steiner was foreman until his retirement in June 1963, with Lorz as foreman until his own retirement five years later. Lorz spent his entire 50 years of service in the same machine shop, and Steiner almost as long. Tallman Frederick Huff joined as an apprentice in 1933 and remained until becoming a navy warrant officer during the war; one supposes he was a kinsman of the foreman, but nothing in the record confirms this. Another long-time machinist, Bruno J. Haase, joined in 1927 and, like the others, stayed until retirement. After World War II new machinists came to the shop: Francis J. Caherty, Robert Hofmaster, Carl Rinehart and Milton T. Taylor, the last three remaining until retirement. Michael Seemann joined in 1958 and became increasingly involved in the design and construction of borehole strainmeters, eventually supervising the construction of a large number manufactured for a Japanese array. Georg Bartels and Nelson McWhorter joined during the early 1980s, McWhorter becoming shop foreman on Seemann’s retirement in
1998. In 1998 Richard Bartholomew was added to this list of much-admired craftsmen, along with Jay Bartlett as apprentice. Tuve’s nuclear physics group maintained a laboratory assistant capable of machine work so that it was unnecessary to go to the main shop for small jobs; he could also deal with vacuum problems, blow glass, build electronic equipment and manage a multitude of other tasks. C. F. Brown held this post for the Tesla coil and early Van de Graaff periods; when he left for a university education he was replaced by Robert C. Meyer. Before World War II electronics, both design and construction, was a craft normally practiced by scientists and their laboratory assistants, although Huff had some skills along those lines. The war emphasized electronics over all other experimental techniques and cultivated a new class of technologists. These were present in plenty in the proximity fuze work, and two of them, John B. Doak and Charles A. Little, joined the Department with the end of hostilities. Construction of the cyclotron made use of Paul A. Johnson for the oscillator and Stephen J. Buynitzky for other electrical work. In 1949 Johnson persuaded his school friend, Everett T. Ecklund, to join this new group of electronic technicians. Johnson and Ecklund were both skilled machinists, giving a special and widely appreciated value to their contributions and leading to Johnson’s serving as shop foreman after Lorz’s retirement until his own in December 1973. Johnson, Ecklund and Little were, together with Tuve, active radio amateurs. Buynitzky remained as technician in charge of operation and maintenance of the cyclotron. Charles J. Ksanda became a machinist for the Geophysical Laboratory in 1914 and was transferred to DTM in December 1940 to help build the cyclotron.
He remained until 1954 and was long remembered for the precept: “the drawing is just one man’s opinion.” Glenn Russell Poe went to work as an electronics technician for the seismologists in November 1959 after discharge from the Navy and remained until retirement in September 1994; he was joined in that work by Ben K. Pandit in 1982 and Brian Schleigh in 1998. The possession of 25 portable broadband seismometers, each a mechanical, electronic and computing device of the highest technical evolution, required engaging a technician capable of servicing and deploying them. Randy Kuehnel was engaged for this in November 1989 and remained a vital member of the seismometer group, unfortunately departing for a radically different kind of career in July 2000. He was followed by Peter Burkett. The introduction of electronic computers to seismology resulted in the scientists being intimately involved with the analysis of their data, but the need for one of the human “computers” of an earlier time was also felt, and Daniela Power was hired in January 1991 for general data reduction and assistance. In 1967 the mass spectrometer group employed Kenneth D. Burrhus as an electronics technician. When Stanley Hart left the Department in 1975 to
become a professor at MIT, Burrhus decided to follow him. He was replaced by John A. Emler, who retired in November 1996 and was in turn replaced by Timothy D. Mock. The acquisition of new mass spectrometers of remarkable capabilities brought an increase in the number of visiting investigators to the geochemistry section; keeping machines and laboratories properly functioning grew increasingly burdensome. To this end Brady Byrd, and then David Kuentz, were hired as laboratory technicians. In January 1997 Mary Horan, a skilled geochemist in her own right, was appointed as laboratory manager. The same pressures of increased demand for instruments required engaging a technician, Jianhua Wang, for the ion microprobe. There is abundant evidence of the presence of a carpenter during the early days, but the first one named in the Year Book was Albert Smith, who was hired in January 1918 and served in that capacity until retirement in August 1940. He even returned as a temporary employee during the war when labor was short. He was replaced in 1945 by Charles Balsam, who also became responsible for the building upkeep until his retirement in 1960. Leo J. Haber replaced him as carpenter with the assistance of Elliott M. Quade for buildings. Richard Collins and Gary Bors took over these responsibilities on Haber’s and Quade’s retirements in 1975 and 1977, and Roy Dingus, transferred from the Geophysical Laboratory, became the last DTM carpenter in 1989. William E. Key, who joined the Department on discharge from the army, assisted in buildings and grounds maintenance. Caretakers for buildings and grounds have provided services that were necessary even though not of the kind that required technical training, and it is only proper that the names of those who devoted many years of their lives to the Department also be recorded. From the earliest years Philip E. Brooke served as caretaker and night watchman and Stephen W.
Malvin as caretaker and gardener; Brooke seems to have retired during the war, and Malvin retired in December 1948. For nine years after the war Carl R. Domton served as gardener, a position not specifically referred to after his departure, but the duties were then performed by Stanley Swantkowski (September 1951, retiring in June 1966), Stanley Gawrys (June 1957, retiring in 1974) and Willis Kilgore, Jr. (December 1966, retiring June 1981). Bennie Harris (March 1967, retiring May 1993) functioned as caretaker and was replaced by Pablo Esparza. During the construction and renovation project for the colocation of DTM and the Geophysical Laboratory, Gary Bors served as assistant to the project manager and later briefly as head of the Broad Branch Road Campus maintenance staff, which assumed all such duties for the two departments, taking Dingus and Key from DTM. H. Michael Day headed these activities until he departed in 2001 and was replaced by Dingus, with Bors returning. With colocation, buildings and grounds were maintained as a Broad Branch Road responsibility, and Roy Scalco and Maceo Bacote were added to the group.
Caretakers Pedro Roa and Lawrence Patrick joined from the Geophysical Laboratory. Clerical work was carried out by persons with a variety of titles: stenographer, secretary, property clerk, file clerk, office manager, computer, bookkeeper, accountant, fiscal officer, draftsman, photographer, laboratory assistant and librarian were descriptions associated with “office” activities. There was flexibility of assignment, with stenographers becoming computers or laboratory assistants, and it became common practice to alter job titles and descriptions of personnel from year to year. Jefferson H. Millsaps appears to have been the first bookkeeper, coming to the Department in January 1909 from the Coast and Geodetic Survey, where he had worked as a typewriter (the term then used for a typist) and stenographer; there is no record of who kept accounts before him. He resigned in 1919 as a consequence of having passed the examination for a Certified Public Accountant and accepted a faculty position at a commercial school. In 1916 McClain B. Smith, invariably called M. B. Smith, was hired as a stenographer, and he remained until retirement in December 1957. His degree in electrical engineering aided his understanding of the activities of the Department and over the years he held a number of positions in addition to fiscal officer and office manager. He was replaced as fiscal officer by Helen Russell, who joined in 1945 as an accountant. The catalytic properties of money are such that the fiscal officer holds the greatest administrative power other than the Director, so Miss Russell’s replacement on her retirement in 1973 by Niels Pedersen, who worked for a time as her assistant, was studied with more than ordinary interest (Fig. 35.3). Pedersen left in 1979; he was the first such officer in the Institution to have entered the bookkeeping onto a computer, the Department’s IBM 1130.
His replacement, Terry Stahl, who remains at the time of this writing, has had to deal with more computer accounting systems than he probably cares to remember. Emma L. Beehler was the first stenographer of whom there is record in the Year Books; she was joined later by Emma L. Tibbetts and Hazel Noll, both of whom worked as computers. Ella Balsam was employed as a stenographer in 1926 and remained with DTM for 26 years, working as a computer the last years before retirement. William N. Dove was hired as a stenographer in 1935, soon became the Director’s secretary and finally office manager after M. B. Smith’s retirement; he remained until his own retirement in 1978. James John Capello was hired as property clerk and stenographer in the same year as Dove, retiring in 1951. In the decade following World War II six women were employed as typists (no longer called typewriters) and stenographers but all remained for relatively brief periods, possibly the consequence of the general disorder introduced in private lives by the Korean War. For whatever reason, the office staff settled into a period of stability with the employment of Claudine C. Ator in
Figure 35.3 Niels Pedersen, Dorothy Dillin, Scott Forbush and Helen Russell at the celebration of Miss Russell’s retirement. 1973.
September 1953, followed by Kathleen Hill and Dorothy B. Dillin. In April 1978 Mary K. Coder joined the office as an editorial assistant, to become senior administrator. In 1983 Janice Scheherazade Dunlap joined as a part-time secretary and became assistant to the Director. Alexis Clements became Project and Publications Coordinator. Recent receptionists have been Rosa Maria Esparza and Oksana Skass. For many years a draftsman was a notable accessory, as the many beautiful plots of data and drawings of equipment that fill the Department’s history bear witness. The first was Carroll Christopher Ennis, who joined in 1915 and held the position until retirement in 1937. He and his replacement, William C. Hendrix, were also designated computers. No draftsman was hired after Hendrix’s retirement in 1954. Most of the computers were women, who spent their hours at work relentlessly entering data into mechanical calculators and looking up functions in an array of mathematical tables. Without some recess, such as drawing or photography, it was work not conducive to long employment. Three men who continued working as computers and research assistants with sundry related tasks were Walter E. Scott, one of the observers from the Carnegie, Paul L. Moats (April 1932 until October 1951) and A. David Singer (January 1936 until December 1950). Scott retired in December 1963.
The reduction of the huge amounts of cosmic-ray data devolved with time into a special relationship between Forbush and two associates who became engrossed in his work: Isabelle Lange and Liselotte Beach. Lange began work, initially on some of the final magnetic data, in 1945; Beach replaced her on her retirement in March 1957, retiring herself in 1975. Both had significant mathematical skills. Beach and Forbush occupied an office that was famous for its extraordinary density of tobacco smoke. The biophysics section required the services of persons trained in skills needed in chemical and biological laboratories. For reasons not apparent, this category of employee, of which the section generally had two, proved to be the most transient of any in the Department. During the existence of the section there were at least 16 women on the rolls, only two of whom remained longer than three years: Margaret E. Chamberlin and Neltje Van de Velde. Operation and maintenance of the first DTM computer, used in common by all, was the shared responsibility of the scientific staff, and this approach continued when computer terminals from DEC machines became common and then ubiquitous in laboratories and offices. With time the system became so complex, having extensive interaction with outside machines, that it became necessary to have a full-time expert in this rapidly growing sophistication of modern life. Michael Acierno entered into this new position, in time titled computer systems manager, in November 1984, and quickly became invaluable to the entire staff, and not just the astronomer who had urged the creation of such a job. By 1993 the task had grown to such proportions that it was necessary to add Sandra A. Keiser to master the seemingly ever-mounting problems associated with modern computing and communication. Bauer and Fleming both placed great importance on the library, not just as a research tool but as a cultural entity in itself.
Bauer had been an enthusiastic book collector and had obviously found in Harry Durward Harradon a librarian after his own heart. Gifted in language, archival and literary skills, Harradon fashioned the Department library during his tenure from 1912 until his retirement in September 1949. Tuve viewed the library primarily as a research tool and provided little support beyond maintaining the serial publications and acquiring the monographs necessary to keep abreast of various fields. The superb collection of photographs was then only sporadically augmented and catalogued. After Harradon’s retirement part-time librarians were employed, with three filling the position between 1950 and 1956. Stability was attained when Lelah J. Prothro took over in September 1956; her appointment was half time, as she was also librarian for the Geophysical Laboratory. On her retirement in 1972 the responsibility for the library was assigned to Dorothy Dillin, who was working as a secretary and was not given the title of stenographer-librarian for another year. Dillin’s retirement in March 1988 led to a reconsideration of
the nature of the library and its custodian. The Director formed a committee that instituted a search for a new librarian and selected, with complete accord, Shaun Hardy from a large number of applicants and a small group of interviewees. Hardy brought the traditional skills of a librarian as well as skills well cultivated for the computer age and quickly became a valuable research resource. That he also had the predilection and instincts of an archivist made him all the more suitable. When the Geophysical Laboratory was colocated on the Broad Branch Road grounds, Hardy was made responsible for combining and maintaining the two libraries. With Yungue Luo, cataloguing assistant and later assistant librarian, he culled the two collections, replaced the unique call numbers with the Library of Congress system, and made the library accessible from a computer catalogue. Merri Wolf became library assistant in 1993. With volunteers he cleared the attic of accumulations since 1914, saving and cataloguing much valuable material. The result of this transformation was to return the library to the high quality it had had under Harradon. What has been sketched is the story of a devoted group of employees without whom the scientific achievements of the Department could not have been made. They range from persons with high talent to caretakers, but all were necessary and all contributed significantly to success. The repetition of the word “retired” in the text just by itself tells the story of a happy workplace, and this happiness provided the glue that made DTM so strong. An excellent description of what this happiness means was a comment someone once made in talking about work here: “Everyone wants to do the right thing.”
36 EPILOGUE
Taken by any measure that one might choose, the world of science was vastly different at the time of the Department’s establishment from that of a century later. The number of scientists alive in 1904 was insignificant when compared with the number at the end of the century, and there was then still a strong amateur component that has all but vanished. Science is now a well-established, paid profession, one for which government policy is a matter of abiding concern. In 1902 government participation in science was restricted to those few matters for which it felt a direct need: navigation, geological surveys, industrial standards, and agriculture. Pure research was outside the national interest, at least insofar as monetary support was concerned. Given the complete transformation that science has undergone, it is remarkable that the Department has functioned so well as the environment for its operations altered so drastically. It is worthwhile to trace the important decisions that have shaped DTM’s history. It was Louis Bauer’s determination and energy that caused the Department to be established. Without his drive there is little probability that anyone else with authority would have set about mapping in detail the magnetic field of the Earth as a goal requiring a substantial fraction of the Institution’s funds. But Bauer’s case for a non-governmental agency to coordinate this activity found support from President Woodward and the physics community. The Department was Bauer’s creation, and he conducted its operations in a manner that would – and very likely did – draw admiration from captains of the business world. His organization of a worldwide team of skilled and courageous observers must fill us with astonishment. The eager participation of national surveys discloses a diplomatic skill based on his international ideals of science.
He was, however, too successful for his own good, because when President Merriam surveyed his new responsibilities he noted that Bauer had accomplished the accurate mapping of the geomagnetic field. Bauer had built a department with a well-established infrastructure and extended international affiliations for improving the accuracy of our knowledge of the geomagnetic field and following its temporal variation, which was his great interest. He found no truth in Merriam’s criticism of continued “data taking”; understandably proud of his accomplishments, he resented Merriam’s evaluation, but the results had neither disclosed the
source of the geomagnetic field nor indicated a new line of study that might determine it. He was sympathetic toward probing the underlying causes of the terrestrial field and had gained Barnett for the staff, even building him a special nonmagnetic experiment building, but it was a project that did not yield anything of value and was justly abandoned. He hired other scientists to broaden the scope of the Department’s investigation and to mollify Merriam, but they did not stay, one suspects because of his abrasive personality. Toward the end of his administration Bauer assigned more responsibilities to his assistant, Fleming, whose support of the two young physicists, Breit and Tuve, paid off richly in their ionosphere sounding in 1925. It was due to him that they were permitted to begin highly speculative work in accelerating particles for nuclear physics, an activity that could be – and from time to time was – related to magnetism only in the most devious manner. In doing this he steered DTM away from the mission designated in its name and from his own research interests and set the basis for greater changes after his retirement. He also set a precedent in turning the ionosphere work over to the Bureau of Standards when its value was seen to be restricted to radio propagation predictions, having to pick it up again when the Bureau’s budget was cut during the Depression. The experience of Tuve during the war in initiating and managing the proximity fuze development and production did not provide him with a craving to apply the same style to science. Indeed the outcome was the opposite; he did not see in the eminently successful conduct of the fuze work a model for scientific achievement but instead thought science should belong to the single researcher or the small team. He gave vent to his fears in an article in the Saturday Review,1 spelling out his side of the epic debate with Lloyd Berkner. 
His wartime experiences, however, reinforced by those of Bush, had enhanced the standing accrued with the Trustees and opened the path for the dynamic changes both had in mind for the Department. Work in terrestrial magnetism and atmospheric electricity was quickly ended, and experimental nuclear physics, Tuve’s strength before the war, was drastically scaled back, leaving the resources normally assigned to these disciplines for new research. Tuve’s great strength in science was his foresight and boldness together with staying power when things went wrong, qualities he demonstrated repeatedly on becoming Director. Explosion seismology, isotope dating, paleomagnetism, biophysics, electronic image intensification and radio astronomy became major research goals. Few if any research organizations ever underwent such drastic changes. The research goals of the current Department find their origins in those daring decisions. The most serious mistake of the postwar period was the decision to construct the radio telescope at Derwood despite knowledge that it would be duplicated at the National Radio Astronomy Observatory. Such a
commitment of Institution funds and labor made it very difficult to stop operating, maintaining and further developing such expensive instruments. When Bolton became Director the time was ripe for a decision to shut down DTM’s radio telescopes, while leaving radio astronomy as a research subject, but the decision was made eight years later by Wetherill. Why Bolton did not see, or did not believe, that there were problems with the bio group is a question whose answer lies beyond this author’s competence. The scientists studying radio astronomy and perfecting the image tube began doing astronomy in a way completely unexpected and not particularly pleasing to the Institution’s administration. The Pasadena group (under various names) was intended to be the repository of that kind of science, and a section at DTM did not fit into a logical organizational chart that some saw as important. This troubled neither Tuve nor Bolton nor Wetherill nor Solomon, as astronomers did much to prevent the Department from becoming intellectually narrow. Tuve’s 1946 return to geophysics was placed on new but mutually supporting foundations. Seismology changed from its original and novel approach of using explosions to the more conventional methods of earthquakes, but gave this reversion an original stamp through a South American association and two remarkable instruments, the borehole strainmeter and the broadband seismometer. Emphasis began to change when instruments designed and fabricated in the Department gave way to government-run arrays with access by all to the data. The path disclosed by NRAO was more deftly followed by the seismologists than the radio astronomers. The various studies that came from developing the mass spectrometer as an instrument for geological work have proved to be enduring methods for studying the Earth and have attracted allied research in the examination of extraterrestrial materials in the laboratory.
This line of study has been expanded through observations with instruments carried into space by NASA, which functions in the manner of NRAO but with incomparably more money. A recent outgrowth has brought optical astronomy into the study of planets, giving all of the Department’s researchers zones of overlapping interest. The remarkable property of the Department has been its flexibility, helped by a name that links it to past achievement and tradition but no longer suggests constraints on what the Director can undertake. This flexibility is apparent in the span of time for which various kinds of research have been carried out. Research fields that were pursued but eventually terminated are terrestrial magnetism, atmospheric electricity, ionospheric sounding, nuclear physics, cosmic rays, paleomagnetism, biophysics and radio astronomy. There are three major differences in the scientific environment that mark the beginning and end of DTM’s century. First, the level of education, at least as measured by the formal degree of schooling achieved, has greatly increased.
A few years of college often sufficed for one to enter scientific research at the Department in the beginning; a hundred years later the requirement is generally a Ph.D. degree with years of highly competitive post-doctoral work. This change can be traced to many causes, but government support of college education for veterans of World War II and the Korean War altered the character of US universities from strongholds of the elite and well-born to accepted parts of general education. The result was the production of an unparalleled number of well-prepared scientists. It is worth mentioning here that, although the Department’s magnetic observers did not have advanced degrees, they were nevertheless encouraged to publish independently, as the listings of publications in the Year Books bear witness. One also notes that Berkner, Cowie, Dahl, Fleming and Forbush did not have earned Ph.D.s. By contrast, in 2002 two members of the support staff had Ph.D.s. Second, science changed from observational methods dependent almost exclusively on the skills of laboratory instrument makers, who made modest demands of the industrial base, to methods that require a highly developed technical society. Instrument design has not vanished from DTM, but seismology, geochemistry and astronomy rely heavily on equipment furnished by specialized manufacturers and frequently operated by unknown persons at a distant location. And everything depends on computers, which were from the beginning purchased as one might any other industrial supply. The dependence of scientific life on the modern communication techniques of satellites and fiber-optic cables alone makes current research a profession that Bauer would not even have recognized but would have esteemed. The third follows closely from the second.
Science has become expensive and this began to make competition during the last 30 years difficult for the Institution when compared with organizations that draw lavishly on government resources. Bush and Tuve rejected government support through NSF except for special projects that spread well beyond the staff, such as seismic expeditions and the image tube, and specified that none of this money was to find its way into the Department’s normal operation. Subsidies for salaries, overhead and routine expenses were uncompromisingly rejected, but Ebert’s tenure brought an end to these restrictions. The total cost of running the Department during fiscal year 1909–10 was $149 874, a period for which operations had left the initial start-up and were generally stable. The total cost for 2000–1 was $7 238 428, reflecting an average annual increase of 4.3%, a reasonable value for a century characterized by chronic inflation, but the 2000 figure differs significantly from that of 1909 in another way, as the Institution then covered the entire amount whereas in 2000 it covered only 54%, the remainder coming from outside funds. Not reflected in the 2000 sum is the cost of such research necessities as the Hubble Space Telescope, the worldwide seismic arrays and the international efforts needed to implant strainmeters. There are no figures that present the
Department’s research gain from those extraordinary instruments and undertakings, but it is clear that the Institution’s outlay for the science achieved by the staff is far smaller than the 54% reflected in the 2000 expenses, especially when compared with 100% for 1909. This has had a not-too-subtle effect on the way in which science is conducted, the “cost” of which the reader must decide. Securing government funds places the burden on the “principal investigator,” not the Institution, and the “results of prior support” are a definite consideration in the evaluation of his proposal. The investigator’s success turns heavily on sticking close to his specialty and getting “results.” This provides a definite incentive not to drop what one is doing to follow a clever idea that is unrelated to the grant or even one’s specialty, once a strong characteristic of the Department. Individual staff grants cover salaries for temporary positions, purchases of instruments, costs of operation and travel, and the overhead accrued from them, and are eagerly sought by the P Street administration as assistance in maintaining the endowment, which still allows the Director to support science judged valuable but which has not found favor with the funding agencies. The change from forbidding staff members to apply for grants to encouraging them to do so reduced the Director’s power. The Director’s permission is no longer required for purchases and trips when the staff member has his own budget, and with time the requirement that all papers for publication be submitted through him fell into disuse. Staff members’ control over their work changed from the period when “you are free to do anything that Merle Tuve wants done” to one in which you are free to do anything – but does it endanger your grant?
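The average annual increase quoted above is a compound growth rate and can be verified from the two budget figures. The following sketch (Python) reproduces it; the 92-year span is an assumption, since the text does not say exactly how the interval between fiscal 1909–10 and 2000–1 was counted.

```python
# A check of the growth figure quoted above: operating costs of
# $149,874 in fiscal 1909-10 and $7,238,428 in 2000-1. The 92-year
# span is an assumption; the text does not state how the interval
# was counted.
cost_1910 = 149_874
cost_2001 = 7_238_428
years = 92

# Compound annual growth rate: (end/start)^(1/years) - 1
rate = (cost_2001 / cost_1910) ** (1 / years) - 1
print(f"average annual increase: {rate:.1%}")  # prints 4.3%
```

Counting the span as 91 years instead gives about 4.4%, so the quoted 4.3% is consistent with either convention to within rounding.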
These changes in monetary support have not changed the size of the Department; indeed, there were 14 staff scientists (emeriti not counted) in 2000, a number that has varied but little since 1948, when the category of “staff member” was established. The support staff has maintained a roughly fixed proportion. In the last decade there has been an increase in the number of visiting scientists and those with two-year appointments, both consequences of the presence of grant money, which gives at times – especially in the Lunch Club – the impression of a larger size. From its inception the Department has made extensive arrangements with non-Carnegie scientists for the collaborative advancement of their common interests. It has been a great strength that these arrangements are almost completely unconstrained and flexible enough to meet almost any circumstances. The annual reports of the early decades often listed associates, who sometimes worked independently at other locations or in-house or on cruises and expeditions. These kinds of appointments, which might have covered all or partial salary and research expenses, were temporary, although they sometimes changed into regular appointments.
Supporting young researchers for a few years, specifically as an educational function of the Institution, was first mentioned in the 1920s, when President Merriam proposed establishing a small group of “fellows,” but no appointments were made at DTM until 1937, when Richard B. Roberts became the first postdoctoral fellow, working with the nuclear physics group. Tuve strongly recommended to Fleming the expansion of the program, not only for the advancement of the young people but for the new ideas and enthusiasms they brought with them. After World War II President Vannevar Bush made such appointments on a regular basis. Each DTM discipline except cosmic rays has had large numbers of Fellows and postdocs in attendance over the years, along with students working on research for their dissertations. Carnegie Fellows have been granted significant freedom to select their own research, although in practice they have generally pursued goals similar to those of one or more of the staff. They were never here to do the research for one of the staff. Selection, once rather casual, has become quite involved, with openings advertised and as many as a hundred applications evaluated for a given position. Seismology used all of these methods to encourage science in South America and in addition secured scholarships for South American students to do graduate work in the United States. Research with the polarized beam of the Van de Graaff was a Basel–Carnegie collaboration with substantial support coming from Switzerland. During the latter years of the 1990s an NSF-funded summer program allowed about a score of students, ranging from high school to undergraduate, to spend two months working as interns in both DTM and the Geophysical Laboratory. The Department’s contributions to education, though not formal, have been significant. The manner of filling staff appointments has changed as much as any other facet of Department life.
Until the 1960s a vacancy was filled as often as not by an inquiry from the Director completed with a handshake and a letter. Advertising a position was unheard of. By the end of the century a staff appointment was generally filled on the recommendation of a committee that had examined scores of highly qualified applications encouraged by advertisement, quite a contrast to the early 1950s, when there was difficulty finding someone for seismology. Selection is still the Director’s decision, but since Bolton it has been made with some kind of consultation with the staff. Even more unprecedented, the staff worked with the President in selecting the last two directors, Wetherill and Solomon. Women filled office positions at the Department from the very beginning as computers, for which they appear to have been preferred to men, and as stenographers. Most remained only a few years, an important exception being Ella Balsam, employed as a stenographer and later as laboratory assistant from 1926 until 1952. But a woman was not appointed to the scientific staff until Rubin was so named in 1969, preceded by four years as a staff associate; a
second was Morris in 1987, a Fellow first for three years. They were not, however, the first women to work as scientists. The first was Mrs. L. J. H. Barnett, who aided her husband in his laboratory studies of the gyromagnetic ratio. The second was Dr. Winifred G. Whitman, who in 1930 made very important measurements of the effects of gamma radiation on rats, establishing exposure limits whose values have held up remarkably well over the years. Both appointments were without pay and came about as a consequence of their husbands’ staff positions. In 2002 two women were added to the staff. There has been a better balance of the sexes among scientists when postdoctoral positions are counted, especially after grants significantly increased their number. Seen from the point of view of those striving for expanded justice in the allocation of opportunity, the current situation is unsatisfactory; seen against world science in 1904 and as an indication of the future, it is a significant change for the better. One social aspect of the Department has not changed since Bauer’s time: it is an international undertaking. Expeditions, worldwide collaborations and large numbers of foreign visitors working at Broad Branch stamp activities as much at the end as at the beginning of the century. Another aspect that has not changed is the relationship between the scientist and his observations. The long tradition that staff scientists are deeply and intimately involved in the tasks that constitute their science has not changed since the opening of the Department.
Astronomy does make use of observations from instruments such as those of the National Radio Astronomy Observatory or the Hubble Space Telescope, which the observer does not even operate directly, and seismology makes use of data from worldwide networks maintained by government agencies, but interpreting the resulting data is a duty accomplished by the scientist. Such work is not relegated to students or postdocs; these people work beside the staff member, not for him. With exceptions, geochemists collect their own rocks, do their own chemical extractions, and operate and maintain the various instruments of their craft. Seismologists deploy their instrument arrays and encapsulate their strainmeters. Technicians furnish valuable assistance but never stand between the scientist and his observations. But one social aspect of the Department that has changed significantly is the hectic pace of current operations. The staff has always worked hard, always tried to maximize the efficiency of performing their tasks, and modern methods of communication and travel have reduced wasted time to a minimum. Questions posed to colleagues around the world can be answered instantly. Data are submitted for incredible calculations with the results expected within hours. This has led to a staff member being able to utilize every minute of his time in performing experiments, making observations, analyzing data, writing papers and proposals, and attending meetings.
What has been lost with this gain is the idleness enforced by surface mail and steamship travel, idleness that often led to contemplation; but that is an aspect of life that the young have not experienced and do not miss. And what does future research hold for the Department? It is a question that continually confronts directors, a question whose answer can bring treasures of knowledge from a good decision, dross from a bad one. Indeed, science parallels mining conducted where knowledge of the geology is superficial. Some veins peter out and their tunnels are abandoned, while exploration or chance discloses other sites that become the scenes of exciting and rewarding activity. Sometimes stubborn rejection of contrary opinion leads to rich ores in what was thought to be a played-out crevice. For most scientists it is the search that matters. Das ist der echte Schatz, den zu suchen dir dein Leben zu kurz erscheint. (“That is the real treasure, which your life seems too short to seek.”) (B. Traven, Der Schatz der Sierra Madre)
NOTES
The sources for this study lie overwhelmingly in Carnegie publications, especially the Year Books. Information from the Year Books is provided with a citation only for quotations or some highly specific item, as it is not difficult to make the connection between the text and the Year Books. References to other material are cited specifically.

1 Establishment
1 Interview with Bauer’s granddaughter, Lucy Pirtle, May 1992.
2 The Ontario History Committee, The Ontario (Washington, DC: The Ontario Owners, 1983).

2 Cruises and war

This chapter draws on material from L. A. Bauer, Researches of the Department of Terrestrial Magnetism, vol. III (Washington, DC: Carnegie Institution of Washington, 1917); vol. V (Washington, DC: Carnegie Institution of Washington, 1926); Log of the Yacht Carnegie, various volumes identifiable by date.
1 Encyclopaedia Britannica, vol. XXIV (London, 1911), p. 872.
2 Felix Riesenberg, Standard Seamanship for the Merchant Service (New York: Van Nostrand, 1922), pp. 876–8.
3 Martin Hedlund, unpublished manuscript provided by his son, R. Hedlund of Westport, New Zealand.
4 Lowell Thomas, Count Luckner, the Sea Devil (Garden City, NY: Garden City Publishing Co., 1927).
5 J. P. Ault, Navigation of aircraft by astronomical methods. In Researches of the Department of Terrestrial Magnetism, vol. V, pp. 317–37.
6 The files describing the Department’s research in World War I were destroyed by E. A. Johnson at the instruction of Director John Fleming. Johnson, who also saw no value in preserving them, wrote a memorandum dated 19 December 1946, which was placed in the archives, describing the contents.

3 Expeditions
1 L. A. Bauer and J. A. Fleming, Land Magnetic Observations 1911–1913 (Washington, DC: Carnegie Institution of Washington, 1915), p. 67.
2 Ibid., pp. 69, 72.
3 Ibid., pp. 105–6.
4 L. A. Bauer, J. A. Fleming, H. W. Fisk and W. J. Peters, Land Magnetic Observations 1914–1920 (Washington, DC: Carnegie Institution of Washington, 1921), p. 102.
5 Ibid., p. 108.
6 Ibid., pp. 115, 125, 129.
7 Ibid., pp. 133, 136, 138.
8 Ibid., p. 149.
9 Ibid., pp. 197–8, 200.
10 Ibid., pp. 202, 203, 204.

4 Measurements: magnetic and electric
This chapter draws on material from S. Chapman and J. Bartels, Geomagnetism (Oxford: Clarendon Press, 1940); J. A. Fleming, Terrestrial Magnetism and Electricity (New York: McGraw-Hill, 1939).
5 The Fleming transition
1 S. J. Barnett, On magnetization by angular acceleration. Science, 30 (1909), 413.
2 S. J. Barnett, Magnetization by rotation. Physical Review, 6 (1915), 239–70.
3 Street cars will be stopped to permit work of scientists. Evening Star, 12 February 1923, p. 1.
4 For a detailed discussion of the Barnett and Einstein–de Haas experiments see Peter Galison, How Experiments End (Chicago, IL: University of Chicago Press, 1987), pp. 39–74.
5 As told repeatedly to the author.
6 The last cruise

This chapter draws on material from J. Harland Paul, The Last Cruise of the Carnegie (Baltimore, MD: The Williams and Wilkins Co., 1932); Statements regarding circumstances attending the destruction of the Carnegie in Apia Harbor, November 29, 1929, by those of the scientific and sailing staff who proceeded on the steamship Ventura from Apia, Western Samoa, December 6, 1929, DTM Archives; Statements dictated and supplied through Messrs. Louis T. Snow and Company, of San Francisco, Cal., on February 6, [1930] by Messrs. Carl Sturk, Engineer, Erik Stenstrom, Mechanic, and J. Lindstrom, seaman, of the ship Carnegie. DTM Archives (drawer with file jacket so marked).
7 The magnetic observatories and final land observations
1 Earth currents are conducted through the ground and are observed directly with buried wires. They are generated primarily by induction from the time variations of the geomagnetic field and were first observed by long-distance telegraphers.
2 W. F. Wallis and J. W. Green, Land and Ocean Magnetic Observations, 1927–1944 (Washington, DC: Carnegie Institution, 1947), p. 17.

8 The ionosphere
This chapter draws on material from C. Stewart Gillmor, The big story: Tuve, Breit, and ionospheric sounding. In History of Geophysics, vol. V, ed. Gregory A. Good (Washington, DC: American Geophysical Union, 1994), pp. 133–41.
1 Allan A. Needell, Science, Cold War and the American State: Lloyd V. Berkner and the Balance of Professional Ideals (Amsterdam: Harwood Academic Publishers, 2000).
2 The Boston Herald, 19 March 1944.
9 Collaboration and evaluation
1 H. U. Sverdrup, Magnetic, Atmospheric-Electric, and Auroral Results, Maud Expedition, 1918–1925 (Washington, DC: Carnegie Institution, 1927).
2 W. F. Wallis and J. W. Green, Land and Ocean Magnetic Observations, 1927–1944 (Washington, DC: Carnegie Institution, 1947), pp. 21–34; Per F. Dahl, From Nuclear Transmutations to Nuclear Fission (Bristol: Institute of Physics Publishing, 2002), p. 44.
3 Fleming, ed., Terrestrial Magnetism and Electricity.
4 Chapman and Bartels, Geomagnetism.
5 Illustrative of this is the following marginal notation: “Data used in the construction of this chart is the result of the comprehensive analysis of worldwide magnetic observations since 1905. It was prepared for the Navy Department under contract with the Department of Terrestrial Magnetism, Carnegie Institute [sic] of Washington.”
6 E. H. Vestine, Lucile Laporte, Caroline Cooper, Isabelle Lange and W. C. Hendrix, Description of the Earth’s Main Magnetic Field and its Secular Change, 1905–1945, CIW Publication 578 (Washington, DC: Carnegie Institution of Washington, 1947); E. H. Vestine, Lucile Laporte, Isabelle Lange and W. E. Scott, The Geomagnetic Field, its Description and Analysis, CIW Publication 580 (Washington, DC: Carnegie Institution of Washington, 1947).
7 Isoporic: pertaining to an imaginary line or a line on a map of the Earth’s surface connecting points of equal annual change in one of the magnetic elements.
8 Sydney Chapman and Julius Bartels, Geomagnetism (Oxford: Oxford University Press, 1940), p. iii.
9 E. H. Vestine, On the variation of the geomagnetic field, fluid motions, and the rate of the Earth’s rotation. Journal of Geophysical Research, 63 (1953), 127–45.
10 The Tesla coil
This chapter draws on material from Merle A. Tuve, “Respectfully Submitted, Merle A. Tuve.” Monthly reports, Tuve to John Fleming, Director of DTM, June 1929–December 1931, 382 pp. These formal reports present in extreme detail the work of the nuclear physics group for the period indicated. Edited and annotated by Louis Brown. Archives of the Department of Terrestrial Magnetism, Carnegie Institution of Washington, 1978.
11 The Van de Graaff accelerator
This chapter draws on material from Tuve, “Respectfully Submitted, Merle A. Tuve.” Monthly reports, Tuve to John Fleming, Director of DTM, September 1931–May 1934, 382 pp.
12 The nuclear force
This chapter draws on material from Tuve, “Respectfully Submitted, Merle A. Tuve.” Monthly reports, Tuve to John Fleming, Director of DTM, June 1934–December 1938, 382 pp.
1 Dahl, From Nuclear Transmutations to Nuclear Fission, p. 44.
13 Fission
This chapter draws on material from Tuve, “Respectfully Submitted, Merle A. Tuve.” Monthly reports, Tuve to John Fleming, Director of DTM, January 1939–December 1939, 382 pp.

14 Cosmic rays
This chapter draws on material from James A. Van Allen, ed., Cosmic Rays, the Sun and Geomagnetism: The Works of Scott E. Forbush (Washington, DC: American Geophysical Union, 1993); S. E. Forbush, Some recollections of experiences associated with cosmic-ray investigations. In Early History of Cosmic-Ray Studies, ed. Y. Sekido and H. Elliot (Dordrecht: D. Reidel Publishing Company, 1985), pp. 167–9.
15 The proximity fuze and the war effort
This chapter draws on material from Louis Brown, The proximity fuze: the smallest radar. In Brown, A Radar History of World War II (Bristol: Institute of Physics Publishing, 1999), ch. 4.4; Ralph B. Baldwin, The Deadly Fuze: Secret Weapon of World War II (San Rafael, CA: Presidio Press, 1980).
1 R. B. Roberts, extract from a manuscript autobiography and reminiscences, DTM Archives, 1978.
16 The Tuve transition
This chapter draws on material from H. E. Le Grand, Chopping and changing at DTM 1946–1958: M. A. Tuve, rock magnetism, and isotope dating. In History of Geophysics, vol. V, ed. Gregory A. Good (Washington, DC: American Geophysical Union, 1994), pp. 173–84.
1 Vestine, Laporte, Lange and Scott, The Geomagnetic Field, its Description and Analysis, p. 5.
2 G. W. Wetherill, letter to Shaun Hardy, 30 August 2002.
18 The cyclotron
This chapter draws on material from DTM Archives marked “Cyclotron” and “Cataracts,” November 1946–February 1955.
1 R. B. Roberts, extract from a manuscript autobiography and reminiscences, Archives of the Department of Terrestrial Magnetism, Carnegie Institution of Washington, 1978.
2 P. H. Abelson and P. G. Kruger, Cyclotron-induced radiation cataracts. Science, 110 (1949), 655–7.
3 R. B. Roberts and P. H. Abelson, Reactions of 15 MeV. Physical Review, 72 (1947), 76L.

19 Biophysics
This chapter draws on material from Philip H. Abelson, Genesis and evolution of the biophysics program. Transcription of remarks at a staff conference, 9 March 1953. DTM Archives; Richard B. Roberts, Historical review of biophysics section. Yearbook 74 (1975), 172–9.
1 Richard B. Roberts, Dean B. Cowie, Philip H. Abelson, Ellis T. Bolton and Roy J. Britten, Studies of Biosynthesis in Escherichia Coli (Washington, DC: Carnegie Institution Publication 607, 1955).
2 Richard B. Roberts, ed., Studies of Macromolecular Biosynthesis (Washington, DC: Carnegie Institution Publication 624, 1964). There is a history of the group’s efforts as its final chapter.

20 Explosion seismology
This chapter draws on material from Thomas D. Cornell, Merle A. Tuve’s postwar geophysics: explosion seismology. In History of Geophysics, vol. V, ed. Gregory A. Good (Washington, DC: American Geophysical Union, 1994), pp. 185–214.
1 Andrew C. Lawson et al., The California Earthquake of April 18, 1906: Report of the State Earthquake Investigation Commission (Washington, DC: Carnegie Institution, 1908, reprinted 1969).
2 John S. Steinhart and Robert P. Meyer, Explosion Studies of Continental Structure (Washington, DC: Carnegie Institution Publication 622, 1961).

21 Isotope geology
This chapter draws on material from C. T. Harper, ed., Benchmark Papers in Geology: Geochronology (Stroudsburg, PA: Dowden, Hutchinson and Ross, Inc., 1973).

22 Radio astronomy
1 Relevant papers of the National Science Foundation’s Advisory Panel on Radio Astronomy, Washington Meeting, 18 and 19 November 1954. DTM Archives – Radio Astronomy.
2 Needell, Science, Cold War and the American State, pp. 127–41, 192–5, 259–92.
3 M. A. Tuve and S. Lundsager, Velocity structure in hydrogen profiles. Astronomical Journal, 77 (1972), 652–60.
4 William E. Carter and Douglas S. Robertson, Studying the Earth by very-long-baseline interferometry. Scientific American, 255 (November 1986), 46–54. The photograph on p. 47 is of the Derwood dish.
23 Image tubes
This chapter draws on Reports by the Committee on Image Tubes for Telescopes, Carnegie Yearbook 53 (1953–4) through 69 (1969–70).
1 J. S. Miller, L. B. Robinson and E. J. Wampler, The present status of the Lick Observatory image tube scanner. Advances in Electronics and Electron Physics, 40 (1976), 693–8.
2 Craig D. Mackay, Charge-coupled devices in astronomy. Annual Review of Astronomy and Astrophysics, 24 (1986), 255–83.
24 Computers
This chapter draws on Paul E. Ceruzzi, A History of Modern Computing (Cambridge, MA: MIT Press, 1999).
25 Earthquake seismology
1 J. E. Ramirez, S. J. and L. T. Aldrich, eds., El transición océano-continente en el suroeste de Colombia (Bogotá: Instituto Geofísico, Universidad Javeriana, 1977).
2 I. S. Sacks, A broad-band large dynamic range seismograph. In The Earth Beneath the Continents, ed. J. S. Steinhart and T. J. Smith (Washington, DC: American Geophysical Union, 1966), pp. 543–54.
28 Astronomy
1 Vera C. Rubin, W. Kent Ford and Norbert Thonnard, Yearbook 78 (1979), 363–73.
2 Wendy L. Freedman, The Hubble Space Telescope and measuring the expansion rate of the universe. Yearbook 94 (1995), 141–7.
29 The solar system
1 T. C. Chamberlin, Fundamental problems in geology. Year Book 2 (1903), 261–70; Year Book 3 (1904), 195–254; (F. R. Moulton), Year Book 4 (1905), 186–90.
2 George W. Wetherill, Possible consequences of absence of “Jupiters” in planetary systems. Astrophysics and Space Science, 212 (1994), 23–32.
3 P. Brown, Z. Ceplecha, R. L. Hawkes, G. Wetherill, M. Beech and K. Mossman, The orbit and atmospheric trajectory of the Peekskill meteorite from video records. Nature, 367 (1994), 624–6.
30 Geochemistry
This chapter draws on Alan Zindler and Stan Hart, Chemical geodynamics. Annual Review of Earth and Planetary Sciences, 14 (1986), 493–571.

31 Island-arc volcanoes
1 G. Raisbeck, F. Yiou, M. Fruneau, M. Lieuvin and J. M. Loiseaux, Measurement of 10Be in 1000 and 5000-year-old Antarctic ice. Nature, 275 (1978), 731–2.
2 Tsuyoshi Ishikawa and Eizo Nakamura, Origin of the slab component in arc lavas from across-arc variation of B and Pb isotopes. Nature, 370 (1994), 205–8.
3 Tsuyoshi Ishikawa and Fouad Tera, Geology, 27 (1999), 83–6.

32 Seismology revisited
1 Paul G. Silver, Robert P. Meyer and David E. James, Intermediate scale observations of the Earth’s deep interior from the APT89 Transportable Teleseismic Experiment. Geophysical Research Letters, 20 (1993), 1123–6.
2 David E. James and Marcelo Assumpção, Tectonic implications of S-wave anisotropy beneath SE Brazil. Geophysical Journal International, 126 (1996), 1–10.
3 Paul G. Silver, Seismic anisotropy beneath the continents: probing the depths of geology. Annual Review of Earth and Planetary Sciences, 24 (1996), 385–432.

34 The Solomon transition
1 Geoffrey W. Marcy and R. Paul Butler, Planets orbiting other suns. Publications of the Astronomical Society of the Pacific, 112 (2000), 137–40.
2 Sean C. Solomon, Mark A. Bullock and David H. Grinspoon, Climate change as a regulator of tectonics on Venus. Science, 286 (1999), 87–90.
3 Sean C. Solomon, Return to the iron planet. New Scientist, 29 January 2000, pp. 32–5.

36 Epilogue
1 Merle A. Tuve, Is science too big for the scientist? Saturday Review, 6 June 1959, pp. 48–51.
INDEX
Note: page numbers in italics refer to figures. Abelson, Philip 101 accelerator construction 134 biophysics 119, 139, 140 candidates for Director 254 departure from DTM 141 President of Carnegie Institution 147, 204 uranium studies 114–15 accelerator tube 75 accelerators 118 termination of studies 132 see also Van de Graaff accelerator Acierno, Michael 264 Adams, Leason H. 149 agar column technique 143–5 experiments 144, 145 air conductivity 38 aircraft sensing 110 airships 67 Airy hypothesis 155 Alaska, earthquake (1964) 188 Alaska, seismic expedition 152 Aldrich, Thomas 158 Acting Director 204 isotope geology 159 seismology team 152 Alexander, Conel 249 alkaline-earths 229 alkaline elements 229 alpha–alpha scattering 126 Amaldi, Edoardo 93 American Geophysical Union 45, 122–3 amino acid synthesis 141 Amundsen, Roald 65, 67 analytical techniques 142, 143–5 Anderson, John 150 Andes Mesozoic rocks 231 origin 231–2 seismic expedition 152 seismic stations 153–4 earthquake research 187–8 seismological studies 153 structure 193
anisotropy expansion of universe 213 mantle 243 Antarctica circumnavigation 19, 20 expeditions 66, 67–8 antiaircraft trials 112 Applied Physics Laboratory 112, 114, 255 archives 206–7, 265 Arctic observations 66–7 argon, mass spectrometry 251 argon-40 161–2 Asada, Toshi 188, 190 Assousa, George 131, 215 Assumpção, Marcelo 241 asteroids 223 Aston, Francis W. 157–9 astrometric wobble 225 astronomy 209–19 computers 215 extragalactic 254 image intensification 122 imaging devices 120 program 253–4 Solomon’s views 254 see also optical astronomy; radio astronomy atmosphere conducting layer 55 electrical properties 37, 38, 42, 118 upper levels 44 atomic bomb project 109 Atomic Energy Commission 125 atomic nucleus 59, 73–102 Atomic Physics Observatory 88, 132 Ator, Claudine C. 262–3 Ault, James Percy 7 death 49–50 Galilee cruise 9 leadership 44–5 World War I 19 Australasian Antarctic Expedition 67 Bacote, Maceo 261–2 bacteriophages 140
Baffin Island (Canada) 25, 30 Balmer lines of hydrogen 172, 250 Balsam, Charles 261 Balsam, Ella 272 Barnett, L. J. H. 273 Barnett, Samuel J. 43–4 Barnett effect 43 barrier-film tube 177–8 Bartels, Georg 259–60 Bartels, Julius 104 Bartholomew, Richard 260 Bartlett, Jay 260 basalts 235 chemical extraction techniques 245–6 ocean-island 230 purity 249 Bauer, Dorothea Louise 11–12 Bauer, Louis Agricola xiii, 1, 267–8 data collection 36 fundamentals of magnetism studies 43 incapacity 45 international outlook 3 mapping magnetic field of earth 33 personal papers xi sea deflector 34 US Coast and Geodetic Survey 2 World War I 20 Baum, William A. 176, 177–8 image tubes 178 Beach, Liselotte 107, 183, 264 Beehler, Emma L. 262 Berkner, Lloyd V. 60, 59–60, 62 dispute with Tuve 166–7 geomagnetic activity 71 geophysics 119–20 ionosphere research 119–20 National Radio Astronomy Observatory 166 World War II 115 Berky, Darius Weller 25–7 beryllium-8 128–9 beryllium 10, 95 extraction from lava 233 mass spectrometry 233–4 volcano samples 235–7 big science 119–20, 167 biophysics 119, 139–47, 203–4 analytical techniques 142 cyclotron 135 language 142–3 bird song, inheritance 146–7 Bohr, Niels 97, 100 Bolton, Ellis xvi, 140, 143 agar column experiment 144 Assistant Director 147, 203 Director 170–1, 203, 204, 268–9 permission for grant proposals 205 resignation 204
Bolton–McCarthy agar column experiment 145 boron 235, 236–7 beryllium tracking 237 Bors, Gary 261–2 Boss, Alan P. 224–5 Boyd, F. R. 247 Bramhall, E. H. 66 branching ratio 162 Brazil, portable seismic stations 241 Breit, Gregory 44, 57–9, 100, 268 ionosphere studies 55–6, 58, 59 lithium bombardment 93 personality difficulties 75–7 proton resonances 85 proton scattering by protons 90 quantum mechanics of scattering 89 Tesla coil 73, 75 Briggs, Lyman 102 Britain, proximity fuze use 114 Britten, Roy 140, 142–3 computer use 184–5 departure from DTM 146 DNA studies 144, 145, 146 radio astronomy 163–4 Brooke, Philip E. 261 Brooks, Christopher 230 Brown, C. F. 260 Brown, Frederick 27–9 Chinese expedition 28, 29 work in China 29, 54 Brown, Louis 128, 131 Acting Director 254 geochemistry 245 mass spectrometry 205 accelerator 233–4, 237 Swiss collaboration 128 Brown, Peter 224 building design 7 Burke, Bernard F. departure from DTM 170 mass spectrometry 158 radio astronomy 163–4, 165, 165, 168 radio hydrogen measurement 170 Burkett, Peter 260 Burrhus, Kenneth D. 260–1 Bush, Vannevar 110, 117, 176 Butement, W. A. S. 111 Butler, R. Paul 225–6 appointment 254–5 Buynitzky, Stephen J. 137, 260 Byrd, Brady 260–1 Byrd, Richard E. 67–8 Cabre, R. 187–8, 189 Caherty, Francis J. 259 calcium-40 162 California, strainmeters 200–1 Canadian Shield studies 152
Capello, James John 262 carbon-15 125 caretakers 261 Carlson, Richard W. 232, 245–6 geochemistry 247–8, 249–50 xenolith 247 Carnegie, Andrew vii–viii Carnegie (expedition ship) 9 acceptance 14 boys 18 circumnavigation of Antarctica 19, 20 construction 11–16 cost 14 data acquisition 37 decommissioning 41 destruction 49–50 electric power demands 47–8 engine 14–16, 17 Forbush on 103 hemp hawsers/rigging 48 instrument cupolas 16 launching 13 magnetic corrections 36 mail barrel 21 modifications 14–16 oceanographic work 49 radio 47–8 refitting 47 sea trials 15 Carnegie Institution, Presidents Abelson 147, 204 Bush 110 Ebert 253 Merriam 41–3 Singer 253–4 Woodward 3 carpenters 261 cascaded tube 177, 178–80 data quality 181–2 development 180 installation 181 type C33011 180, 181 cataracts, cyclotron operators 135–6 cell wall permeability 140 Cepheid variable stars 218–19 Cerro Tololo (Chile) telescope 214 Chamberlin, Margaret E. 264 Chamberlin, Thomas C. 221–2 Chapman, Sydney 68 charge-coupled device 182 China Brown, Frederick expedition 28, 29 work in 29, 54 strainmeters 201–2 Christmas Island Station (Indian Ocean) 62 chromatography 142 chromosphere examination 53
Index chronometers 20 calibration 27 use 33 Churchill, Winston 110–12 clerical staff 262–3 Coder, Mary K. 262–3 collaborative work 271 Collins, Richard 261 comets 224 communication military needs 62 ships 21 compasses 20, 33 Compton, Arthur H. 103 Compton–Bennett precision cosmic-ray meter 103–4, 106 barometric corrections 104 diurnal effect 104 Huancayo observatory 104 computers 183–6 astronomy 215 data reduction 184, 185 mass spectrometry 185–6 seismology 239, 241–2, 260 staff 264 use by geochemists 185–6 use in seismology 155, 183, 186 concordia 161, 161 Condon, Edward 90 continental shelf study 120 continuum radiation 170 core, boundary 188 corona current 84, 85 cosmic radio noise 120, 163 cosmic rays 103–7 discovery 38 diurnal variations 107 flux recording 53 solar cycle 107 cosmochemistry 248–9 Coulomb excitation 126 Coulomb forces 89, 90, 126 Cowie, Dean 101, 119, 131, 134, 136 biophysics 119, 139 Institut Pasteur, Paris 147 radiation accident 135–6 Crab nebula 165 radio source 164–5 cratons 246–7 Cretaceous–Tertiary boundary 224 cyclotron 101, 111, 118, 119, 137, 133–7, 138 cataract incidence 135–6 disposal 138 end of use 137–8 magnet with coils 135 neutron distribution 136–7 operational function 135 radiation hazards concerns 134–5 radioisotope production 139 technicians 260
Dahl, Odd 59, 65–6 return to Norway 95 Tesla coil 75 Darby, Hugh H. 139 data acquisition from land stations 54 electronic collection 36–7 reduction 184, 185, 260 Davidson, Eric 146 Davis, Gordon L. 159, 230 Day, Arthur L. 149 Day, Michael 261–2 declination 1–2 errors 2 measurement 33, 53 Department of Terrestrial Magnetism (DTM) Broad Branch Road location 3, 6, 7, 206, 207 support staff 257–5 Wetherill’s disagreement 253 broadening of scope 41–3 cost of running 270–1 formation 2–3 international undertakings 273 location 3 new buildings 206 pace of operations 273–4 physical plant 207 scientist relationship with observations 273 site 5 size 271 Derwood dish 167, 168, 268–9 deuterons 87 reactions 87 diamonds 247–8 Die Naturwissenschaften (journal) 97 diffusion theory 93 Digital Equipment Corporation (DEC) 186 Dillin, Dorothy B. 263, 262–3, 264–5 Dingus, Roy 261–2 dip circle 33 dipole array 165 direct reactions 136–7 dishes, radio astronomy 167–70 Derwood 167, 168, 268–9 end of use at DTM 171 La Plata 167, 169 water-line emission 171 Würzburg 164, 167, 171 DNA studies 142, 143–5, 146, 147 double-stranded 146 reassociation rates 146 repeated sequences 144, 146 Doak, John B. 260 Domton, Carl R. 261 Doppler shift 172, 225 Dove, William N. 262
draftsmen 263 E layer 57 electron density 63 Earth age of 157 history of 228 inductor 35 life on 223–4 tides 201 see also mantle Earth crust studies 149, 150 core boundary 188 inhomogeneities 152 layered model 154–5 mantle boundary 188 models 155 strain measurement 195 earthquakes 149, 153, 187 borehole strainmeters 199 research 187–8 seismic wave generation 239 slow 200 strain releases in episodic events 198–200 see also seismology East Coast Onshore–Offshore Experiment 156 Ebert, James 253 Ecklund, Everett T. 151 machinist skills 260 radio astronomy 163–4, 169 seismology team 152 Edmonds, Harry Marcus 19 Huancayo observatory 53–4 Edmunds, Charles Keyser 27 education 269–70 Einstein–de Haas effect 43 electricity atmospheric 37, 38, 42, 118 potential gradient measurement 38–9 protection of instruments/observers in field observations 24, 25 electrometers 38 tube 75 electron density 63 electron microprobe 228 electronic imagery 175–82 electronic motion, orbital 43 electronics data collection 36–7 technicians 260–1 electrons, accelerated 75 electrostatic focusing 177, 178 electrostatic generator 77, 81 high voltages 87 Emler, John A. 260–1 energetic alpha particles 84 Ennis, Carroll Christopher 263
Escherichia coli studies 140–1 amino acid synthesis 141 laboratory handbook publication 141 nucleic acid synthesis 141–2 protein synthesis 141–2 Esparza, Pablo 261 Esparza, Rosa Maria 262–3 Evertson, Dale W. 195, 198 Ewen, Harold 167 expeditions 23–31 expense 41 field reports 23, 25 leaders 25–30 planning 23 transport 23–5, 30–1 explosion seismology 122, 149–56, 187, 189 Andes 153 Peru 154 South America 191 extra-sensory perception studies 147 F layer 57 contours 63 electron density 63 Fassett, John D. 245 faulting 195 fellows 272 Fermi, Enrico 93, 97–8, 100 uranium fission 101 ferromagnetism 43 field work, restrictions 41 Findlay, John 172 Firor, John departure from DTM 170 radio astronomy 163–4 Fisk, H. W. 20 fission 100, 97–100, 102 source of power 109 Fleming, John Adam xiv, 7, 98, 100, 268 Assistant Director 44–5 Director 45 ionosphere research 59 publication of textbook 68 Tesla coil studies 73 World War I 20 Flexner, Louis B. 101, 139 exchange across fetal membranes 140 long-term memory studies 146–7 Forbush, Scott 49, 107, 263 Carnegie voyage 103 Compton–Bennett meter 103–5 cosmic rays 103, 107 data support 264 diurnal variations 107 Huancayo observatory 103 postwar work 105–7 statistical analyses 183 sunspot cycle 107 war years 105 Forbush decrease 105, 106
Ford, W. Kent 177–8, 181
  digital recording 182
  expansion of universe 213
  image tubes 179, 203
  rotation of spirals 214–15
  supernova recording 215
  work with Rubin 209, 214
Fortran computer language 185
Fourier transform of time-dependent signal 239
Franklin, Benjamin 37
Franklin, Kenneth L. 165, 165
Frick, D. German 187–8
funding 270
  see also government funding
fuzes
  project 110–11
  radio 111–12
  see also proximity fuze
galactic rotation 168, 172, 209–10, 213–15
  rotation curves 215
galaxy
  classification 215–16
  collision 216, 217
  distances 218–19
  elliptical 215–16
  formation 215–16
  long tails 216
  movement 213
  polar rings 216
  speed of recession 218
  spiral 215–16
galaxy M31 in Andromeda 170, 173
  dark matter 212
  rotation 168, 172, 209–10, 212
galaxy NGC 801 211
galaxy NGC 3115 214
galaxy UGC 12591 214–15
Galilee (expedition ship) 9–10, 11, 12
  cuddy 12
  final cruise 11
  magnetic anomalies 35–6
  need for power 14
  Pacific cruise 11
  sinking 11
  typhoon 11
Gamow, George 97
gardeners 261
Gauss, Carl Friedrich 2
Gawrys, Stanley 261
genetics 140
  genetic code 142
geochemistry 227–32
  computer use 185–6
  instrumentation 249–50, 251
  ion microprobe 138, 250
  isotope pairs 245
geochronology 227
  isotope pairs 245
geology, isotope 157–62
geomagnetic field 2
  diurnal variation 71
  mapping 2
  reversal 190
  secular variation 68–71
  temporal variation 2
geomagnetism 1–2
Geophysical Laboratory
  colocation 6, 206, 207, 253, 261–2
  library 265
geophysics 119–20
  Venus 255
Gilliland, Theodore R. 59
Gish, Oliver 71
  thunderstorm observations 118
Global Digital Seismic Network 240
global positioning satellites 240
Goddard, Richard H. 66–7
government funding 205, 253, 270, 271
Graham, John A. 216
  extragalactic observations 253
  Hubble expansion constant 218–19
  star formation 216–18
Graham, John W. 121
  mobile rock sampling laboratory 121
Graham-Smith, Francis 163–4
Green Bank see National Radio Astronomy Observatory (West Virginia)
Green, George K. 134
  accelerator construction 134
Green, J. W. 54
Green, Rod 242
Greenwich Mean Time 33
Gum nebula 218
  star formation 218
Gutenberg, Beno 154
gyromagnetic ratio 44
Haase, Bruno J. 259
Haber, Leo J. 261
Hafstad, Lawrence 59, 94
  electrometer tube 75
  fuze project 110–11
  nuclear physics 204
  nuclear structure theory 89
  Tesla coil 75
  uranium fission 97–100
  World War II 109
Hahn, Otto 97
Hales, Anton 156
Hall, John S. 176, 177–8
Hardy, Shaun 206–7, 265
Harradon, Harry Durward 264
Harris, Bennie 261
Hart, Stanley R. 162, 227, 228, 228–9
  alkaline-earths 229
  alkaline elements 229
  group 230
  isotope geology 162
Hauri, Erik 249, 250
Hawaii
  hot-spot volcanoes 249
  portable seismic stations 241
  xenoliths 249
Hayes, J. T. 9, 11
Heaviside, Oliver 55
Hedlund, Martin 18–19
Hekla (Iceland) 201
helium 157
  elastic scattering of protons 130
Hendrix, William C. 263
Herb, R. G. 85
Herbig–Haro (HH) objects 217, 218
Heumann, G. K. 246
Heydenburg, Norman 94
  departure from DTM 170
  Florida State University 128
  nuclear physics 125–6, 204
  nuclear structure theory 89
  World War II 109, 114
Hill, Kathleen 262–3
Hoering, Thomas 231
Hofmann, Albrecht 230
  departure from DTM 232
  isotope diffusion studies 230
Hofmaster, Robert 259
Holmes, A. 157
Horan, Mary 260–1
Howell, G. D. 30
Hoyer, Bill H. 147
Huancayo (Peru) observatory 52, 53–4
  Compton–Bennett meter 104
  eclipse observations 63
  Forbush at 103
  given to Peruvian government 117–18
Hubble expansion constant 218–19
Huff, Christian 259
Huff, Tallman Frederick 259
Hughes, Vernon 127
Hunter, Brooke 262–3
hydrogen
  all-sky hydrogen-line survey 169
  Balmer lines 172, 250
  galactic 167, 170, 173
  isotopes 85–7
IBM, computers 183
Iceland
  portable seismic stations 241
  strain releases in episodic events 200
  strainmeters 201
image tubes 175–82, 203, 269
  digital recording 182
  measuring engine 210
  spectrograph 181, 209
  testing 179
  see also cascaded tube
inclination measurement 33
industrial companies, image tube development 176–7
instrument shop 257–8, 260
instruments
  design 270
  development 2, 36
  World War I 20
  early 33
  geochemistry 249–50, 251
  isolation 53
  magnetic observatories 53
interferometer 164
International Union of Geodesy and Geophysics 45
interstellar clouds of gas and dust 222, 224–5
  ratio of thermal to gravitational energy 225
  star formation 217, 224–5
interstellar dust 248–9
ion microprobe 248
  cosmochemistry 248–9
  geochemistry 138, 250
  technician 261
ion optics 84
ionosphere 55, 63
  composition 63
  height measurement 53, 58
  radio wave reflection 57
  research 61, 62, 119–20
ions, accelerated 75, 81–8
Ishikawa, Tsuyoshi 237
isospin quantum numbers 129–30
isostasy 154
isotope dating 122
isotope geology 157–62
isotopes
  diffusion studies through basalt melts 230–1
  fractionation 251
  meteorite composition 248–9
  mid-ocean ridges 230
  pairs 245
  Van de Graaff accelerator 134
  variations in rocks 229
  see also radioisotopes, artificial
isotopic competition 140, 141
isotopic signatures 227
  mantle 230
Izu-Oshima earthquake (1978) 198–200
James, David 190, 194
  origin of Andes 231–2
  seismology 240, 241
Jansky, Karl G. 120, 163
Japan 190–1
  strainmeters 196–200
  strainmeters, borehole 197, 198, 199
  strainmeter network installation 198–200
Japan Marine Science and Technology Agency 244
Jeffreys, Harold 188
Jodrell Bank (England) 166
Johnson, Paul A.
  cyclotron construction 260
  radio astronomy 164, 163–4
  seismology team 152
Johnston, E. A. 120–1
Johnston, H. F.
  Huancayo observatory 53–4
  Watheroo observatory 53
Jupiter 165, 173, 223
  comet path alteration 224
  radio noise 165
  synchronous gravitational perturbations 223
Kaapvaal Craton (South Africa) 247
  portable seismic stations 241, 242
Keck telescope 225–6
Keiser, Sandra A. 264
Kennelly, Arthur Edwin 55
Kennelly–Heaviside layer 57
Key, William E. 261–2
Kilgore, Willis Jr. 261
kimberlite 247, 247–8
Kitt Peak telescope 214
Klein, Jeffrey 234
Kohne, David 146
Krebs, Hans Adolf 141
Krogh, Thomas 228, 230
Ksanda, Charles J. 136, 260
Kuehnel, Adriana 242, 260
Kuehnel, Randy 241, 242
Kuentz, David 260–1
Kuhn, Richard 139
Kuiper, Gerard 222
Kuiper belt 224
La Plata (Argentina) observatory 169, 167–9, 170–1
  improvements 171
laboratory, clean 228
Lallemand, A. 175
Lallemand tube 176
Lange, Isabelle 264
Laplacian nebula 221
lavas, subducted material 233
Lawrence, Ernest 133
lead isotopes 157–9, 160
  determinations 160–1
  loss 160
Lederer, Edgar 139
Leeman, William P. 236–7
Lenard window tube 177, 178
library 206–7
  information on expeditions 23
  staff 264–5
Lick telescope 225–6
Lima, seismic analysis center 188
Linde, Alan 201
  seismicity at regions of plate subduction 244
  seismometers 193
  strainmeters 198–200, 201
liquid chromatography 139, 141
lithium-7 128–9
lithium bombardment 93–5
  neutron production 95
lithospheric plate subduction 233
Little, Charles A. 168, 260
Lorz, John G. 259, 259
Lovell, Bernard 163
Lunch Club 123, 123–4, 206, 207, 271
Luo, Yungue 265
McCarthy, Brian J. 143
McGee, J. D. 178
machine shop 3
machinists 259, 260
Macmillan Arctic Association 66–7
McNish, A.
  geomagnetic activity 71
  palaeomagnetism 120–1
  World War II 115
McWhorter, Nelson 259–60
magma
  eroded material in continental platforms 232
  plumes 230
  reaction with plate materials 249
  reservoirs 249
  sources 227
  volcanism 249
magnetic fields
  corrections 35
  land station locations 70
  magnitude 34
  ships 20
  solar 105
  structure 63
  variation 36
  variation, temporal 36
  vertical intensity 69
magnetic focusing 177, 178
magnetic observations
  marine 7
  protection of instruments/observers 24, 25
magnetic observatories 51–4
  criteria 51
  instruments 53
  isolation 53
  locations 51–3
magnetic storms 63, 71
  cosmic ray studies 104–5
magnetism
  local sources 35
  remanent 122
  rocks 121
  studies on fundamentals 43
  terrestrial 3
  termination of research in 117–18
magnetometers 3, 66
  fluxgate 37
  theodolite 34
  towed 50, 190
magnetron, resonant 111
Malvin, Stephen W. 261
Manhattan Project 119
mantle
  anisotropy 243
  boundary 188
  indicators of compositional differences 230
  isotopic signatures 230
mapping 20
maps 68
Marconi, Guglielmo 55
Marcy, Geoffrey 225–6
Marton, Ladislaus L. 176
mass spectrometry 120, 157–9, 205, 228, 269
  accelerator 233–4, 237
  argon 251
  beryllium 233–4
  chemical extraction techniques 245–6
  computer use by geochemists 185–6
  highly enriched isotopes 159
  instruments 158
  instruments, improved 227–8
  ion microprobe 248
  isotope fractionation 251
  isotopic signatures 227
  plasma 251
  technicians 260–1
  thermal 251
Mauchly, John 39
Mauchly, Sebastian J. 39
Maud (expedition ship) 65
Mayer, Maria Goeppert 127
measurements 33–9
Meitner, Lise 98
memory, long-term 146–7
Mendousse, Jean S. 134
Mercury 255
Merriam, John C. 41–3, 267–8
  nuclear physics program evaluation 77
messenger RNA 143
MESSENGER spacecraft 255
meteorites 223, 229, 246
  isotopic composition 248–9
  trajectory recording 224
  video recordings 224
meteors, ion trails 163
Meyer, Robert C. 99–100, 240, 260
Meyer, Robert P. 152
mica-window tube 177, 178
microbiology 147
microlensing 225
mid-ocean ridges
  isotope ratios 230
  magma 249
Middleton, Roy 234
Millikan, Robert A. 103
Mills Cross 165–6
Millsaps, Jefferson H. 262
mineral crystals, age determination 159–60
mines, magnetic influence 20–1
Moats, Paul L. 263
Mock, Timothy D. 260–1
Moho discontinuity 149, 150, 151–2
  boundary 156
  location 155
Mohorovičić, Andrija 149
Moon rocks 229
Morgan, Ben O. 178
Morris, Julie 236–7
  appointment 272–3
  geochemistry 245
Mott scattering 92, 126
Moulton, F. R. 221
Mount Wilson Observatory 163
mountains, origin 190
mouse satellite 143–5
multivibrator circuit 57
NASA, projects 254, 255
National Bureau of Standards 57–9
  uranium bomb 102
National Radio Astronomy Observatory (West Virginia) 166, 171–2, 203, 268–9
  90 m dish 169–70
  pressure on DTM radio astronomy 170
navigation, celestial 33
neutron reactor, controlled 100–1
neutron–proton scattering 90–3
neutrons
  delayed 100–1
  distribution 136–7
  radon–beryllium source 93
  thermal 97–8
  thermal fluxes 93
Nichols, M. L. 20–1
Nier, Alfred 158, 157–8, 159
Nittler, Larry R. 249
Nobile, Umberto 67
Noll, Hazel 262
non-magnetic huts 3
north, true 33
nuclear force 89–95
nuclear physics 204
  and Tuve 118–20
  Heydenburg's studies 125–6
  laboratory assistant for group 260
  neutron distribution 136–7
  postwar 125–32
  program evaluation 77
  skills development 75
  termination of program 204–5
  universe origins/evolution 125
nuclear spectroscopy 85
nuclear structure theory 89
nucleic acid synthesis 141–2
oceanographic work 49
olivine 243
Ontario Apartments 4, 7
Oort cloud 224
Öpik, E. J. 215
optical astronomy 173, 181, 203
  planet study 269
ore formations 35
orthicon 176, 178
orthopyroxene 243
oscillators, quartz-crystal controlled 57
oscilloscope, gas-focused cathode-ray 57
osmium
  behavior in partial melts 246
  Hawaiian hot-spot volcanoes 249
  negative ions 246
osmium-187 245–8
osmium-187/osmium-188 246
oxygen-16/oxygen-18 231–2
Pacific Ocean
  declination errors 2
  magnetic survey 9–16
paleomagnetism 120–2, 190
  plate tectonics 122
Pandit, Ben K. 250, 260
Parkinson, W. C. 26
  Huancayo observatory 53–4
  Watheroo observatory 53
PASSCAL (Program for Array Seismic Studies of the Continental Lithosphere) 240
Patrick, Lawrence 261–2
Pearson, Graham 247–8
Pedersen, Niels 262, 263
pegmatite uraninite 162
Peru
  explosion seismology 154
  see also Huancayo (Peru) observatory
Peters, William John 11, 50
photocathode
  electronic image 177
  semi-transparent 176
photoelectric devices, sensitivity 120
photoelectrons 175, 176, 177
photographic emulsions 175
photometric transiting 225
photomultiplier 1P21 175
Piggott, Charles S. 157–9
planetary systems 225–6
planetesimals 221, 222, 223
planets
  formation 223, terrestrial 205
  optical astronomy 269
  origins 221–3
plasma spectroscopy, inductively coupled 250
plasma wind 217
Plate Boundary Observatory 244
plate tectonics 122, 190, 228–9, 243–4
  motion 37
  seismicity at regions of plate subduction 244
Poe, Glenn Russell 260
  strainmeters 201–2
polarized-proton beam 128–30
position determination at sea 33–4
postdoctoral staff 272
potassium-40 161–2
potential gradient measurement 38–9
Power, Daniela 260
Pratt, J. F. 9
Pratt hypothesis 155
Pre-Cambrian rocks, global assessment 162
Presidents of Carnegie Institution see Carnegie Institution, Presidents
pressure (P) waves 188, 242
Pritchett, Henry S. 3
protein synthesis 141–2
Prothro, Lelah J. 264
proton energy 84
proton–proton scattering 89, 91, 92, 126
  flux measurement 90
  incident current measurement 90
  quantum mechanics 89, 90
protons
  elastic scattering 127, 130
  nuclear force between 93
  polarized source 129
  resonances 85
protostars 217
proximity fuze 111, 109–11, 113, 115
  antiaircraft trials 112
  manufacture 114
  prototypes 112
  radio 111–12
  use 112–14
Purcell, Edward 167
Purgathofer, Alois 181, 209
Quade, Elliott M. 261
radar 110–11
  fire direction 112
radiation, ionizing
  biological hazards 79–80, 84–5
  dosage limit 84–5
  monitoring of laboratory workers 79, 80, 84–5
radiation continuum 170
radio astronomy 120, 122, 163–73, 203, 268–9
  see also dishes, radio astronomy
radio-echo apparatus 53
radio-echo sounding 56
radio-frequency generator 134
radio fuze 111–12
radio heterodyne technique 167
radio stars 163, 166, 171
radioactive decay 120
radioactivity
  collection 38
  discovery 38, 157
  terrestrial age determinations 157
radioisotopes, artificial 101, 119, 134
  biophysics 139
radionuclides 119
Raisbeck, Grant 234
rare-earths 231
Re-depletion ages 247
Reber, Grote 120, 163
receivers, radio astronomy 167, 169
reciprocity failure 175
red shifts
  anomalous velocity distribution 211–13
  measurement 210–11
remanent magnetism 122
resistors 85
resonance ionization 245
resonant magnetron 111
rhenium
  behavior in partial melts 246
  metal affinity 246
  negative ions 246
rhenium-187, beta decay to osmium-187 245–8
ribosomal RNA 143
Rice, Nancy R. 147
Rinehart, Carl 259
RNA studies 142, 143
Roa, Pedro 261–2
Roberts, Richard B. 95, 131
  aircraft sensing 110
  antiaircraft trials of fuze 112
  biophysics 119, 139, 140
  brain studies 146–7
  delayed neutrons 100–1
  exchange across fetal membranes 140
  extrasensory perception studies 147
  fission as source of power 109
  fuze project 110–11
  nuclear physics 204
  uranium bomb 102
  uranium fission 97–101
  World War II 109
rocks
  age studies 157
  alterations 227
  isotope variations 229
  magnetism 121
  melting 227
  mobile sampling laboratory 121
  Moon 229
  strain 195
  study of age 120
Rodriguez, A. 187–8, 189
Rohrer, Urs 131
Roosevelt, Franklin D. (President) 102, 110–12
rotation curves 215
  measurement 214
rubidium-87 159–60, 161–2
Rubin, Vera C. 180–1, 184
  appointment 209, 272–3
  expansion of universe 213
  extragalactic observations 253
  galactic rotation 213–15
  image tubes 181, 203, 210
  research 209–11
  rotation of spirals 214–15
  supernova recording 215
Rumbaugh, Lynn 95
Rumble, Douglas 231
Russell, Helen 262, 263
Rutherford, Ernest 73, 89, 157
Ryan, Jeff 236–7
S-wave phase shift 126
Saa, G. 189
Sacks, I. Selwyn 188
  lithospheric plate subduction 233
  seismicity at regions of plate subduction 244
  seismology 189, 240
  seismometer design 192, 191–2, 193, 193
  strainmeters 195, 198–200
  strainmeters, borehole 197, 198, 201
  strainmeters, China 201–2
Safronov, V. S. 221, 222
Safronov–Wetherill approach 221
Salguerio, R. 187–8
samarium-147, decay into neodymium-143 231
San Andreas fault 201
Saturn
  comet path alteration 224
  synchronous gravitational perturbations 223
Scalco, Roy 261–2
scattering chamber 91
Schleigh, Brian 260
Schweizer, François 215–16
  appointment 254–5
  colliding galaxies 217
  extragalactic observations 253
scintillator, organic liquid 142
Scott, Walter E. 263
sea deflector 34
Seager, Sara 226
seamen lost overboard 18–19
Section T 111, 112
sedimentary rock studies 121
Seemann, Michael 197, 198
  strainmeters 197, 259
  strainmeters, China 201–2
seismic waves 187
  attenuation 191
  earthquake generation 239
  propagation 242
seismicity, at regions of plate subduction 244
seismographs 150, 239
seismology 149–50, 239–44
  computers 239, 260
  computer use 155, 183, 186, 241–2
  earthquake 187–94
  explosions 187
  Mid-Atlantic Ridge 255
  post World War II 269
  technicians 260
  see also explosion seismology
seismometers 150–1, 240
  data analysis 241–2
  design changes 240
  false low-frequency response 191–3
  horizontal motion 151, 192
  improvement 239
  portable 241, 240–1
  technicians 260
  three-component, broad-band 242, 242
sextants 20
shear (S) waves 90, 188, 242
  core–mantle boundary originating 242–3
Shimizu, Nobumichi 230
ships
  communication 21
  government-funded research vessels 37
  magnetic anomalies 35–6
  magnetic fields 20
  names 30–1
  sightings 21–2
  see also Carnegie (expedition ship); Galilee (expedition ship)
Shirey, Steven 232, 245–6
  geochemistry 247–8, 249–50
  xenolith 247
siderophiles 246
Silver, Paul 239–40, 242
  shear wave studies 242–3
  tectonic plate boundaries 244
Singer, A. David 263
Singer, Maxine 253–4
  candidates for Director 254
single-stranded DNA 143, 145
Sinha, Akhaury Krishna 230
Skonberg, Erik Knut 257–8, 259
slabs, subducting 193
small science 167
Smith, Albert 261
Smith, James H. C. 101–2
Smith, McClain B. 262
Smith, T. Jefferson 156
sodium-24 101
solar flares 63
  cosmic rays 105, 107
solar system 221–6
solion 195, 196
Solomon, Sean C. xviii, 249
  Director 254–5
  research interests 255
  visitors to department 255
South Africa
  borehole strainmeters 201
  Kaapvaal Craton 247
  portable seismic stations 241, 242
  xenolith 247
South America
  deep lithospheric structure 241
  encouragement of science 272
  explosion seismology 154, 189, 191
  portable seismic stations 241
  see also Andes; Huancayo (Peru) observatory; La Plata (Argentina) observatory
spectrograph 211
spectrohelioscopes 53
spin–orbit force 127
staff
  appointments 272
  women 272–3
  see also support staff
star formation 215, 216–18, 225
  clouds of gas and dust 217, 224–5
  Gum nebula 218
  two-step from large clouds 218
stars see radio stars
Steiner, William 259, 259
Steinhart, John S.
  departure from DTM 156
  seismology team 152
Sterling, Allen 29
Stoudenheimer, Richard G. 178
strainmeters 195–202
  bellows 196
  borehole 197, 198, 199, 259
  California 200–1
  China 201–2
  dc component of strain 197–8
  development 195
  electrical connection protection 202
  Iceland 200, 201
  Japan 196–200
  Sacks–Evertson 196
Strassmann, Fritz 97
strontium, Hawaiian hot-spot volcanoes 249
strontium-87 159–60, 161–2
  mineral crystal age determination 159–60
strontium-87/strontium-86
  Andes Mesozoic rocks 231
  variations in rocks 229
submarine detection 20
Sun
  formation 216
  radio emission 163
  see also solar flares
sunspots
  activity and radio emission 163
  cycle 107
Superior, Lake, seismology studies 155–6
supernovae 215
support staff 257–65
  caretakers 261
  carpenters 261
  clerical 262–3
  computer staff 264
  cyclotron technicians 260
  draftsmen 263
  electronics technicians 260–1
  gardeners 261
  instrument shop 257–60
  machinists 259, 260
  mass spectrometry technicians 260–1
  seismology technicians 260
  see also library
Suyehiro, Kiyoshi 244
Suyehiro, Shigeji 190, 197, 198
  seismometers 193
  strainmeters 197, 198
Sverdrup, Harald Ulrik 65
Swann, William Francis Gray 38, 42, 43
  radio-echo sounding 56
  solar magnetic fields 105
  World War I 20
Swantkowski, Stanley 261
swinging ship 11, 14
Tatel, Howard E. 150
  death 152
  radio astronomy 163–4
  radio stars 171
  seismology models 155
Taylor, Hoyt 57
Taylor, Milton T. 259
technology requirements of science 270
tectonic plates, boundaries 244
tektites 248
television 120, 175
Teller, Edward 97
Temmer, Georges M. 126
  Florida State University 128
  Rutgers University 234
Tera, Fouad 233
Terrestrial Magnetism: An International Quarterly 1, 3, 122
Terrestrial Magnetism and Atmospheric Electricity (journal) 45, 122
terrestrial planet formation 205
Tesla coil 74, 73–4, 80
  accelerator tube 75, 76, 78
  Van de Graaff accelerator 81
thermal ionization 159
Thonnard, Norbert 171, 213
  work with Rubin 214
thorium 157, 160
thunderstorm observations 118
Tibbetts, Emma L. 262
tidal disruption theory 221, 222
Tilton, George R. 159, 162
time signals, wireless propagation 33
Tizard Mission 110–12
Torulopsis utilis studies 141
tricarboxylic acid (TCA) cycle 141
tritium 142
  bombardment with protons 129–30
Turner, Kenneth C. 203
  La Plata observatory 170–1
  radio hydrogen measurement 170
Tuve, Merle xv, 56, 57–9, 98, 268
  all-sky hydrogen-line survey 169
  astronomer appointment 180–1
  biological hazards of ionizing radiation 79–80
  Bolton as successor 203
  changes to Department 117–22
  deuteron reactions 87
  Director of Applied Physics Laboratory 112
  Director of Department 105–7, 117
  dish at La Plata 167–9
  dispute with Berkner 166–7
  Earth crust studies 149, 150
  Forbush's work 105–7
  fuze development 109–10
  fuze project 110–11
  image tubes 176
  ionosphere studies 58, 59
  isotope geology 159
  laboratory assistant for nuclear physics groups 260
  nuclear physics 204
  nuclear structure theory 89
  opposition to National Radio Astronomy Observatory 166
  radiation hazards concerns 134–5
  radio astronomy 163–4, 172–3, 203
  radio hydrogen measurement 170
  radio stars 171
  refusal of government funds 205
  seismology 269
  seismology team 152
  Tesla coil 73
  uranium bomb 102
  Van de Graaff accelerator 81–2, 133
  work with Breit 77
  World War II 109
ultracentrifuge 142
UNIVAC computer 183
universe
  expansion 213, 218
  large-scale motions 213
  origins/evolution 125
uranium 157, 160
  decay rate 161, 162
  diffusion studies 230–1
  variations in rocks 229
uranium bomb research 102
uranium fission 97, 109
  publication suppression 101
uranium–lead method of rock age determination 161, 160–1, 162
Urey, Harold C. 85–7, 222
US Coast and Geodetic Survey 2, 3
  World Wide Standardized Seismograph Network (WWSSN) 187
USS Helena 112
V-1 flying bombs 114
Van de Graaff, Robert 77, 81
Van de Graaff accelerator 86, 81–6, 88, 133–4
  1-meter machine 84–5
  2-meter machine 85
  accelerator tube 82, 83, 131
  belt materials 84, 87
  corona current 84, 85
  current 84
  isotope source 134
  pressure-tank 87, 93, 98, 99, 99–100
  high-voltage terminal 127
  polarized proton source 129
  radioisotope production 139
  resistor 85
  tandem 128
  voltage 81–3
  X-rays 84–5
Van de Velde, Neltje 264
van de Hulst, Hendrik 167
variometers 53
Venus 165
  geophysics 255
vessel for Pacific Ocean magnetic survey 9–11
Vestine, Ernest Harry 68–71
  data publication 117–18
  geomagnetic activity 71
  World War II 115
vidicon 178
Vine, Fred 190
volcanism
  explosive 246
  kimberlite 247–8
  magma 249
volcanoes, hot-spot 249
  Hawaii 249
volcanoes, island-arc 232, 233–7
  beryllium sampling 235–7
  magma 249
  mathematical model 236
Volponi, F. 189
voltmeter, generating 85
Wait, G. R.
  thunderstorm observations 118
  Watheroo observatory 53
Walker, Richard J. 245
Wallis, William F. 30
  data acquisition 54
  Watheroo observatory 53
Wang, Jianhua 250, 260–1
Washington Conferences on Theoretical Physics 97, 140
Watheroo (Australia) observatory 51–3, 61
  eclipse observations 63
  given to Australian government 117–18
wavelengths 55
Weber, Wilhelm Eduard 2
Wegener, Alfred Lothar 188
Weinberger, Alycia 226
Weizsäcker, Carl von 222
Wells, Harry 62
  departure from DTM 170
  geomagnetic activity 71
  radio astronomy 163–4
Wetherill, George xvii, 122–3, 160–1, 162
  Director 171, 204, 253–4
  location of laboratories 205–6
  meteorite studies 223
  orbital calculations 223–4
  planet formation 223
  planetesimal study 222, 223
  relations with President 253–4
  reorganization of Department 204–5
  retirement 205
  solar system studies 221
Whitman, Winifred G. 79, 79–80, 84–5, 119, 273
Widmer, Adolf 257, 258
Wilson cloud chamber 90–1
wireless telegraph 55
Wolf, Merri 265
women staff 272–3
Wood, Harry Oscar 149–50
Wood, Waddy B. 7
Woodward, Robert Simpson
  Institution President 3
  support for Bauer 267
World War I 19–21
World War II 37, 109–15
  contracts with military 115
World Wide Standardized Seismograph Network (WWSSN) 187
Würzburg Riese paraboloid 164, 167, 171
X-rays
  discovery 38
  Van de Graaff accelerator 84–5
xenoliths 247, 246–7
  Hawaii 249
Year Book(s)
  Year Book 62 227
  Year Book 66 142–3
  Year Book 68 227
  Year Book 78 214
Yukon seismic expedition 152
zircon 160, 161