A Taste of Jordan Algebras
Kevin McCrimmon
Department of Mathematics, University of Virginia, Charlottesville, Virginia
Dedicated to the memory of Jake and Florie, in mathematico parentis. To Jake (the Doktor-Vater) for his mathematical influence on my research, and to Florie (the Doktor-Mutter) for helping me (and all Jake's students) to get to know him as a warm human being. Future histories of mathematics should take into account the role of Doktor-Mutters in the fostering of mathematics.
1. Introduction

On several occasions I and colleagues have found ourselves teaching a one-semester course for students at the second year of graduate study in mathematics who want to gain a general perspective on Jordan algebras, their structure, and their role in mathematics, or want to gain direct experience with nonassociative algebra. These students typically have a solid grounding in first-year graduate algebra and the Artin-Wedderburn theory of associative algebras, and a few have been introduced to Lie algebras (perhaps even Cayley algebras, in an offhand way), but otherwise they have not seen any nonassociative algebras. Most of them will not go on to do research in nonassociative algebra, so the course is not primarily meant to be a training or breeding ground for research, though the instructor often hopes one or two will be motivated to pursue the subject further. This text is meant to serve as an accompaniment to such a course. It is designed first and foremost to be read. It is a direct mathematical conversation between the author and a reader whose mind (as far as nonassociative algebra goes) is a tabula rasa. In keeping with the tone of a private conversation, I give more heuristic material than is common in books at this level (pep talks, philosophical pronouncements on the proper way to think about certain concepts, random historical anecdotes, offhand mention of some mathematicians who have contributed to our understanding of Jordan algebras, etc.), and employ a few English words which do not standardly appear in mathematical works.
It is important for the reader to develop a visceral intuitive feeling for the subject, to view the mathematics as a living and active thing: to see isomorphisms as cloning maps, isotopes as subtle rearrangements of an algebra's DNA, radicals as pathogens to be isolated and removed by radical surgery, annihilators as biological agents for killing off elements, Peircers as mathematical enzymes ("Jordan-ase") which break an algebra down into its Peirce spaces. Like Charlie Brown's kite-eating trees, Jordan theory has Zel'manov's tetrad-eating ideals (though we shall stay clear of these carnivores in our book). The book is intended for students to read on their own without assistance by a teacher. In particular, I have tried to make the proofs complete and understandable, giving much more heuristic and explanatory comment than is usual in graduate texts. To help the reader through the proofs in Parts III, IV (and the proof-sketches in Part II, Chapter 8), I have tried to give each important result or formula a mnemonic label, so that when I refer to an earlier result, instead of saying "by Formula 21.3(i), which of course you will remember, ..." I
can say "by Nuclear Slipping 21.3(i)", hoping to trigger long-repressed memories of a formula involving nuclear elements of alternative algebras. While I wind up doing most of the talking, there is some room in Parts III and IV for the reader to participate (and stay mathematically fit) by doing exercises. The Exercises give slight extensions, or alternate proofs, of results in the text, and are placed immediately after the results; they give practice in proving variations on the previous mathematical theme. At the end of each chapter I gather a few problems and questions. The Problems usually take the form "Prove that something-or-other"; they involve deeper investigations or lengthier digressions than exercises, and develop more extensive proof skills on a new theme. The Questions are more open-ended, taking the form "What can you say about something-or-other" without giving a hint which way the answer goes; they develop proof skills in uncharted territories, in composing a mathematical theme from scratch (most valuable for budding researchers). Hints are given at the back of the book for some of the exercises, problems, and questions (though these should be consulted only after a good-faith effort to prove them). Part I is in the nature of an extended colloquium talk, a brief survey of the life and times of Jordan algebras, to provide appreciation of the role Jordan algebras play on the broader stage of mathematics. I indicate several applications to other areas of mathematics: Lie algebras, differential geometry, and projective geometry. Since the students at this level cannot be assumed to be familiar with all these areas, the description has to be a bit loose; readers can glean from this part just enough respect and appreciation to sanction and legitimate their investment in reading further. Part II is designed to provide an overview of Jordan structure theory in its historical context.
It gives a general historical survey from the origins in quantum mechanics in 1934 to Efim Zel'manov's breathtaking description of arbitrary simple algebras in 1983 (which later played a role in his Fields Medal work on the Burnside Problem). I give precise definitions and examples, but omit proofs. In keeping with its nature, I have not included any exercises. Parts III and IV are designed to provide direct experience with nonassociativity, and either one (in conjunction with Part I) could serve as a basis for a one-semester course. Throughout, I stick to linear Jordan algebras over rings of scalars containing 1/2, but give major emphasis to the quadratic point of view. Part III gives a development of Jacobson's classical structure theory for Jordan algebras with capacity, in complete detail and with full
proofs. It is suitable for a one-semester course aiming to introduce students to the methods and techniques of nonassociative algebra. The details of Peirce decompositions, Peirce relations, and coordinatization theorems are the key tools leading to the Classical Structure Theorem. Part IV gives a full treatment of Zel'manov's Exceptional Theorem, that the only simple i-exceptional Jordan algebras are the Albert algebras, closing the historical search for an exceptional setting for quantum mechanics. This part is much more concerned with understanding and translating to the Jordan setting some classical ideas of associative theory, including primitivity; it is suitable for a one-semester course aiming to introduce students to the modern methods of Jordan algebras. The ultrafilter argument, that if primitive systems come in only a finite number of flavors then a prime system must come in one of those pure flavors, is covered in full detail; ultrafilters provide a useful tool that many students at this level are unacquainted with. I have dedicated the book to Nathan and Florie Jacobson, both of whom passed away during this book's long gestation period. They had an enormous influence on my mathematical development. I am greatly indebted to my colleague Kurt Meyberg, who carefully read through Part III and made many suggestions which vastly improved the exposition. I am also deeply indebted to my colleague Wilhelm Kaup, who patiently corrected many of my misconceptions about the role of Jordan theory in differential geometry, improving the exposition in Part I and removing flagrant errors. My colleague John Faulkner helped improve my discussion of applications to projective geometries. I would also like to thank generations of graduate students at Virginia who read and commented upon the text, especially my students Jim Bowling, Bernard Fulgham, Dan King, and Matt Neal.
Index of Notations
General Typographical Conventions
Rings of scalars (unital, commutative, associative rings) are indicated by capital Greek letters Φ, Ω. Scalars are denoted by lower case Greek letters: α, β, γ, ... Almost all our algebraic systems will be algebras or modules over a fixed ring of scalars Φ, which will almost always contain an element 1/2. Mere sets are indicated by italic capital letters X, Y, Z at the end of the alphabet, index sets also by I, J, S. Modules and linear spaces are denoted by italic capital letters: A, B, C, J, V, W, .... The zero subspace will be denoted by boldface 0 to distinguish it from the element (operator, vector, or scalar) 0. This signals a subtle and not-too-important distinction between the set 0 = {0} consisting of a single element zero, and the element itself. The range f(A) of some function on a set A will always be a set, while the value f(a) will be an element. Algebraic systems are denoted by letters in small caps: general linear algebras by A, B, C, ideals by I, J, K. Associative algebras are indicated by D when they appear as coordinates for Jordan algebras. Jordan algebras are indicated by J, J_i, J_0, etc. Maps or functions between sets or spaces are denoted by italic lower case letters f, g, h, ..., morphisms between algebraic systems often by lower case Greek letters φ, ψ, ..., sometimes upper case italic letters T, S. Functors and functorial constructions are denoted by script capital letters F, G, H, T, ....

Specific Notations
The identity map on a set X is denoted by 1_X. The projection of a set on a quotient set is denoted by π : X → X/∼; the coset or equivalence class of x ∈ X is denoted by π(x) = x̄ = [x]. Cartesian products of sets are denoted X × Y. Module direct sums are indicated by V ⊕ W. Algebra direct sums, where multiplication as well as addition is performed componentwise, are written A ⊞ B to distinguish them from mere module direct sums. Subsets are denoted X ⊆ Y, with strict inclusion denoted by X ⊂ Y or X < Y; subspaces of linear spaces are denoted A ≤ B, while kernels (submodules of modules, ideals of algebras) are indicated by B ◁ A. Left and right ideals of algebras are denoted by I ◁_ℓ A and I ◁_r A, with B ◁ A for two-sided ideals.
Involutions on algebras are indicated by a star ∗ (though involutions on coordinate algebras of matrix algebras are often denoted x → x̄). H(A, ∗) denotes the hermitian elements x∗ = x of an algebra A under an involution ∗. Products in algebras are denoted by x · y or just xy (especially for associative products); the special symbol x • y is used for the bilinear product in Jordan algebras. The left and right multiplication operators by an element x in a linear algebra are denoted L_x, R_x or ℓ_x, r_x or λ_x, ρ_x. The quadratic and trilinear products in Jordan algebras are denoted by U_x y and {x, y, z}, with operators U_x, V_{x,y} (in Jordan triples P_x, L_{x,y}, in Jordan pairs Q_x, D_{x,y}). Unit elements of algebras are denoted by 1. We will speak of a unit element and unital algebras rather than identity element and algebras with identity; we will reserve the term identity for identical relations or laws (such as the Jordan identity or associative law). Â will denote the formal unital hull Φ-algebra Φ1 ⊕ A obtained by formal adjunction of a unit element. n × n matrices and hermitian n × n matrices are denoted M_n and H_n. Matrices are denoted by X = (x_ij); their traces and determinants are denoted by tr(X), det(X) respectively, and the transpose is indicated by X^tr. If the matrix entries come from a ring with involution, the adjoint (the conjugate transpose with ij-entry x̄_ji) is denoted rather anonymously by a star, X∗. Blackboard bold is used for the standard systems N (natural numbers 1, 2, ...), I (the even-more-natural numbers 0, 1, 2, ... used as indices or cardinals), Z (the ring of integers), the fields Q (rational numbers), R (real numbers), C (complex numbers), the real division rings H (Hamilton's quaternions), K (Cayley's octonions), O (split octonions), A (the Albert algebra, a formally-real exceptional Jordan algebra).
Contents

1. Introduction

Part I. A Colloquial Survey of Jordan Theory
1. Origin of the Species
2. The Jordan River
3. Links with Lie Algebras and Groups
4. Links with Differential Geometry
5. Links with the Real World
6. Links with the Complex World
7. Links with the Infinitely Complex World
8. Links with Projective Geometry
9. Conclusion

Part II. The Historical Perspective: An Historical Survey of Jordan Structure Theory

Chapter 1. Jordan Algebras in Physical Antiquity
1. The matrix interpretation of quantum mechanics
2. The Jordan Program
3. The Jordan Operations
4. The Jordan Axioms
5. The Basic Examples
6. Special and Exceptional
7. Classification

Chapter 2. Jordan Algebras in the Algebraic Renaissance
1. Linear Algebras over General Scalars
2. Categorical Nonsense
3. Commutators and Associators
4. Lie and Jordan Algebras
5. The 3 Basic Examples Revisited
6. Jordan Matrix Algebras
7. Forms Permitting Composition
8. Composition Algebras
9. Split Composition Algebras
10. Classification

Chapter 3. Jordan Algebras in the Enlightenment
1. Forms of Algebras
2. Inverses and Isotopes
3. Twisted Hermitian Algebras
4. Spin and Quadratic Factors
5. Cubic Factors
6. Classification

Chapter 4. The Classical Theory
1. U-Operators
2. Quadratic Axioms
3. Inverses
4. Isotopes
5. Inner Ideals
6. Nondegeneracy
7. i-Special and i-Exceptional
8. Artin-Wedderburn-Jacobson Structure Theorem

Chapter 5. The Final Classical Formulation
1. Capacity
2. Classification

Chapter 6. The Classical Methods
1. Peirce Decompositions
2. Coordinatization
3. Minimum Condition

Chapter 7. The Russian Revolution: 1977-1983
1. The Lull Before the Storm
2. The First Tremors
3. The Main Quake
4. Aftershocks

Chapter 8. Zel'manov's Exceptional Methods
1. I-Finiteness
2. Absorbers
3. The Primitive Heart
4. The Big Primitive Classification
5. Semiprimitive Imbedding
6. The Ultraproduct Argument

Part III. The Classical Theory

Chapter 1. The Category of Jordan Algebras
1. Categories
2. The Category of Linear Algebras
3. The Category of Unital Algebras
4. Unitalization
5. The Category of Algebras with Involution
6. The Category of Jordan Algebras
7. The Centroid
8. Problems for Chapter 1

Chapter 2. The Category of Alternative Algebras
1. The Category of Alternative Algebras
2. Nuclear Involutions
3. Composition Algebras
4. The Cayley-Dickson Construction
5. The Hurwitz Theorem
6. Problems for Chapter 2

Chapter 3. Three Special Examples
1. Full Type
2. Hermitian Type
3. Quadratic Form Type
4. Reduced Spin Factors
5. Problems for Chapter 3

Chapter 4. Jordan Algebras of Cubic Forms
1. Cubic Maps
2. The General Construction
3. The Freudenthal Construction
4. The Tits Constructions
5. Problems for Chapter 4

Chapter 5. Two Basic Principles
1. The Macdonald and Shirshov-Cohn Principles
2. Fundamental Formulas
3. Nondegeneracy
4. Problems for Chapter 5

Chapter 6. Inverses
1. Jordan Inverses
2. Linear and Nuclear Inverses
3. Problems for Chapter 6

Chapter 7. Isotopes
1. Nuclear Isotopes
2. Jordan Isotopes
3. Quadratic Factor Isotopes
4. Cubic Factor Isotopes
5. Matrix Isotopes
6. Problems for Chapter 7

Chapter 8. Peirce Decomposition
1. Peirce Decompositions
2. Peirce Multiplication Rules
3. Basic Examples of Peirce Decompositions
4. Peirce Identity Principle
5. Problems for Chapter 8

Chapter 9. Off-Diagonal Rules
1. Peirce Specializations
2. Peirce Quadratic Forms
3. Problems for Chapter 9

Chapter 10. Peirce Consequences
1. Diagonal Consequences
2. Diagonal Isotopes
3. Problems for Chapter 10

Chapter 11. Spin Coordinatization
1. Spin Frames
2. Strong Spin Coordinatization
3. Spin Coordinatization
4. Problems for Chapter 11

Chapter 12. Hermitian Coordinatization
1. Cyclic Frames
2. Strong Hermitian Coordinatization
3. Hermitian Coordinatization

Chapter 13. Multiple Peirce Decompositions
1. Decomposition
2. Recovery
3. Multiplication
4. The Matrix Archetype
5. The Peirce Principle
6. Modular Digression
7. Problems for Chapter 13

Chapter 14. Multiple Peirce Consequences
1. Jordan Coordinate Conditions
2. Peirce Specializations
3. Peirce Quadratic Forms
4. Connected Idempotents

Chapter 15. Hermitian Symmetries
1. Hermitian Frames
2. Hermitian Symmetries
3. Problems for Chapter 15

Chapter 16. The Coordinate Algebra
1. The Coordinate Triple

Chapter 17. Jacobson Coordinatization
1. Strong Coordinatization
2. General Coordinatization

Chapter 18. Von Neumann Regularity
1. vNr Pairing
2. Structural Pairing
3. Problems and Questions for Chapter 18

Chapter 19. Inner Simplicity
1. Minimal inner ideals
2. Problems for Chapter 19

Chapter 20. Capacity
1. Capacity Existence
2. Connected Capacity
3. Problems for Chapter 20

Chapter 21. Herstein-Kleinfeld-Osborn Theorem
1. Alternative Algebras Revisited
2. A Brief Tour of the Alternative Nucleus
3. Herstein-Kleinfeld-Osborn Theorem
4. Problems and Questions for Chapter 21

Chapter 22. Osborn's Capacity 2 Theorem
1. Commutators
2. Capacity Two

Chapter 23. Classical Classification

Part IV. Zel'manov's Exceptional Theorem

Chapter 1. The Radical
1. Invertibility
2. Quasi-Invertibility
3. Proper Quasi-Invertibility
4. Elemental Characterization
5. Radical Inheritance
6. Radical Surgery
7. Problems and Questions for Chapter 1

Chapter 2. Begetting and Bounding Idempotents
1. I-gene
2. Algebraic implies I-genic
3. I-genic Nilness
4. I-Finiteness
5. Problems for Chapter 2

Chapter 3. Bounded Spectra Beget Capacity
1. Spectra
2. Bigness
3. Evaporating Division Algebras
4. Spectral Bounds and Capacity
5. Problems for Chapter 3

Chapter 4. Absorbers of Inner Ideals
1. Linear Absorbers
2. Quadratic Absorbers
3. Absorber Nilness
4. Problems for Chapter 4

Chapter 5. Primitivity
1. Modularity
2. Primitivity
3. Semiprimitivity
4. Imbedding Nondegenerates in Semiprimitives
5. Problems for Chapter 5

Chapter 6. The Primitive Heart
1. Hearts and Spectra
2. Primitive Hearts

Chapter 7. Filters and Ultrafilters
1. Filters in General
2. Filters from Primes
3. Ultimate Filters
4. Problems for Chapter 7

Chapter 8. Ultraproducts
1. Ultraproducts
2. Examples
3. Problems for Chapter 8

Chapter 9. The Final Argument
1. Dichotomy
2. The Prime Dichotomy
3. Problems for Chapter 9

Part V. Appendices

Appendix A. Cohn's Special Theorems
1. Free Gadgets
2. Cohn Symmetry
3. Cohn Speciality

Appendix B. Macdonald's Theorem
1. The Free Jordan Algebra
2. Identities
3. Normal Form for Multiplications
4. The Macdonald Principles
5. Albert i-Exceptionality
6. Problems for Appendix B

Appendix C. Jordan Algebras of Degree 3
1. Jordan matrix algebras
2. The general construction
3. The Freudenthal Construction
4. The Tits Construction
5. Albert division algebras
6. Problems for Appendix C

Appendix D. The Density Theorem
1. Semisimple Modules
2. The Jacobson-Bourbaki Density Theorem

Appendix E. Hints
1. Hints for Part III
2. Hints for Part IV

Appendix F. Indexes
1. Pronouncing Index
2. Index of Named Theorems
3. Index of Definitions
4. Index of Terms
Part I A Colloquial Survey of Jordan Theory
In this Survey I want to sketch what Jordan algebras are, and why people might want to study them. On a philosophical level, in the rapidly expanding universe of mathematics it is important for you to have a feeling for the overall connectedness of things, to remember the common origins and the continuing ties that bind distant areas of mathematics together. On a practical level, before you plunge into a detailed study of Jordan algebras in the rest of the book, it is important for you to have an overall picture of their "Sitz im Leben", to be aware of the larger Jordan family of algebraic systems (algebras, triples, pairs, and superalgebras), and to have some appreciation of the important role this family plays in the mathematical world at large. Jordan algebras were created to illuminate a particular aspect of physics, quantum-mechanical observables, but turned out to have illuminating connections with many areas of mathematics. Jordan systems arise naturally as "coordinates" for Lie algebras having a grading into 3 parts. The physical investigation turned up one unexpected system, an "exceptional" 27-dimensional simple Jordan algebra, and it was soon recognized that this exceptional Jordan algebra could help us understand the 5 exceptional Lie algebras. Later came surprising applications to differential geometry, first to certain symmetric spaces, the self-dual homogeneous cones in real n-space, and then a deep connection with bounded symmetric domains in complex n-space. In these cases the algebraic structure of the Jordan system encodes the basic geometric information for the associated space or domain. Once more the exceptional geometric spaces turned out to be connected with the exceptional Jordan algebra. Another surprising application of the exceptional Jordan algebra was to octonion planes in projective geometry; once these planes were realized in terms of the exceptional Jordan algebra, it became possible to describe their automorphisms.
This presentation is meant for a general graduate-level mathematical audience (graduate students rather than experts in Lie algebras, differential geometry, etc.). As in any colloquium, the audience is not expected to be acquainted with all the mathematical subjects discussed. There are many other applications (to differential equations, genetics, probability, statistics), but I won't try to present a compendium of applications here; this survey can only convey a taste of how Jordan algebras come to play a role in several important subjects. It will suffice if the reader comes to understand that Jordan algebras are not an eccentric axiomatic system, but a mathematical structure which arises naturally and usefully in a wide range of mathematical settings.
1. Origin of the Species

Jordan algebras arose from the search for an "exceptional" setting for quantum mechanics. In the usual interpretation of quantum mechanics (the "Copenhagen model"), the physical observables are represented by self-adjoint or Hermitian matrices (or operators on Hilbert space). The basic operations on matrices or operators are multiplication by a complex scalar, addition, multiplication of matrices (composition of operators), and forming the complex conjugate transpose matrix (adjoint operator). But these underlying matrix operations are not "observable": the scalar multiple of a hermitian matrix is not again hermitian unless the scalar is real, the product is not hermitian unless the factors happen to commute, and the adjoint is just the identity map on hermitian matrices. In 1932 the physicist Pascual Jordan proposed a program to discover a new algebraic setting for quantum mechanics, which would be freed from dependence on an invisible all-determining metaphysical matrix structure, yet would enjoy all the same algebraic benefits as the highly-successful Copenhagen model. He wished to study the intrinsic algebraic properties of hermitian matrices, to capture these properties in formal algebraic axioms, and then to see what other possible nonmatrix systems satisfied these axioms.
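The failure of observability for the raw matrix operations is easy to see numerically. The following sketch (my own illustration, not from the text) uses NumPy to check that the ordinary product of two random hermitian matrices is generally not hermitian, while the symmetrized product xy + yx always is.

```python
# Numerical illustration (not from the book): hermitian matrices are not
# closed under the ordinary matrix product, but are closed under the
# symmetrized product xy + yx.
import numpy as np

rng = np.random.default_rng(0)

def random_hermitian(n):
    """Return a random n-by-n complex hermitian matrix."""
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (a + a.conj().T) / 2

def is_hermitian(m):
    return np.allclose(m, m.conj().T)

x, y = random_hermitian(3), random_hermitian(3)

print(is_hermitian(x @ y))          # False for generic x, y: xy is not observable
print(is_hermitian(x @ y + y @ x))  # True: the symmetrized product stays hermitian
```

(The product x @ y is hermitian only when x and y happen to commute, which random matrices essentially never do.)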
1.1. Jordan algebras. The first step in analyzing the algebraic properties of hermitian matrices or operators was to decide what the basic observable operations were. There are many possible ways of combining hermitian matrices to get another hermitian matrix, but after some empirical experimentation Jordan decided that they could all be expressed in terms of quasi-multiplication

x • y := (1/2)(xy + yx)

(we now call this symmetric bilinear product the Jordan product). If x and y are hermitian, so is the symmetric product xy + yx (but not, in general, the product xy itself). Thus in addition to its observable linear structure as a real vector space, the model carried a basic observable product, quasi-multiplication. The next step in the empirical investigation of the algebraic properties enjoyed by the model was to decide on the crucial formal axioms or laws obeyed by the operations on hermitian matrices. Jordan thought the key law governing quasi-multiplication, besides its obvious commutativity, was

x² • (y • x) = (x² • y) • x
(we now call this equation of degree four in two variables the Jordan identity, in the sense of an identical relation satisfied by all elements). Quasi-multiplication satisfied the additional "positivity" condition that a sum of squares never vanishes, which (in analogy with the recently-invented formally-real fields) was called formal-reality. The outcome of all this experimentation was a distillation of the algebraic essence of quantum mechanics into an axiomatically defined algebraic system.

Definition. A Jordan algebra consists of a real vector space equipped with a bilinear product x • y satisfying the commutative law and the Jordan identity:

x • y = y • x,   (x² • y) • x = x² • (y • x).

A Jordan algebra is formally-real if

x₁² + ... + xₙ² = 0 ⟹ x₁ = ... = xₙ = 0.

Any associative algebra A over R gives rise to a Jordan algebra A⁺ under quasi-multiplication: the product x • y := (1/2)(xy + yx) is clearly commutative, and satisfies the Jordan identity since

4(x² • y) • x = (x²y + yx²)x + x(x²y + yx²) = x²yx + xyx² + yx³ + x³y = x²(yx + xy) + (yx + xy)x² = 4x² • (y • x).

A Jordan algebra is called special if it can be realized as a Jordan subalgebra of some A⁺. For example, if A carries an involution ∗ then the subspace of hermitian elements x∗ = x is also closed under the Jordan product, since if x∗ = x, y∗ = y then (x • y)∗ = y∗ • x∗ = y • x = x • y, and therefore forms a special Jordan algebra H(A, ∗). These hermitian algebras are the archetypes of all Jordan algebras. It is easy to check that the hermitian matrices over the reals, complexes, and quaternions form special Jordan algebras which are formally-real. One obtains another special formally-real Jordan algebra (which we now call a spin factor JSpinₙ) on the space R1 ⊕ Rⁿ for n ≥ 2, by making 1 act as unit and defining the product of vectors v, w in Rⁿ to be given by the dot or inner product, v • w := ⟨v, w⟩1. In a special Jordan algebra the algebraic structure is derived from an ambient associative structure xy via quasi-multiplication.
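These axioms are easy to test by machine. Here is a small sketch of my own (assumptions: NumPy, random hermitian matrices standing in for A⁺, and the spin-factor product on R1 ⊕ Rⁿ) checking commutativity and the Jordan identity in both basic examples.

```python
# Sanity checks (mine, not the author's): quasi-multiplication on hermitian
# matrices and the spin-factor product both satisfy the commutative law and
# the Jordan identity x^2 • (y • x) = (x^2 • y) • x.
import numpy as np

rng = np.random.default_rng(1)

def herm(n):
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (a + a.conj().T) / 2

def jprod(x, y):                       # Jordan product x • y = (xy + yx)/2
    return (x @ y + y @ x) / 2

x, y = herm(4), herm(4)
x2 = jprod(x, x)
assert np.allclose(jprod(x, y), jprod(y, x))                        # x • y = y • x
assert np.allclose(jprod(x2, jprod(y, x)), jprod(jprod(x2, y), x))  # Jordan identity

# Spin factor JSpin_n: elements (a, v) = a1 + v, with
# (a, v) • (b, w) = (ab + <v, w>, aw + bv).
def sprod(p, q):
    (a, v), (b, w) = p, q
    return (a * b + v @ w, a * w + b * v)

p = (rng.normal(), rng.normal(size=5))
q = (rng.normal(), rng.normal(size=5))
p2 = sprod(p, p)
lhs, rhs = sprod(p2, sprod(q, p)), sprod(sprod(p2, q), p)
assert np.isclose(lhs[0], rhs[0]) and np.allclose(lhs[1], rhs[1])   # Jordan identity
print("both examples satisfy the Jordan axioms")
```

A random check of course proves nothing, but failing it would instantly expose a wrong axiom; the identities here hold to floating-point precision for every sample.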
What the physicists were looking for, of course, were Jordan algebras where there is no invisible structure xy governing the visible structure x • y from behind the scenes. A Jordan algebra is called exceptional if it is not special, i.e., if it does not result from quasi-multiplication.
1.2. The Jordan classification. Having settled on the basic axioms for his systems, it remained to find exceptional Jordan algebras. Jordan hoped that by studying finite-dimensional algebras he could find families of simple exceptional algebras Eₙ parametrized by natural numbers n, so that letting n go to infinity would provide a suitable infinite-dimensional exceptional home for quantum mechanics. In a fundamental 1934 paper, Jordan, John von Neumann, and Eugene Wigner showed that every finite-dimensional formally-real Jordan algebra is a direct sum of a finite number of simple ideals, and that there are only 5 basic types of simple building blocks: 4 types of hermitian n × n matrix algebras Hₙ(C) corresponding to the 4 real division algebras C [the reals R, complexes C, Hamilton's quaternions H, and the Cayley algebra or Cayley's octonions K] of dimensions 1, 2, 4, 8 (but for the octonions only n ≤ 3 is allowed), together with the spin factors. There were two surprises in this list, two new structures which met the Jordan axioms but weren't themselves hermitian matrices: the spin factor and H₃(K). While the spin factor was not one of the invited guests, it was related to the guest of honor: it can be realized as a certain subspace of all hermitian 2ⁿ × 2ⁿ real matrices, so it too is special. The other uninvited guest, H₃(K), was quite a different creature. It did not seem to be special, since its coordinates came from the nonassociative coordinate Cayley algebra K, and A. A. Albert showed that it is indeed an exceptional Jordan algebra of dimension 27 (we now call such 27-dimensional exceptional algebras Albert algebras, and denote H₃(K) by A).
1.3. The physical end of exceptional algebras. These results were deeply disappointing to physicists: there was only one exceptional algebra in this list, for n = 3; for n > 3 the algebra Hₙ(K) is not a Jordan algebra at all, for n = 2 it is isomorphic to JSpin₉, and for n = 1 it is just R⁺. This lone exceptional algebra H₃(K) was too tiny to provide a home for quantum mechanics, and too isolated to give a clue as to the possible existence of infinite-dimensional exceptional algebras. Half a century later the brilliant young Novosibirsk mathematician Efim Zel'manov quashed all remaining hopes for such an exceptional system. In 1979 he showed that even in infinite dimensions there are no simple exceptional Jordan algebras other than Albert algebras: as it is written, ... and there is no new thing under the sun, especially in the way of exceptional Jordan algebras; unto mortals the Albert algebra alone is given.
In 1983 Zel'manov proved the astounding theorem that any simple Jordan algebra, of arbitrary dimension, is either (1) an algebra of hermitian elements H(A, ∗) for a ∗-simple associative algebra with involution, (2) an algebra of spin type determined by a nondegenerate quadratic form, or (3) an Albert algebra of dimension 27 over its center. This brought an end to the search for an exceptional setting for quantum mechanics: it is an ineluctable fact of mathematical nature that simple algebraic systems obeying the basic laws of Jordan must (outside of dimension 27) have an invisible associative support behind them.
1.4. Special-Identities. While physicists abandoned the poor orphan child of their theory, the Albert algebra, algebraists adopted it and moved to new territories. This orphan turned out to have many surprising and important connections with diverse branches of mathematics. Actually, the child should never have been conceived in the rst place: it does NOT obey all the algebraic properties of the Copenhagen Model, and so was in fact unsuitable as a home for quantum mechanics, not super cially due to its nite-dimensionality, but genetically because of its unsuitable algebraic structure. In 1963 Jacobson's student C.M. Glennie discovered two identities satis ed by Hermitian matrices (indeed, by all special Jordan algebras) but NOT satis ed by the Albert algebra. Such identities are called special identities (or s-identities) since they are satis ed by all special algebras but not all Jordan algebras, and so serve to separate the special from the nonspecial. Jordan can be excused for missing these identities, of degree 8,9 in 3 variables, since they cannot even be intelligibly expressed without using the (then new-fangled) quadratic Jordan product and Jordan triple product Ux y := 2x (x y ) x2 y; fx; y; zg := 2 x (y z) + (x y) z (x z) y (corresponding to xyx and xyz + zyx in special Jordan algebras). In 1958 I.G. Macdonald had established that this U -operator satis ed a very simple identity, which Jacobson called the Fundamental Formula:
U_{U_x(y)} = U_x U_y U_x.

In terms of these products, Glennie's identities take the none-too-memorable forms
G8: H_8(x,y,z) = H_8(y,x,z),
G9: H_9(x,y,z) = H_9(y,x,z),
where
H_8(x,y,z) := {U_x U_y z, z, x•y} − U_x U_y U_z (x•y),
H_9(x,y,z) := 2U_x z • U_{y,x} U_z y − 2U_x U_z U_{x,y} U_y z.

Observe that G8, G9 vanish in special algebras since H_8, H_9 reduce to the symmetric 8-tad and 9-tad {x,y,z,y,x,z,x,y}, {x,z,x,x,z,y,y,z,y} respectively [by an n-tad we mean the symmetric associative product {x_1, ..., x_n} := x_1 ⋯ x_n + x_n ⋯ x_1]. In 1987 Armin Thedy came out of right field (or rather, right alternative algebras) carrying an operator s-identity of degree 10 in 3 variables that finally a mortal could remember:

T10: U_{U_{[x,y]}(z)} = U_{[x,y]} U_z U_{[x,y]},   where U_{[x,y]} := 4U_{x•y} − 2(U_x U_y + U_y U_x).

This is just a "fundamental formula" or "structural condition" for the U-operator of the "commutator" [x,y]. Notice that this vanishes on all special algebras because U_{[x,y]} really is the map z ↦ (xy + yx)z(xy + yx) − 2(xyzyx + yxzxy) = [x,y]z[x,y], which clearly satisfies the fundamental formula. Of course, there is no such thing as a commutator in a Jordan algebra (in special Jordan algebras J ⊆ A⁺ the commutators do exist in the associative envelope A), but these spiritual entities still manifest their presence by acting on the Jordan algebra. The quick modern proof that the Albert algebra A is exceptional is to show that for judicious choice of x, y, z the polynomial G8, G9, or T10 does not vanish. Thus it was serendipitous that the Albert algebra was allowed onto the mathematical stage in the first place. Many algebraic structures are famous for 15 minutes, then disappear from the action, but others go on to feature in a variety of settings (keep on ticking, like the bunny) and prove to be an enduring part of the mathematical landscape. So it was with the Albert algebra. This Survey will describe some of the places where Jordan algebras, especially the Albert algebra, have played a starring role.
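The quadratic product and the Fundamental Formula are easy to probe by machine in a special Jordan algebra. The following numpy sketch (the matrix size and random seed are arbitrary illustrative choices, not from the text) checks that U_x y := 2x•(x•y) − x²•y collapses to xyx on symmetric matrices, and that the Fundamental Formula holds there:

```python
import numpy as np

rng = np.random.default_rng(0)

def sym(n):
    """Random real symmetric matrix: an element of the special Jordan algebra H_n(R)."""
    a = rng.standard_normal((n, n))
    return a + a.T

def jordan(x, y):
    """Bullet product x . y = (xy + yx)/2."""
    return (x @ y + y @ x) / 2

def U(x, y):
    """Quadratic product U_x y = 2 x.(x.y) - x^2 . y; in H_n(R) this is xyx."""
    return 2 * jordan(x, jordan(x, y)) - jordan(x @ x, y)

x, y, z = sym(4), sym(4), sym(4)

# In a special Jordan algebra the U-operator collapses to U_x y = x y x ...
assert np.allclose(U(x, y), x @ y @ x)
# ... and the Fundamental Formula U_{U_x(y)} = U_x U_y U_x holds:
assert np.allclose(U(U(x, y), z), U(x, U(y, U(x, z))))
print("Fundamental Formula verified")
```

The same bullet-product definition of U also works in the Albert algebra, where no ambient associative product xyx is available.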
2. The Jordan River

The stream of Jordan theory originates in Jordan algebras, but soon divides into several algebraic structures (quadratic Jordan algebras, Jordan triples, Jordan pairs, and Jordan superalgebras). All these more general systems take their basic genetic structure from the parental algebras, but require their own special treatment and analysis, and result in new fields of application. Although this book is entirely concerned with algebras, any student of Jordan algebras must be aware of the extended family of Jordan systems to which they belong.
2.1. Quadratic Jordan Algebras. In their mature roles, Jordan algebras appear not just wearing a Jordan product, but sporting a powerful quadratic product as well. It took audiences some getting used to this new product, since (unlike quasi-multiplication, Lie bracket, dot products, or any of the familiar algebraic products) it is not bilinear: its polarized version is a triple product trilinear in 3 variables, but it is itself a binary product quadratic in one variable and linear in the other. Over the years algebraists had developed a comprehensive theory of finite-dimensional Jordan algebras over arbitrary fields of characteristic different from 2. But it was clear that quasi-multiplication, with its reliance on a scalar ½, was not sufficient for a theory of Jordan algebras in characteristic 2, or over arbitrary rings of scalars. In particular, there was no clear notion of Jordan rings (where the ring of scalars was the integers). For example, arithmetic investigations led naturally to Hermitian matrices over the integers, and residue class fields led naturally to characteristic 2. In the 1960's several lines of investigation revealed the crucial importance of the quadratic product U_x(y) and the associated triple product {x,y,z} in Jordan theory. Kantor, and Koecher and his students, showed that the triple product arose naturally in connections with Lie algebras and differential geometry. Nathan Jacobson and his students showed how these products facilitated many purely algebraic constructions. The U-operator of "two-sided multiplication" U_x by x has the somewhat messy form U_x = 2L_x² − L_{x²} in terms of left multiplications in the algebra, and the important V-operator of "left multiplication" V_{x,y}(z) := {x,y,z} := (U_{x+z} − U_x − U_z)(y) by x, y in the Jordan triple product has the even-messier form V_{x,y} = 2(L_{x•y} + [L_x, L_y]). In Jordan algebras with a unit element (1•x = x) we have U_1(x) = x and x² = U_x(1).
The brace product is the linearization of the square, {x,y} := U_{x,y}(1) = {x,1,y} = 2x•y, and we can recover the linear product from the quadratic product. We will see that the brace product and its multiplication operator V_x(y) := {x,y} are in many situations more natural than the bullet product x•y and L_x. The crucial property of the quadratic product was the so-called Fundamental Formula U_{U_x(y)} = U_x U_y U_x; this came up in several situations, and seemed to play a role in the Jordan story like that of the associative law L_{xy} = L_x L_y in the associative story. After a period of experimentation (much like Jordan's original investigation of quasi-multiplication), it was found that the entire theory of unital Jordan algebras could be based on the U-operator. A unital quadratic Jordan
algebra is a space together with a distinguished element 1 and a product U_x(y) linear in y and quadratic in x, which is unital and satisfies the Commuting Formula and the Fundamental Formula:

U_1 = 1_J,   U_x V_{y,x} = V_{x,y} U_x,   U_{U_x(y)} = U_x U_y U_x.

These are analogous to the axioms for the bilinear product x•y of old-fashioned unital linear Jordan algebras,

L_1 = 1_J,   L_x = R_x,   L_{x²} L_x = L_x L_{x²},

in terms of left and right multiplications L_x, R_x. Like Zeus shoving aside Cronos, the quadratic product has largely usurped governance of the Jordan domain (though to this day pockets of the theory retain the old faith in quasi-multiplication). Moral: the story of Jordan algebras is not the story of a nonassociative product x•y, it is the story of a quadratic product U_x y which is about as associative as it can be.
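In a special algebra J ⊆ A⁺ these axioms can be checked directly, since there U_x y = xyx and {x,y,z} = xyz + zyx. A small numpy sketch (sizes and seed are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
x, y, z = (rng.standard_normal((n, n)) for _ in range(3))
x, y, z = x + x.T, y + y.T, z + z.T   # elements of H_n(R)

def U(a, b):
    """Quadratic product U_a b = aba in a special Jordan algebra."""
    return a @ b @ a

def V(a, b, c):
    """Triple product V_{a,b}(c) = {a,b,c} = abc + cba."""
    return a @ b @ c + c @ b @ a

one = np.eye(n)
assert np.allclose(U(one, x), x)                        # U_1 = 1_J
assert np.allclose(U(x, V(y, x, z)), V(x, y, U(x, z)))  # Commuting Formula
assert np.allclose(U(U(x, y), z), U(x, U(y, U(x, z))))  # Fundamental Formula
print("quadratic Jordan axioms verified")
```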
2.2. Jordan Triples. The first Jordan stream to branch off from algebras was Jordan triple systems, whose algebraic study was initiated by Max Koecher's student Kurt Meyberg in 1969 in the process of generalizing the Tits-Kantor-Koecher construction of Lie algebras (to which we will return when we discuss applications to Lie theory). Jordan triples are basically Jordan algebras with the unit thrown away, so there is no square or bilinear product, only a Jordan triple product {x,y,z} which is symmetric and satisfies the 5-linear Jordan identity:

{x,y,z} = {z,y,x},
{x,y,{u,v,w}} = {{x,y,u},v,w} − {u,{y,x,v},w} + {u,v,{x,y,w}}.

A space with such a triple product is called a linear Jordan triple. This theory worked only in the presence of a scalar 1/6; in 1972 Meyberg gave axioms for quadratic Jordan triple systems which worked smoothly for arbitrary scalars. These were based on a quadratic product P_x(y) (the operators U, V are usually denoted P, L in triples) satisfying the Shifting Formula, Commuting Formula, and Fundamental Formula:

L_{P_x(y),y} = L_{x,P_y(x)},   P_x L_{y,x} = L_{x,y} P_x,   P_{P_x(y)} = P_x P_y P_x.

Any Jordan algebra J gives rise to a Jordan triple T(J) by applying the forgetful functor, throwing away the unit and square and setting P_x := U_x, L_{x,y} := V_{x,y}. In particular, any associative algebra A gives rise to a Jordan triple T(A) := T(A⁺) via P_x(y) := xyx, {x,y,z} := xyz + zyx. A triple is special if it can be imbedded as a sub-triple of some T(A), otherwise it is exceptional. An important example of a Jordan triple which doesn't come from a bilinear product consists of
the rectangular matrices M_{pq}(R) under {x,y,z} := x y^tr z + z y^tr x; if p ≠ q there is no natural way to multiply two p×q matrices to get a third p×q matrix. Taking rectangular 1×2 matrices M_{12}(K) = K·E_{11} + K·E_{12} ≅ K² over the Cayley algebra K gives an exceptional 16-dimensional bi-Cayley triple (so-called because it is obtained by gluing together two copies of the Cayley algebra). Experience has shown that the archetype for Jordan triple systems is T(A,*), obtained from an associative algebra A with involution * via

P_x(y) := x y* x,   {x,y,z} := x y* z + z y* x.

The presence of the involution on the middle factor warns us to expect reversals in that position. It turns out that a triple is special iff it can be imbedded as a sub-triple of some T(A,*), so that either model T(A) or T(A,*) can be used to define speciality.
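The rectangular triple product can be tested numerically; the sketch below (numpy, with an arbitrary 2×3 shape and seed) verifies outer symmetry and the 5-linear identity for {x,y,z} = x y^tr z + z y^tr x:

```python
import numpy as np

rng = np.random.default_rng(2)
p, q = 2, 3   # genuinely rectangular: no bilinear product is available

def T(x, y, z):
    """Triple product {x,y,z} = x y^tr z + z y^tr x on p-by-q matrices."""
    return x @ y.T @ z + z @ y.T @ x

x, y, u, v, w = (rng.standard_normal((p, q)) for _ in range(5))

assert np.allclose(T(x, y, w), T(w, y, x))  # outer symmetry {x,y,z} = {z,y,x}
# 5-linear identity {x,y,{u,v,w}} = {{x,y,u},v,w} - {u,{y,x,v},w} + {u,v,{x,y,w}}
lhs = T(x, y, T(u, v, w))
rhs = T(T(x, y, u), v, w) - T(u, T(y, x, v), w) + T(u, v, T(x, y, w))
assert np.allclose(lhs, rhs)
print("rectangular Jordan triple verified")
```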
2.3. Jordan Pairs. The second stream to branch off from algebras flows from the same source, the Tits-Kantor-Koecher construction of Lie algebras. Building on an offhand remark of Meyberg that the entire TKK construction would work for "verbundene Paare," two independent spaces J⁺, J⁻ acting on each other like Jordan triples, a full-grown theory of Jordan pairs sprang from Loos' mind in 1974. Linear Jordan pairs are pairs V = (V⁺, V⁻) of spaces with trilinear products {x⁺, u⁻, y⁺} ∈ V⁺ and {u⁻, x⁺, v⁻} ∈ V⁻ (but incestuous triple products containing adjacent terms from the same space are strictly forbidden!) such that both products are symmetric and satisfy the 5-linear identity. Quadratic Jordan pairs have a pair (Q⁺, Q⁻) of quadratic products Q^ε_{x^ε}(u^{−ε}) ∈ V^ε (ε = ±) satisfying the 3 quadratic Jordan triple axioms (the operators P, L are usually denoted Q, D in Jordan pairs). Every Jordan triple J can be doubled to produce a Jordan pair V(J) = (J, J), V^ε := J under Q^ε_x := P_x. The double of rectangular matrices M_{pq}(F) could be more naturally viewed as a pair (M_{pq}(F), M_{qp}(F)); more generally, for any two vector spaces V, W over a field F we have a "rectangular" pair (Hom_F(V,W), Hom_F(W,V)) of different spaces under the products xux, uxu, making no reference to a transpose. This also provides an example to show that pairs are more than doubled triples. In finite dimensions all semisimple Jordan pairs have the form V(J) for a Jordan triple system J, in particular V⁺ and V⁻ have the same dimension, but this is quite accidental and ceases to be true in infinite dimensions. Indeed, for a vector space W over F of infinite dimension d the rectangular pair V := (Hom_F(W,F), Hom_F(F,W)) = (W*, W) has dim(W*) = |F|^d ≥ 2^d > d = dim(W).
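A minimal numerical illustration of a rectangular pair (shapes chosen arbitrarily for the sketch): the two quadratic products Q_x(u) = xux and Q_u(x) = uxu each land back in their own space even though the two spaces have different dimensions, and the fundamental formula threads across the pair:

```python
import numpy as np

rng = np.random.default_rng(3)
p, q = 2, 5   # V+ = 2x5 matrices, V- = 5x2 matrices: different dimensions

x = rng.standard_normal((p, q))   # x in V+
u = rng.standard_normal((q, p))   # u in V-
v = rng.standard_normal((q, p))   # v in V-

Qx_u = x @ u @ x    # Q+_x(u) = xux lands back in V+
Qu_x = u @ x @ u    # Q-_u(x) = uxu lands back in V-
assert Qx_u.shape == (p, q) and Qu_x.shape == (q, p)

# Fundamental formula across the pair: Q_{Q_x(u)}(v) = Q_x Q_u Q_x (v)
assert np.allclose(Qx_u @ v @ Qx_u, x @ (u @ (x @ v @ x) @ u) @ x)
print("rectangular Jordan pair verified")
```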
The perspective of Jordan pairs has clarified many aspects of the theory of Jordan triples and algebras.
2.4. Jordan Superalgebras. The third main branching of the Jordan River leads to Jordan superalgebras, introduced by KKK (Victor Kac, Issai Kantor, Irving Kaplansky). Once more this river springs from a physical source: Jordan superalgebras are dual to the Lie superalgebras invented by physicists to provide a formalism to encompass supersymmetry, handling bosons and fermions in one algebraic system. A Lie superalgebra is a Z₂-graded algebra L = L₀ ⊕ L₁ where L₀ is a Lie algebra and L₁ an L₀-module with a "Jordan-like" product into L₀. Dually, a Jordan superalgebra is a Z₂-graded algebra J = J₀ ⊕ J₁ where J₀ is a Jordan algebra and J₁ a J₀-bimodule with a "Lie-like" product into J₀. For example, any Z₂-graded associative algebra A = A₀ ⊕ A₁ becomes a Lie superalgebra under the graded Lie product [x_i, y_j] = x_i y_j − (−1)^{ij} y_j x_i (reducing to the Lie bracket xy − yx if at least one factor is even, but to the Jordan brace xy + yx if both i, j are odd), and dually becomes a Jordan superalgebra under the graded Jordan brace {x_i, y_j} = x_i y_j + (−1)^{ij} y_j x_i (reducing to the Jordan brace xy + yx if at least one factor is even, but to the Lie bracket xy − yx if both factors are odd). Jordan superalgebras shed light on Pchelintsev Monsters, a strange class of prime Jordan algebras which seem genetically unrelated to all normal Jordan algebras. We have indicated how these branches of the Jordan river had their origin in an outside impetus. We now want to indicate how the branches, in turn, have influenced and enriched various areas of mathematics.
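The sign rule can be watched in the smallest interesting example, M₂(R) graded by its diagonal (even) and off-diagonal (odd) parts; the particular matrices below are arbitrary illustrative choices:

```python
import numpy as np

# M_2(R) graded with diagonal matrices even (degree 0), off-diagonal odd (degree 1)
e = np.diag([1.0, -2.0])                 # even element
a = np.array([[0., 3.], [5., 0.]])       # odd element
b = np.array([[0., 7.], [-1., 0.]])      # odd element

def superbrace(x, i, y, j):
    """Graded Jordan brace {x_i, y_j} = x_i y_j + (-1)^(ij) y_j x_i."""
    return x @ y + (-1) ** (i * j) * y @ x

# even-odd: the ordinary Jordan brace xy + yx, landing in the odd part
m = superbrace(e, 0, a, 1)
assert np.allclose(np.diag(m), 0)
# odd-odd: the sign flips, so the "brace" is really the Lie bracket xy - yx,
# landing back in the even (diagonal) part
n = superbrace(a, 1, b, 1)
assert np.allclose(n, a @ b - b @ a)
assert np.allclose(n - np.diag(np.diag(n)), 0)
print("graded brace signs verified")
```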
3. Links with Lie Algebras and Groups

The Jordan river flows parallel to the Lie river with its extended family of Lie systems (algebras, triples, and superalgebras), and an informed view of Jordan algebras must also reflect awareness of the Lie connections. Historically, the first connection of Jordan algebras to another area of mathematics was to the theory of Lie algebras, and the remarkably fruitful interplay between Jordan and Lie theory continues to generate interest in Jordan algebras. Jordan algebras played a role in Zel'manov's celebrated solution of the Restricted Burnside Problem, for which he was awarded a Fields
Medal in 1994. That problem, about finiteness of finitely-generated torsion groups, could be reduced to a problem in certain Lie p-rings, which in turn could be illuminated by a natural Jordan algebra structure of characteristic p. The most difficult case, when the characteristic was p = 2, could be most clearly settled making use of a quadratic Jordan structure. Lie algebras in characteristic 2 are weak, pitiable things; linear Jordan algebras aren't much better, since the Jordan product xy + yx is indistinguishable from the Lie bracket xy − yx in that case. The crucial extra product xyx of the quadratic Jordan theory was the key which turned the tide of battle.
3.1. Lies. Recall that a Lie algebra is a linear algebra with product [x,y] which is anti-commutative and satisfies the Jacobi Identity:

[x,y] = −[y,x],   [x,[y,z]] + [y,[z,x]] + [z,[x,y]] = 0.
Just as any associative algebra A becomes a linear Jordan algebra A⁺ under the anticommutator or Jordan brace {x,y} = xy + yx, it also becomes a Lie algebra A⁻ under the commutator or Lie bracket [x,y] := xy − yx. Unlike the Jordan case, all Lie algebras (at least those which are free as modules, e.g. algebras over a field) are special in the sense of arising as a commutator-closed subspace of some A⁻. A⁻ contains many subspaces which are closed under the Lie bracket but not under the ordinary product; any such subspace produces a Lie subalgebra which need not be of the form E⁻ for any associative subalgebra E of A. There are 3 main ways of singling out such subspaces. The first is by means of the trace: if A is the algebra of n×n matrices over a field, or more abstractly all linear transformations on a finite-dimensional vector space, then the subspace of elements of trace zero is closed under brackets, since the Lie bracket T₁T₂ − T₂T₁ of any two transformations has trace zero, due to symmetry tr(T₁T₂) = tr(T₂T₁) of the trace function. The second way is by means of an involution: in general, for any involution * on an associative algebra A the subspace Skew(A,*) of skew elements x* = −x is closed under brackets, since [x₁,x₂]* = [x₂*,x₁*] = [−x₂,−x₁] = [x₂,x₁] = −[x₁,x₂]. If a finite-dimensional vector space V carries a nondegenerate symmetric or skew bilinear form, the process ⟨T(v),w⟩ = ⟨v,T*(w)⟩ of moving a transformation from one side of the bilinear form to the other induces an adjoint involution * on the linear transformations A = End(V), and the skew transformations T satisfying ⟨T(v),w⟩ = −⟨v,T(w)⟩ form a Lie subalgebra.
The third method is by means of derivations: in the associative algebra A = End(C) of all linear transformations on an arbitrary linear algebra C (not necessarily associative or commutative), the subspace Der(C) of derivations of C (linear transformations satisfying the product rule D(xy) = D(x)y + xD(y)) is closed under brackets, due to symmetry in D₁, D₂ of (D₁D₂)(xy) − (D₁D₂)(x)y − x(D₁D₂)(y) = D₁(D₂(x)y + xD₂(y)) − D₁(D₂(x))y − x(D₁(D₂(y))) = D₂(x)D₁(y) + D₁(x)D₂(y). The exceptional Jordan algebra (the Albert algebra), the exceptional composition algebra (the Cayley or octonion algebra), and the 5 exceptional Lie algebras are enduring features of the mathematical landscape, and they are all genetically related. Algebraists first became interested in the newborn Albert algebra through its unexpected connections with exceptional Lie algebras and groups. The 4 Great Classes of simple Lie algebras (resp. groups) An, Bn, Cn, Dn consist of matrices of trace 0 (resp. determinant 1) or skew T* = −T (resp. isometric T* = T⁻¹) with respect to a nondegenerate symmetric or skew-symmetric bilinear form. The 5 exceptional Lie algebras and groups G2, F4, E6, E7, E8 appearing mysteriously in the 19th-century Cartan-Killing classification were originally defined in terms of a multiplication table over an algebraically closed field. When these were found in the 1930's, 40's, and 50's to be describable in an intrinsic coordinate-free manner using the Albert algebra A and Cayley algebra K, it became possible to study them over general fields. The Lie algebra (resp. group) G2 of dimension 14 arises as the derivation algebra (resp. automorphism group) of K; F4 of dimension 52 arises as the derivation algebra (resp. automorphism group) of A; E6 arises by reducing the structure algebra Strl(A) = L(A) + Der(A) (resp. structure group U(A)·Aut(A)) of A to get Strl₀(A) = L(A₀) + Der(A) of dimension (27 − 1) + 52 = 78 (the subscript 0 indicates trace zero elements); E7 arises from the Tits-Kantor-Koecher construction TKK(A) = A ⊕ Strl(A) ⊕ A (resp. TKK group) of A, of dimension 27 + 79 + 27 = 133, while E8 of dimension 248 arises in a more complicated manner from A and K by a process due to Jacques Tits. We first turn to this construction.
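All three bracket-closure mechanisms are easy to confirm numerically. In the sketch below (numpy; the sizes, seed, and the choice C = (R³, cross product), whose derivation algebra is the skew matrices so(3), are illustrative assumptions, not from the text):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 3

def bracket(a, b):
    return a @ b - b @ a

# 1) trace: tr[T1,T2] = tr(T1 T2) - tr(T2 T1) = 0, so trace-zero matrices are closed
t1, t2 = rng.standard_normal((n, n)), rng.standard_normal((n, n))
assert abs(np.trace(bracket(t1, t2))) < 1e-10

# 2) involution: for the transpose involution, skew matrices (x^T = -x) are closed
x1 = rng.standard_normal((n, n)); x1 -= x1.T
x2 = rng.standard_normal((n, n)); x2 -= x2.T
assert np.allclose(bracket(x1, x2).T, -bracket(x1, x2))

# 3) derivations: for C = (R^3, cross product) the derivations are exactly so(3),
#    and the bracket of two derivations is again a derivation
d1 = rng.standard_normal((n, n)); d1 -= d1.T
d2 = rng.standard_normal((n, n)); d2 -= d2.T
x, y = rng.standard_normal(3), rng.standard_normal(3)

def is_derivation(d):
    return np.allclose(d @ np.cross(x, y), np.cross(d @ x, y) + np.cross(x, d @ y))

assert is_derivation(d1) and is_derivation(d2) and is_derivation(bracket(d1, d2))
print("all three Lie subspaces verified")
```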
3.2. The Freudenthal-Tits Magic Square. Jacques Tits discovered in 1966 a general construction of a Lie algebra FT(C,J), starting from a composition algebra C and a Jordan algebra J of "degree 3," which produces E8 when J is the Albert algebra and C the Cayley algebra. Varying the possible ingredients leads to a square arrangement that had been noticed earlier by Hans Freudenthal. The Freudenthal-Tits Magic Square FT(C,J):
    C \ J |  R    H3(R)   H3(C)    H3(H)   H3(K)
    ------+---------------------------------------
      R   |  0     A1      A2       C3      F4
      C   |  0     A2     A2⊕A2     A5      E6
      H   |  A1    C3      A5       D6      E7
      K   |  G2    F4      E6       E7      E8

Some have contested whether this is square, but no one has ever contested that it is magic. The ingredients for this Lie recipe are a composition algebra C (R, C, H, K of dimension 1, 2, 4, 8) and a Jordan algebra J (either R or H3(D) for D = R, C, H, K, of dimension either 1 or 6, 9, 15, 27). The recipe creates a space FT(C,J) := Der(C) ⊕ (C₀ ⊗ J₀) ⊕ Der(J) (again the subscript 0 indicates trace zero elements) with a complicated Lie product requiring a scalar 1/6. For example, the lower right corner of the table produces a Lie algebra of dimension 14 + (7 × 26) + 52 = 248, which is precisely the dimension of E8.

3.3. The Tits-Kantor-Koecher Construction. In the late 60's Issai Kantor and Max Koecher independently showed how to build a Lie algebra TKK(J) := L₁ ⊕ L₀ ⊕ L₋₁ with "short 3-grading" [L_i, L_j] ⊆ L_{i+j} by taking L_{±1} to be two copies of any Jordan algebra J glued together by the inner structure Lie algebra L₀ = Instrl(J) := V_{J,J} spanned by the V-operators (note that the 5-linear elemental identity {x,y,{u,v,w}} = {{x,y,u},v,w} − {u,{y,x,v},w} + {u,v,{x,y,w}} becomes the operator identity [V_{x,y}, V_{u,v}] = V_{{x,y,u},v} − V_{u,{y,x,v}} acting on the element w, showing that the inner structure algebra is closed under the Lie bracket). Thus we have a space TKK(J) := J₁ ⊕ Instrl(J) ⊕ J₋₁ (where the J_i are copies of the module J under cloning maps x ↦ x_i) with an anticommutative bracket determined by

[T, x₁] = T(x)₁,   [T, y₋₁] = −T*(y)₋₁,   [x_i, y_i] = 0 (i = ±1),   [x₁, y₋₁] = V_{x,y},   [T₁, T₂] = T₁T₂ − T₂T₁

for T ∈ Instrl(J), x, y ∈ J; the natural involution V*_{x,y} = V_{y,x} on Instrl(J) extends to an involution (x, T, y) ↦ (y, T*, x) on all of L.
Later it was noticed that this was a special case of a 1953 construction by Jacques Tits of a Lie algebra L = (J ⊗ L₀) ⊕ Inder(J), built out of a Jordan algebra J and a simple 3-dimensional Lie algebra L₀ of type A1, with bracket determined by

[D, x ⊗ ℓ] = D(x) ⊗ ℓ,   [D₁, D₂] = D₁D₂ − D₂D₁,   [x ⊗ ℓ, y ⊗ m] = (x•y) ⊗ [ℓ,m] + (1/8) γ(ℓ,m) D_{x,y},

for D ∈ Inder(J) spanned by all D_{x,y} := V_{x,y} − V_{y,x} = [V_x, V_y], x, y ∈ J, ℓ, m ∈ L₀, and γ the Killing form γ(ℓ,m) = tr(ad(ℓ)ad(m)) on L₀. The archetypal
example of such an L₀ is a split sl₂, the special linear algebra of trace 0 endomorphisms of a 2-dimensional vector space V = F², spanned by the 3 transformations e = E₁₂, f = E₂₁, h = E₁₁ − E₂₂ with

[e,f] = h,   [h,e] = 2e,   [h,f] = −2f.

Since always V_{J,J} = V_J ⊕ Inder(J), in the special case where L₀ = sl₂ the map (x ⊗ e + y ⊗ f + z ⊗ h) ⊕ D ↦ x₁ ⊕ (V_z + D) ⊕ y₋₁ is an isomorphism sending e, f, h to 1₁, 1₋₁, V₁ = 2·1_J, and the Tits algebra L is a clone of TKK(J). The Tits-Kantor-Koecher Construction is not only intrinsically important, it is historically important because it gave birth to two streams in Jordan theory. The Jacobi identity for TKK(J) to be a Lie algebra boils down to outer-symmetry and the 5-linear identity for the Jordan triple product. This observation led Meyberg to take these two conditions as the axioms for a new algebraic system, a Jordan triple system, and he showed that the Tits-Kantor-Koecher construction L = J ⊕ Strl(J) ⊕ J produced a graded Lie algebra with reversal involution x ⊕ T ⊕ y ↦ y ⊕ T* ⊕ x iff J was a linear Jordan triple system. This was the first Jordan stream to branch off the main line. The second stream branches off from the same source, the TKK construction. Loos formulated the axioms for Jordan pairs V = (V₁, V₋₁) (a pair of spaces V₁, V₋₁ acting on each other like Jordan triples), and showed they are precisely what is needed in the TKK construction of Lie algebras: L = V₁ ⊕ Inder(V) ⊕ V₋₁ produces a graded Lie algebra iff V = (V₁, V₋₁) is a linear Jordan pair. Jordan triples arise precisely from pairs with involution, and Jordan algebras arise from pairs where the grading and involution come from a little sl₂ = {e, f, h} = {1₁, 1₋₁, 2(1_J)}. Thus Jordan systems arise naturally as "coordinates" for graded Lie algebras, leading to the dictum of Kantor: "There are no Jordan algebras, there are only Lie algebras."
Of course, this can be turned around: 9 times out of 10, when you open up a Lie algebra you find a Jordan algebra inside which makes it tick.
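The sl₂ relations used above can be confirmed directly with the matrix units (a trivial but reassuring check):

```python
import numpy as np

e = np.array([[0., 1.], [0., 0.]])   # E12
f = np.array([[0., 0.], [1., 0.]])   # E21
h = np.array([[1., 0.], [0., -1.]])  # E11 - E22

def br(a, b):
    """Lie bracket [a,b] = ab - ba."""
    return a @ b - b @ a

assert np.allclose(br(e, f), h)
assert np.allclose(br(h, e), 2 * e)
assert np.allclose(br(h, f), -2 * f)
# Jacobi identity on the generators
assert np.allclose(br(e, br(f, h)) + br(f, br(h, e)) + br(h, br(e, f)), 0)
print("sl2 relations verified")
```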
4. Links with Differential Geometry

Though mathematical physics gave birth to Jordan algebras and superalgebras, and Lie algebras gave birth to Jordan triples and pairs, differential geometry has had a more pronounced influence on the algebraic development of Jordan theory than any other mathematical discipline. Investigations of the role played by Jordan systems in differential geometry have revealed new perspectives on purely algebraic features of the subject. We now indicate what Jordan algebras were doing in such a strange landscape.
4.1. Inverses and Isotopy. For the first application we need to say a few words about inverses in unital Jordan algebras. An element x is invertible in J iff the operator U_x is an invertible operator on J, in which case its inverse is the operator U_x⁻¹ = U_{x⁻¹} of the inverse element x⁻¹ := U_x⁻¹(x). In special Jordan algebras J ⊆ A⁺ an element x is invertible iff it is invertible in A and its associative inverse x⁻¹ lies in J, in which case x⁻¹ is also the Jordan inverse. An important concept in Jordan algebras is that of isotopy. The fundamental tenet of isotopy is the belief that all invertible elements of a Jordan algebra have an equal entitlement to serve as unit element. If u is an invertible element of an associative algebra A, we can form a new associative algebra, the associative isotope A_u, with new product, unit, and inverse given by

x ·_u y := x u⁻¹ y,   1_u := u,   x^(−1,u) := u x⁻¹ u.

We can do the same thing in any Jordan algebra: the Jordan isotope J^[u] has new bullet, quadratic, and triple products

x •^[u] y := ½{x, u⁻¹, y},   U^[u]_x := U_x U_{u⁻¹},   {x,y,z}^[u] := {x, U_{u⁻¹} y, z},

and new unit and inverses 1^[u] := u, x^(−1,u) = U_u x⁻¹. Thus each invertible u is indeed the unit in its own isotope. As an example of the power of isotopy, consider the Hua Identity (x + x y⁻¹ x)⁻¹ + (x + y)⁻¹ = x⁻¹ in associative division algebras, which plays an important role in the study of projective lines. It is not hard to get bogged down trying to verify the identity directly, but for x = 1 the "Weak Hua Identity" (1 + y⁻¹)⁻¹ + (1 + y)⁻¹ = 1 has a "third-grade proof":

1/(1 + y⁻¹) + 1/(1 + y) = y/(y + 1) + 1/(1 + y) = (y + 1)/(1 + y) = 1.

Using the concept of isotopy, we can bootstrap the commutative third-grade proof into a non-commutative graduate-school proof: taking Weak Hua in the isotope A_x gives (1_x + y^(−1,x))^(−1,x) + (1_x + y)^(−1,x) = 1_x, which becomes, in the original algebra, using the above formulas,

x(x + x y⁻¹ x)⁻¹ x + x(x + y)⁻¹ x = x.
Applying U_x⁻¹ to both sides to cancel x fore and aft gives us Strong Hua:

(x + x y⁻¹ x)⁻¹ + (x + y)⁻¹ = x⁻¹.

One consequence of the Hua Identity is that the U-product can be built out of translations and inversions. T.A. Springer later used this to give an axiomatic description of Jordan algebras just in terms of the operation of inversion, which we may loosely describe by saying an algebra is Jordan iff its inverse is given by the geometric series in all isotopes:

(1 − x)^(−1,u) = Σ_{n=0}^∞ x^(n,u).

This formula also suggests why the inverse might encode all the information about the Jordan algebra: it contains information about all the powers of an element, and from x^(2,u) = U_x u⁻¹ for u = 1 we can recover the square, hence the bullet product.
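Both Strong Hua and the isotope formalism can be checked numerically in an associative matrix algebra. A sketch (the diagonally-dominant random matrices are an arbitrary device to keep everything we invert invertible):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4
inv = np.linalg.inv
x = rng.standard_normal((n, n)) + n * np.eye(n)
y = rng.standard_normal((n, n)) + n * np.eye(n)
u = rng.standard_normal((n, n)) + n * np.eye(n)

# Strong Hua: (x + x y^-1 x)^-1 + (x + y)^-1 = x^-1
assert np.allclose(inv(x + x @ inv(y) @ x) + inv(x + y), inv(x))

# associative isotope A_u: product a ._u b = a u^-1 b, unit u, inverse u x^-1 u
prod_u = lambda a, b: a @ inv(u) @ b
x_inv_u = u @ inv(x) @ u
assert np.allclose(prod_u(x, x_inv_u), u)   # x ._u x^(-1,u) = 1_u = u
print("Hua identity and isotope inverse verified")
```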
5. Links with the Real World

5.1. Riemannian Symmetric Spaces. A Riemannian manifold is a smooth (C^∞) manifold M carrying a Riemannian metric, a smoothly-varying positive-definite inner product ⟨·,·⟩_p on the tangent space T_p(M) to the manifold at each point p. For simplicity we will assume all our manifolds are connected. An isomorphism f of Riemannian manifolds is a diffeomorphism of smooth manifolds whose differential df is isometric on each tangent space, i.e. preserves the inner product: ⟨df_p(u), df_p(v)⟩_{f(p)} = ⟨u,v⟩_p. Recall that the differential df_p lives on the tangent space T_p(M), with values df_p(v) = ∂_v f|_p on v representing the directional derivative of f in the direction of the tangent vector v at the point p. In such manifolds we can talk of geodesics (paths of "locally shortest length"), and at p there is a unique geodesic curve γ_{p,v}(t) (defined for t in a neighborhood of 0) passing through p at t = 0 with tangent vector v. At each point p there is a neighborhood on which the geodesic symmetry γ_{p,v}(t) ↦ γ_{p,v}(−t) is an involutive local diffeomorphism [in general not an isometry with respect to the Riemannian metric] having p as isolated fixed point. The exponential map exp_p maps a neighborhood of 0 in T_p(M) down onto a neighborhood of p in the manifold via v ↦ γ_{p,v}(1), so that the straight line segments through the origin in the tangent space go into the local geodesics through p. In case M is complete (the geodesics γ_{p,v}(t) are defined for all t), exp_p maps all of T_p(M) down onto all of M.
A Riemannian symmetric space is a Riemannian manifold M having at each point p a symmetry s_p (an involutive global isometry of the manifold having p as isolated fixed point, equivalently having differential −1 at p). The existence of such symmetries is a very strong condition on the manifold. For example, it forces M to be complete and the group of all isometries of M to be a real Lie group G acting transitively on M; in particular it forces the manifold to be real analytic instead of merely smooth. Then the symmetries s_p are unique and induce the geodesic symmetry about the point p [s_p(exp_p(v)) = exp_p(−v)]. Note that by uniqueness any isometry g ∈ G conjugates the symmetry s_p at a point p to the symmetry g ∘ s_p ∘ g⁻¹ at the point g(p). Thus we can rephrase the condition for symmetry for M as: (i) there is a symmetry s_{p₀} at one point p₀, (ii) isometries act transitively on M. The subgroup K of G fixing a point p is compact, and M is isomorphic to G/K. This leads to the traditional Lie-theoretic approach to symmetric spaces.
5.2. Self-dual Cones. Certain symmetric spaces can be described more fruitfully using Jordan algebras. For instance, there is a 1-1 correspondence between the self-dual open homogeneous cones in R^n and the n-dimensional formally-real Jordan algebras, wherein the geometric structure is intimately connected with the Jordan algebraic structure living in the tangent space. A subset C of a real vector space V is convex if it is closed under convex combinations [tx + (1−t)y ∈ C for all x, y ∈ C and 0 ≤ t ≤ 1], and therefore connected. A set C is a cone if it is closed under positive dilations [tx ∈ C for all x ∈ C and t > 0]. An open convex cone is regular if it contains no affine lines. The dual of a regular open convex cone C is the cone C* := {ℓ ∈ V* | ℓ(C) > 0} of functionals which are strictly positive on the original cone. For example, for 0 < θ < 2π the wedge of angle θ given by W_θ := {z ∈ C | −θ/2 < arg(z) < θ/2} is an open cone in R² ≅ C, which is convex iff θ ≤ π and is regular iff θ < π. A positive-definite bilinear form σ on V allows us to identify V* with V and C* with the cone {y ∈ V | σ(y, C) > 0} in V, and we say a cone is self-dual with respect to σ if under this identification it coincides with its dual. For example, the dual of the above wedge W_θ with respect to the usual inner product on R² is W*_θ = W_{π−θ}, so W_θ is self-dual with respect to σ only for θ = π/2.

5.3. Formally-Real Jordan Algebras. The formally-real Jordan algebras investigated by Jordan, von Neumann, and Wigner are precisely the finite-dimensional real unital Jordan algebras such that every element has a unique spectral decomposition x = λ₁e₁ + ... + λᵣeᵣ for
an increasing family λ₁ < ... < λᵣ of distinct real eigenvalues and a supplementary family 1 = e₁ + ... + eᵣ of orthogonal idempotents in J [meaning e_i² = e_i and e_i • e_j = 0 for i ≠ j; the archetypal example is the diagonal matrix units e_i = E_{ii} in M_n(F)]. This family of eigenvalues, the spectrum of x, is uniquely determined as Sp(x) = {λ ∈ C | λ1 − x is not invertible in the complexification J_C}. In analogy with the Jordan canonical form for matrices (Camille, not Pascual), we can say that the elements all have a diagonal Jordan form with real eigenvalues. It is easy to see this condition is equivalent to formal-reality, x² + y² = 0 ⟹ x = y = 0: if x² = Σ_k λ_k² e_k with spectrum {λ_k²} agrees with −y² = Σ_ℓ (−μ_ℓ²) f_ℓ with spectrum {−μ_ℓ²}, then all λ_k, μ_ℓ must be 0 and hence x = y = 0; conversely, if some eigenvalue λ_k = α_k + iβ_k has β_k ≠ 0, then x̃² + ỹ² = 0 for x̃ := U_{e_k} x − α_k e_k = iβ_k e_k, ỹ := β_k e_k ∈ J. The spectral decomposition leads to a functional calculus: every real-valued function f(t) defined on a subset S ⊆ R induces a map on those x ∈ J having Sp(x) ⊆ S via f(x) = Σ_k f(λ_k) e_k. In particular, the invertible elements are those with Sp(x) ⊆ S = R∖{0}, and f(t) = 1/t induces inversion f(x) = x⁻¹ = Σ_k λ_k⁻¹ e_k. [When f is the discontinuous function defined on all of S = R by f(t) = 1/t for t ≠ 0 and f(0) = 0, then f(x) is the generalized or pseudo-inverse of any x.] If S is open and f(t) is continuous, smooth, or given by a power series as a function of t, then so is f(x) as a function of x. It is clear that the set of positive elements (those x with positive spectrum Sp(x) ⊆ R⁺) coincides with the set of invertible squares x² = Σ_k λ_k² e_k and also with the exponentials exp(x) = Σ_{n=0}^∞ xⁿ/n!, and can (with some effort) be described as the connected component of the identity element in the set J⁻¹ of invertible elements. The set of positive elements is called the positive cone Cone(J) of the formally-real Jordan algebra J.

Theorem. The positive cone C := Cone(J) of an n-dimensional formally-real Jordan algebra J is an open regular convex cone in J ≅ R^n which is self-dual with respect to the positive definite bilinear trace form σ(x,y) := tr(V_{x•y}) = tr(V_{x,y}). The linear operators U_x for x ∈ C generate a group G of linear transformations acting transitively on C. Since C is a connected open subset of J, it is naturally a smooth n-dimensional manifold, and we may identify the tangent space T_p(C) with J; taking ⟨x,y⟩_p := σ(U_p⁻¹ x, y) at each point p gives a G-invariant Riemannian metric on C. Thus the positive cone Cone(J) of a formally-real Jordan algebra is in a canonical way a homogeneous Riemannian manifold.
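For H_n(R) the spectral decomposition and functional calculus are exactly the eigendecomposition of a symmetric matrix. A brief numpy sketch (size and seed arbitrary; the chosen x generically has nonzero eigenvalues):

```python
import numpy as np

rng = np.random.default_rng(6)
a = rng.standard_normal((4, 4))
x = a + a.T                      # an element of the formally-real algebra H_4(R)

lam, Q = np.linalg.eigh(x)       # x = sum_k lam_k e_k with e_k built from eigenvectors

def f_of_x(f):
    """Functional calculus f(x) = sum_k f(lam_k) e_k via the eigendecomposition."""
    return Q @ np.diag(f(lam)) @ Q.T

assert np.allclose(f_of_x(lambda t: t), x)
# f(t) = 1/t recovers the inverse (0 is not in the spectrum for this x)
assert np.allclose(f_of_x(lambda t: 1 / t), np.linalg.inv(x))
# f(t) = e^t lands in the positive cone: its spectrum is strictly positive
expx = f_of_x(np.exp)
assert np.all(np.linalg.eigvalsh(expx) > 0)
print("spectral functional calculus verified")
```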
The inversion map j: x ↦ x⁻¹ induces a diffeomorphism of J⁻¹ of period 2 leaving C invariant and having there a unique fixed point 1 [the fixed points of the inversion map are the e − f for supplementary orthogonal idempotents e + f = 1, and those with f ≠ 0 lie in the other connected components of J⁻¹], and provides a symmetry of the Riemannian manifold C at p = 1; here the exponential map is the ordinary algebraic exponential exp_1(x) = e^x from T_1(M) = J to Cone(J), and negation x ↦ −x in the tangent space projects to inversion e^x ↦ e^{−x} = (e^x)⁻¹ on the manifold. The Jordan product and U-operator arise from the inversion symmetry s_1 = j via dj_x = −U_x⁻¹ and d²j_1(u, v) = ∂_u ∂_v j|_1 = V_u(v) = 2 u∘v. If we choose a basis {x_k} for J, then the Christoffel symbols determining the affine connection at 1 are just the multiplication constants for the algebra: x_i ∘ x_j = Σ_k Γ_{ij}^k x_k. Any other point p in Cone(J) can be considered the unit element in its own algebraic system, the isotope J^[p]. J^[p] has the same invertible elements as J, and has the same connected component of the identity since by choice p lies in the same connected component as 1, so Cone(J^[p]) = Cone(J). Therefore the manifold has a symmetry at the point p given by x ↦ x^[−1,p], the exponential map is exp_p(x) = e^{[x,p]}, and the Christoffel symbols are just the multiplication constants of J^[p]: x_i ∘_p x_j = Σ_k Γ_{ij}^k[p] x_k. Thus every point is an isolated fixed point of a symmetry, given by inversion in a Jordan isotope, and the self-dual positive cone Cone(J) becomes a Riemannian symmetric space. Rather than isotopy, we noted above that we can use transitivity to create the symmetry at p = g(1) once we have one at 1. The structure group Str(J) of the Jordan algebra is a real Lie group leaving the set of invertible elements invariant and having isotropy group at 1 precisely the automorphism group Aut(J).
Its connected component G := Str(J)₀ of the identity leaves the cone C invariant, and acts transitively because already the linear transformations U_c (c ∈ C) belong to G and every positive p = Σ_k λ_k e_k ∈ C (λ_k > 0) has the form p = c² = U_c(1) for positive c = √p = Σ_k √λ_k e_k ∈ C. Thus C ≅ G/K for the isotropy group K = G ∩ Aut(J) of G at the identity (the structural transformations which fix 1 are precisely the automorphisms of J). Aut(J) is compact since it leaves invariant the positive definite inner product σ(x, y) := tr(V_{x,y}) and thus lives in the (compact) orthogonal group of σ, so K is compact too; K is also connected, since a standard argument from simple connectivity of C shows K = Aut(J)₀. We get a G-invariant metric on C via ⟨U_c x, U_c y⟩_{U_c(1)} := σ(x, y), so ⟨x, y⟩_p := σ(U_c⁻¹ x, U_c⁻¹ y) = σ(U_c⁻² x, y) = σ(U_p⁻¹ x, y) for all points p = U_c(1) = c² ∈ C.
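The transitivity mechanism p = U_c(1) = c², and the fact that in a special Jordan algebra the quadratic operator U_x y = 2x∘(x∘y) − x²∘y collapses to the associative product x y x, can be checked numerically; this Python sketch (helper names ours) assumes nothing beyond the formulas above.

```python
# A minimal sketch: matrices under the Jordan product x∘y = (xy + yx)/2.
def mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def add(A, B):
    return [[A[i][j] + B[i][j] for j in range(len(A))] for i in range(len(A))]

def scale(t, A):
    return [[t * A[i][j] for j in range(len(A))] for i in range(len(A))]

def jordan(A, B):                       # x∘y = (xy + yx)/2
    return scale(0.5, add(mul(A, B), mul(B, A)))

def U(x, y):                            # U_x y = 2 x∘(x∘y) − x²∘y
    return add(scale(2, jordan(x, jordan(x, y))),
               scale(-1, jordan(jordan(x, x), y)))

x = [[2.0, 1.0], [1.0, 3.0]]
y = [[0.0, 1.0], [1.0, 5.0]]
assert U(x, y) == mul(mul(x, y), x)     # U_x y = x y x in a special algebra

one = [[1.0, 0.0], [0.0, 1.0]]
c = [[2.0, 1.0], [1.0, 2.0]]            # positive definite, so c lies in C
p = U(c, one)                           # U_c(1) = c², again in the cone
assert p == mul(c, c)
```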
5. LINKS WITH THE REAL WORLD
A particular example may make this clearer.
Example (Hermitian Complex Matrices). Let J = H_n(C) be the formally real Jordan algebra of dimension n² over the reals consisting of all Z ∈ M_n(C) with Z* = Z. Then the positive cone Cone(J) consists precisely of the positive definite matrices (the hermitian matrices whose Jordan form has only positive real eigenvalues). The structure group is generated by the two involutory transformations Z ↦ −Z and Z ↦ Z̄ = Z^tr together with the connected subgroup G = Str(J)₀ of all Z ↦ AZA* for A ∈ GL_n(C). The connected component K = Aut(J)₀ of the automorphism group consists of all Z ↦ UZU* = UZU⁻¹ for unitary U ∈ U_n(C). The Riemannian metric flows from σ(X, Y) := tr(V_{X,Y}) = 2n tr(XY) = 2n Σ_{j,k=1}^n x_{jk} ȳ_{jk} for X = (x_{jk}), Y = (y_{jk}), corresponding to a multiple of the Hilbert–Schmidt norm ‖X‖ = (Σ_{j,k=1}^n |x_{jk}|²)^{1/2}. The Riemann metric ⟨X, Y⟩_P at a point P is given by the matrix trace 2n tr(P⁻¹XP⁻¹Y).
5.4. To Infinity and Beyond: JB-algebras. In 1978 Alfsen, Shultz, and Størmer obtained a Gelfand–Naimark theorem for "Jordan C*-algebras," in which they showed that the only exceptional factors are Albert algebras. In retrospect this was a harbinger of things to come, but at the time it did not diminish hope among algebraists for infinite-dimensional exceptional algebras. These Jordan C*-algebras are a beautiful generalization of formally real Jordan algebras to the infinite-dimensional setting, which combines differential geometry with functional analysis. The infinite-dimensional algebra is kept within limits by imposing a norm topology. A norm on a real or complex vector space V is a real-valued function ‖·‖: V → R which is homogeneous [‖λx‖ = |λ| ‖x‖ for all scalars λ], positive definite [x ≠ 0 ⟹ ‖x‖ > 0], and satisfies the triangle inequality [‖x + y‖ ≤ ‖x‖ + ‖y‖]. A Euclidean or hermitian norm is one that comes from a Euclidean or hermitian inner product ⟨·,·⟩ on V via ‖x‖ = √⟨x, x⟩. Any norm induces a metric topology with neighborhoods of x the sets x + B_r, for B_r the open r-ball {y ∈ V | ‖y‖ < r} (so x_n → x iff lim_{n→∞} ‖x_n − x‖ = 0). A Banach space is a normed linear space which is complete in the norm topology. In finite dimensions every Banach space can be re-normed to become a Hilbert space (with norm given by an inner product), but in infinite dimensions this is far from true. A Jordan–Banach algebra or JB-algebra is a real Jordan algebra J which is at the same time a Banach space, with the two structures
related by
(JB1) ‖x∘y‖ ≤ ‖x‖ ‖y‖ (Banach algebra condition),
(JB2) ‖x²‖ = ‖x‖² (C*-condition),
(JB3) ‖x²‖ ≤ ‖x² + y²‖ (positivity condition).
An associative algebra is called a Banach algebra if it has a Banach norm satisfying ‖xy‖ ≤ ‖x‖ ‖y‖; it is called a real C*-algebra if in addition it has an isometric involution * satisfying ‖xx*‖ = ‖x‖² and all 1 + xx* are invertible. Thus JB-algebras are a natural Jordan analogue of the associative C*-algebras. Such a JB-norm on a Jordan algebra is unique if it exists, and automatically ‖1‖ = 1 if J has a unit element. With some effort it can be shown that any JB-algebra J can be imbedded in a unital one Ĵ := R1 ⊕ J using the spectral norm on J, and we will henceforth assume all our JB-algebras are unital. Notice that the conditions (JB2), (JB3) have as immediate consequence the usual formal-reality condition x² + y² = 0 ⟹ x = y = 0. Conversely, all finite-dimensional formally real Jordan algebras (in particular the real Albert algebra A = H_3(K)) carry a unique norm making them JB-algebras, so in finite dimensions JB is the same as formally real. This is false in infinite dimensions: a formally real Jordan algebra can satisfy (JB1) and (JB2) but not the crucial (JB3). The classical "concrete" JB-algebra is the algebra H(B(H), *) consisting of all bounded self-adjoint operators on a Hilbert space H with operator norm and the usual Hilbert-space adjoint. The analogue of special algebras in the JB category are the "concrete" JC-algebras, those isomorphic to norm-closed subalgebras of some H(B(H), *). The Albert algebra A is JB but not JC since it is not even special. The celebrated Gelfand–Naimark theorem for JB-algebras of Erik Alfsen, Frederic Shultz, and Erling Størmer asserts that every JB-algebra can be isometrically isomorphically imbedded in some H(B(H), *) ⊕ C(X, A), a direct sum of the JC-algebra of all hermitian operators on some real Hilbert space and the purely exceptional algebra of all continuous Albert-algebra-valued functions on some compact topological space X.
This has the prophetic consequence that as soon as a JB-algebra satisfies Glennie's identity G_8 it is immediately a special JC-algebra. We have a continuous functional calculus for JB-algebras: for any element x, the smallest norm-closed unital subalgebra containing x is isometrically isomorphic to the commutative associative C*-algebra
C(Sp(x)) of all continuous real-valued functions on the compact spectrum Sp(x) := {λ ∈ R | λ1 − x is not invertible in J}, under an isomorphism sending x to the identity function id(t) = t. Note that x is invertible iff 0 ∉ Sp(x). An element is positive if it has positive spectrum Sp(x) ⊆ R⁺; once more this is equivalent to being an invertible square, or to being an exponential. The positive cone Cone(J) consists of all positive elements; it is a regular open convex cone, and again the existence of square roots shows that the group G generated by the invertible operators U_c (c ∈ C) acts transitively on C, so we have a homogeneous cone. Again each point p ∈ C is an isolated fixed point of a symmetry s_p(x) = x^[−1,p] = U_p x⁻¹. Since in infinite dimensions the tangent spaces T_p(C) are merely Banach spaces, not Hilbert spaces, there is in general no G-invariant Riemannian metric to provide the usual concepts of differential geometry (geodesics, curvature, etc.), and these must be defined directly from the symmetric structure. For example, a geodesic is a connected 1-dimensional submanifold M of C which is symmetric (or totally geodesic) in the sense that it is invariant under the local symmetries, s_p(M) = M for all p ∈ M, and any two distinct points in C can be joined by a unique geodesic. The category of JB-algebras is equivalent under complexification to the category of JB*-algebras. A JB*-algebra is a complex Jordan algebra with a * (a conjugate-linear algebra involution) having at the same time the structure of a complex Banach space, where the algebraic structure satisfies the norm conditions
(JB*1) ‖x∘y‖ ≤ ‖x‖ ‖y‖ (Banach algebra condition),
(JB*2) ‖x*‖ = ‖x‖ (isometry condition),
(JB*3) ‖{x, x*, x}‖ = ‖x‖³ (C*-condition).
A complex associative Banach algebra is called a C*-algebra if it has an antilinear algebra involution * (necessarily isometric) satisfying (xy)* = y*x* and ‖x*x‖ = ‖x‖². Thus JB*-algebras are a natural Jordan analogue of the complex C*-algebras. Every JB*-algebra J_0 produces a JB-algebra H(J_0, *) consisting of all self-adjoint elements, and conversely for every JB-algebra J the natural complexification J_C = J ⊕ iJ with involution (x + iy)* = x − iy can (with difficulty) be shown to carry a norm which makes it a JB*-algebra. We have a similar Gelfand–Naimark theorem for JB*-algebras: every JB*-algebra can be isometrically isomorphically imbedded in some B(H) ⊕ C(X, A_C), a direct sum of the C*-algebra of bounded operators on some complex Hilbert space H and a purely
exceptional algebra of all continuous functions on some compact topological space X with values in the complexified Albert algebra. These complex spaces provide a setting for an important bounded symmetric domain associated to the cone.
6. Links with the Complex World
6.1. Bounded Symmetric Domains. The complex analogue of
a Riemannian manifold is a hermitian manifold, a complex analytic manifold M carrying a hermitian metric, a smoothly-varying positive definite hermitian inner product ⟨·,·⟩_p on the tangent space T_p(M) to the manifold at each point p. An isomorphism of hermitian manifolds is a biholomorphic map of analytic manifolds whose differential is isometric on each tangent space. A hermitian symmetric space is a (connected) hermitian manifold having at each point p a symmetry s_p [an involutive global isomorphism of the manifold having p as isolated fixed point]. We henceforth assume all our hermitian manifolds are connected. These are abstract manifolds, but every hermitian symmetric space of "noncompact type" [having negative holomorphic sectional curvature] is a bounded symmetric domain, a down-to-earth bounded domain in Cⁿ each point of which is an isolated fixed point of an involutive biholomorphic map of the domain. Initially there is no metric on such a domain, but there is a natural way to introduce one (for instance, the Bergmann metric derived from the Bergmann kernel on a corresponding Hilbert space of holomorphic functions). In turn, every bounded symmetric domain is biholomorphically equivalent via its Harish-Chandra realization to a bounded homogeneous circled domain, a bounded domain containing the origin which is circled in the sense that the circling maps x ↦ e^{it}x are automorphisms of the domain for all real t, and homogeneous in the sense that the group of all biholomorphic automorphisms acts transitively on the domain. These domains are automatically convex, and arise as the open unit ball with respect to a certain norm on Cⁿ. The classical example of an unbounded realization of a bounded symmetric domain is the upper half plane M in C, consisting of all x + iy for real x, y ∈ R with y > 0. This is the home of the theory of automorphic forms and functions, a subject central to many areas of mathematics.
The upper half plane can be mapped by the Cayley transform z ↦ (i − z)/(i + z) = (1 + iz)/(1 − iz) onto the open unit disc (consisting of all w ∈ C with |w| < 1, i.e. 1 − w̄w > 0).
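A quick numerical check of this Cayley transform and its inverse (a sketch; the sample points are arbitrary):

```python
import cmath  # noqa: F401  (complex arithmetic; kept for experimentation)

# The Cayley transform z -> (i - z)/(i + z) = (1 + iz)/(1 - iz) carries
# the upper half plane onto the open unit disc, sending i to the center.
def cayley(z):
    return (1j - z) / (1j + z)

def inverse_cayley(w):                  # w -> i(1 - w)(1 + w)^{-1}
    return 1j * (1 - w) / (1 + w)

samples = [0.3 + 0.2j, -1.5 + 4.0j, 2.0 + 0.001j]
for z in samples:
    w = cayley(z)
    assert abs(w) < 1                                     # lands in the disc
    assert abs(w - (1 + 1j * z) / (1 - 1j * z)) < 1e-12   # the two formulas agree
    assert abs(inverse_cayley(w) - z) < 1e-9              # round trip
assert cayley(1j) == 0                                    # i goes to the center
```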
This was generalized by Carl Ludwig Siegel to Siegel's upper half plane M (consisting of all Z = X + iY for symmetric X, Y ∈ M_n(R) with Y positive definite), to provide a home for the Siegel modular forms in the study of functions of several complex variables. Again, this is mapped by the Cayley transform Z ↦ (i1 − Z)(i1 + Z)⁻¹ = (1 + iZ)(1 − iZ)⁻¹ onto the generalized unit disc D consisting of all symmetric W ∈ M_n(C) with 1 − W*W positive definite. Max Koecher began his life as an arithmetic complex analyst, but his studies of modular functions led him inexorably to Jordan algebras. He generalized Siegel's upper half space to the case of an arbitrary formally real Jordan algebra J: Koecher's upper half space M = H(J) consists of all Z = X + iY for X, Y ∈ J with Y in the positive cone Cone(J). These half-planes or tube domains J ⊕ iC are open and convex in the complexification J_C = J ⊕ iJ. The geometry of the unbounded H(J) is nicely described in terms of the Jordan algebra J_C: the biholomorphic automorphisms of H(J) are the linear fractional transformations generated by inversion Z ↦ −Z⁻¹, translations Z ↦ Z + A (A ∈ J), and Z ↦ T̃(Z) = T(X) + iT(Y) for complex-linear extensions T̃ of automorphisms T ∈ Aut(J). These half-spaces are mapped by the Cayley transform Z ↦ (1 + iZ)(1 − iZ)⁻¹ onto the generalized open unit disc D consisting of all Z = X + iY ∈ J_C = J ⊕ iJ with 1 − ½ V_{Z,Z̄} positive definite with respect to the trace form tr(V_{W,Z̄}) on the Jordan algebra J_C (where Z̄ := X − iY). This D can also be characterized as the unit ball of J_C under a certain spectral norm.
6.2. Positive Hermitian Triple Systems. Ottmar Loos showed there is a natural 1–1 correspondence between bounded homogeneous circled domains D in Cⁿ and the finite-dimensional positive hermitian Jordan triples. A hermitian Jordan triple is a complex vector space J with triple product {x, y, z} = L_{x,y}(z) which is symmetric and C-linear in the outer variables x, z and conjugate-linear in the middle variable y, satisfying the 5-linear axiom {x, y, {z, w, v}} = {{x, y, z}, w, v} − {z, {y, x, w}, v} + {z, w, {x, y, v}} for a Jordan triple system. A finite-dimensional hermitian Jordan triple is positive if the trace form tr(L_{x,y}) is a positive definite hermitian scalar product. Every nonzero element has a unique spectral decomposition x = Σ_k σ_k e_k for nonzero orthogonal tripotents e_k³ = e_k and distinct positive singular values 0 < σ_1 < … < σ_r ∈ R (called the singular spectrum of x); the spectral norm is the maximal size ‖x‖ = max_i σ_i = σ_r of the singular values. At first sight it is surprising that every element seems to be "positive," but recall that by conjugate linearity in the
middle variable a tripotent can absorb any unitary complex scalar μ to produce an equivalent tripotent e′ = μe, so any complex "eigenvalue" λ = σe^{iθ} = σμ can be replaced by a real singular value σ: λe = σe′. The real singular value is determined only up to ±, but if we use only an odd functional calculus f(x) = Σ_k f(σ_k) e_k for odd functions f on R this ambiguity (−σ)(−e) = σe is resolved: f(−σ)(−e) = −f(σ)(−e) = f(σ)e is independent of the choice of sign. The Bergmann kernel function K(x, y), the reproducing function for the Hilbert space of all holomorphic L²-functions on the domain, is intimately related to the Bergmann operator B_{x,y} = 1_J − L_{x,y} + P_x P_y of the triple by the formula K(x, y) = κ/det(B_{x,y}) for a fixed constant κ [this is the reason for the name and the letter B, although Stefan Bergmann himself had never heard of a Jordan triple system!]. We have a complete algebraic description of all these positive triples.
Theorem (Classification Theorem). Every finite-dimensional positive hermitian triple system is a finite direct sum of simple triples, and there are exactly 6 classes of simple triples (together with a positive involution): 4 great classes of special triples, (1) rectangular matrices M_{pq}(C), (2) skew matrices Skew_n(C), (3) symmetric matrices Symm_n(C), (4) spin factors JSpin_n(C), and 2 sporadic exceptional systems, (5) the bi-Cayley triple 2 K_C of dimension 16, (6) the Albert triple A_C = H_3(K_C) of dimension 27, determined by the split octonion algebra K_C over the complexes.
The geometric properties of bounded symmetric domains are beautifully described by the algebraic properties of these triple systems.
Theorem (Jordan Unit Balls).
Every positive hermitian Jordan triple system J gives rise to a bounded homogeneous convex circled domain D(J) which is the open unit ball of J as a Banach space under the spectral norm, or equivalently
D(J) = {x ∈ J | 1_J − ½ L_{x,x} > 0}.
Every bounded symmetric domain D arises in this way: its bounded realization is D(J), where the algebraic triple product can be recovered from the Bergmann metric ⟨·,·⟩ and Bergmann kernel K of the domain as the logarithmic derivative of the kernel at 0:
⟨{u, v, w}, z⟩_0 = d⁴_0 log K(x, x)(u, v, w, z) = ∂_u ∂_v ∂_w ∂_z log K(x, x)|_{x=0}.
The domain D = D(J) of the triple J becomes a hermitian symmetric space under the Bergmann metric ⟨x | y⟩_p = tr(L_{B_{p,p}⁻¹ x, y}). The automorphisms of the domain D fixing 0 are linear and are precisely the algebraic automorphisms of the triple J. At the origin the exponential map exp_0: T_0(D) = J → D is a real analytic diffeomorphism given by the odd function exp_0(v) = tanh(v), and the curvature tensor is given by R(u, v)_0 = L_{v,u} − L_{u,v}. The Shilov boundary of D is the set of maximal tripotents of J, and coincides with the set of all extremal points of the convex set D̄; it can be described algebraically as the set of z ∈ J with B_{z,z} = 0. The holomorphic boundary components of D are precisely the faces of the convex set D̄, which are just the sets e + (D ∩ ker(B_{e,e})) for all tripotents e of J. As an example, in rectangular matrices M_{pq}(C) for p ≤ q every X can be written as X = UDV for a unitary p×p matrix U, a unitary q×q matrix V, and a diagonal real p×q matrix D with scalars d_1 ≥ … ≥ d_p ≥ 0 down the "main diagonal." These d_i are precisely the singular values of X, the non-negative square roots of the eigenvalues of XX* = U DD^tr U⁻¹; the spectral norm of X is ‖X‖ = d_1. The condition 1_J − ½ L_{X,X} > 0 is equivalent to 1 > d_1, i.e. the unit ball condition. The triple trace is tr(L_{X,Y}) = (p + q) tr(XY*), so the hermitian inner product is ⟨X, Y⟩_P = (p + q) tr((1_{p×p} − PP*)⁻¹ X (1_{q×q} − P*P)⁻¹ Y*).
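The singular values and spectral norm of a small matrix can be computed directly from the eigenvalues of XX*; the following Python sketch (helper names ours) does this for 2×2 complex matrices using the quadratic formula on the hermitian matrix XX*.

```python
import math

# A hedged sketch: singular values of a 2x2 complex matrix X as the
# nonnegative square roots of the eigenvalues of the hermitian matrix X X*.
def singular_values(X):
    Xs = [[X[j][i].conjugate() for j in range(2)] for i in range(2)]   # X*
    H = [[sum(X[i][k] * Xs[k][j] for k in range(2)) for j in range(2)]
         for i in range(2)]                                            # X X*
    tr = (H[0][0] + H[1][1]).real
    det = (H[0][0] * H[1][1] - H[0][1] * H[1][0]).real
    disc = math.sqrt(max(tr * tr - 4 * det, 0.0))
    eigs = ((tr + disc) / 2, (tr - disc) / 2)
    return sorted((math.sqrt(max(e, 0.0)) for e in eigs), reverse=True)

X = [[3.0 + 0j, 0j], [0j, 0.5 + 0j]]     # diagonal example: d_1 = 3, d_2 = 0.5
d = singular_values(X)
spectral_norm = d[0]                     # ||X|| = d_1
in_unit_ball = spectral_norm < 1         # the condition 1 > d_1 fails here
```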
7. Links with the Infinitely Complex World
There are many instances in mathematics where we gain a better understanding of the finite-dimensional situation by stepping back to view the subject from the infinite-dimensional perspective. In Jordan structure theory, Zel'manov's general classification of simple algebras reveals the sharp distinction between algebras of Hermitian, Clifford, and Albert type, whereas the original Jordan–von Neumann–Wigner classification included H_3(K) as an outlying member of the hermitian class, and for a long time the viewpoint of idempotents caused the division algebras (algebras of capacity 1) to be considered in a class by themselves. So too in differential geometry the infinite perspective has revealed more clearly what is essential and what is accidental in the finite-dimensional situation.
7.1. From Bounded Symmetric Domains to Unit Balls. In
infinite dimensions not all complex Banach spaces can be renormed as
Hilbert spaces, so we cannot hope to put a hermitian norm (coming from an inner product) on tangent spaces. However, it turns out there is a natural way to introduce a norm. A bounded symmetric domain D in a complex Banach space V is a domain [a connected open subset] which is bounded in norm [‖D‖ ≤ r for some r] and is symmetric in the sense that at each point p ∈ D there is a symmetry s_p [a biholomorphic map D → D of period 2 having p as isolated fixed point]. Again the existence of symmetries at all points is a strong condition: the symmetries s_p are unique, and the group G = Aut(D) of biholomorphic automorphisms of D is a real Banach–Lie group [locally coordinatized by patches from a fixed real Banach space instead of Rⁿ] acting analytically and transitively on D. Note that any biholomorphic g ∈ G conjugates a symmetry s_p at a point p to a symmetry g ∘ s_p ∘ g⁻¹ at the point q = g(p). Thus as in 5.1 we can rephrase the condition for symmetry for a domain as (i) there is a symmetry s_{p_0} at some particular basepoint, (ii) Aut(D) acts transitively on D. Again in infinite dimensions there is no G-invariant Bergmann metric to provide the usual concepts of differential geometry. Instead of a hermitian metric there is a canonical G-invariant Banach norm on each tangent space T_p(D), the Carathéodory tangent norm ‖v‖_p := sup_{f ∈ F_p} |df_p(v)| taken over the set F_p of all holomorphic functions of D into the open unit disc which vanish at p. In finite dimensions the existence of a hermitian inner product on Cⁿ seduces us to form a Hilbert norm, even though in many ways the Carathéodory norm is more natural and intrinsic (for example, for hermitian operators the Carathéodory norm is the operator norm, whereas the Hilbert–Schmidt norm ‖X‖ = (Σ_{j,k=1}^n |x_{jk}|²)^{1/2} is basis-dependent). By G-invariance, for any basepoint p_0 of D all tangent spaces can be identified with T_{p_0}(D), which is equivalent to V.
The Harish-Chandra realization of a bounded symmetric domain in Cⁿ is replaced by a canonical bounded realization as a unit ball: a Riemann-mapping-type theorem asserts that every bounded symmetric domain with basepoint p_0 is biholomorphically equivalent (uniquely up to a linear isometry) to the full open unit ball with basepoint 0 of a complex Banach space V ≅ T_{p_0}(D). Here the norm on V grows naturally out of the geometry of D.
7.2. Jordan Unit Balls. Note that a unit ball always has the symmetry x ↦ −x at the basepoint 0, so the above conditions (i), (ii) for D to be symmetric reduce to just transitivity. Thus the Banach unit balls which form symmetric domains are precisely those whose biholomorphic automorphism group Aut(D) acts transitively. The map
(y, z) ↦ s_y(z) gives a "multiplication" on D, whose linearization gives rise to a triple product {u, v, w} := ½ ∂_u^z ∂_v^y ∂_w^z s_y(z)|_{(0,0)} uniquely determined by the symmetric structure. This gives a hermitian Jordan triple product, a product which is complex linear in u, w and complex anti-linear in v, satisfying the 5-linear identity {x, y, {u, v, w}} = {{x, y, u}, v, w} − {u, {y, x, v}, w} + {u, v, {x, y, w}}. Moreover, it is a positive hermitian Jordan triple system. In infinite dimensions there is seldom a trace, so we can no longer use as definition of positivity for a hermitian Jordan triple Loos's condition that the trace form tr(L_{x,y}) be positive definite. Instead, we delete the trace and require that the operators L_{x,x} themselves be positive definite in the sense that
(JB*T1) exp(itL_{x,x}) is an isometry for all t (L_{x,x} is hermitian),
(JB*T2) Sp(L_{x,x}) ≥ 0 (nonnegative spectrum),
(JB*T3) ‖L_{x,x}‖ = ‖x‖² (C*-condition for L).
[The usual notion T* = T of hermitian operators on Hilbert space does not apply to operators on a Banach space; the correct general definition of hermitian operator is a bounded complex-linear transformation T on V such that the invertible linear transformation exp(itT) is an isometry of V for all real t (just as a complex number z is real iff all e^{itz} are points on the unit circle, |e^{itz}| = 1).] Because of the norm conditions, these are also called JB*-triples in analogy with JB*-algebras. Nontrivial consequences of these axioms are
(JB*T4) ‖{x, y, z}‖ ≤ ‖x‖ ‖y‖ ‖z‖ (Banach triple condition),
(JB*T5) ‖{z, z, z}‖ = ‖z‖³ (C*-triple condition).
Any associative C*-algebra produces a JB*-triple via P_x(y) := xy*x, {x, y, z} := ½(xy*z + zy*x). Thus a JB*-triple is a Jordan triple analogue of a C*-algebra. It can be shown that every JB*-algebra J carries a JB*-triple structure T(J, *) via P_x(y) := U_x(y*).
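The C*-algebra triple product and the 5-linear identity can be spot-checked numerically for small matrices; a Python sketch, with the ½ normalization as above (the matrices chosen are arbitrary samples):

```python
# A hedged sketch: the hermitian triple product {x, y, z} = (x y* z + z y* x)/2
# on 2x2 complex matrices (an associative C*-algebra viewed as a JB*-triple).
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def star(A):                                 # conjugate transpose
    return [[A[j][i].conjugate() for j in range(2)] for i in range(2)]

def triple(x, y, z):                         # {x, y, z} = (x y* z + z y* x)/2
    a, b = mul(mul(x, star(y)), z), mul(mul(z, star(y)), x)
    return [[(a[i][j] + b[i][j]) / 2 for j in range(2)] for i in range(2)]

z = [[1 + 2j, 0j], [3j, 1 + 0j]]
assert triple(z, z, z) == mul(mul(z, star(z)), z)    # {z,z,z} = z z* z

# spot-check the 5-linear identity on sample matrices
x = [[1j, 1 + 0j], [0j, 2 + 0j]]
y = [[2 + 0j, 1j], [1 + 0j, 0j]]
u, v, w = z, x, y
lhs = triple(x, y, triple(u, v, w))
t1 = triple(triple(x, y, u), v, w)
t2 = triple(u, triple(y, x, v), w)
t3 = triple(u, v, triple(x, y, w))
diff = max(abs(lhs[i][j] - (t1[i][j] - t2[i][j] + t3[i][j]))
           for i in range(2) for j in range(2))
assert diff < 1e-9
```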
[Certainly this modification is a hermitian Jordan triple product: it continues to satisfy the Jordan triple axiom since * is a Jordan isomorphism, and it is conjugate-linear in the middle variable due to the involution *. The triple product on the JB-algebra H(J, *) is R-linear and that on its complexification J = H_C is C-linear. The algebra condition (JB*3) immediately implies (JB*T5), but the other JB*T-conditions require more work.] The celebrated Gelfand–Naimark theorem for JB*-triples of Yakov Friedman and Bernard Russo asserts that every such triple imbeds isometrically and isomorphically in a triple T(J) obtained
by "tripling" a JB*-algebra J = B(H) ⊕ C(X, A_C). (Notice that the triple of rectangular p×q matrices can be imbedded in the algebra of (p+q)×(p+q) matrices, and the exceptional 16-dimensional bi-Cayley triple imbeds in the tripled Albert algebra.) The unit ball D(J) of any JB*-triple automatically has a transitive biholomorphic automorphism group. Thus the open unit ball of a Banach space V is a bounded symmetric domain iff V carries naturally the structure of a JB*-triple.
Theorem (Jordan Functor). There is a category equivalence between the category of all bounded symmetric domains with base point and the category of all JB*-triples, given by the functors (D, p_0) → J = (T_{p_0}(D), {·, ·, ·}) and J → (D(J), 0).
We have seen that the unit balls of JB*-algebras J are bounded symmetric domains which have an unbounded realization as tube domains H(J, *) + iC via the inverse Cayley transform w ↦ i(1 − w)(1 + w)⁻¹. The geometric class of tube domains can be singled out algebraically from the class of all bounded symmetric domains: they are precisely the domains that come from JB*-triples which have invertible elements. These examples, from the real and complex world, suggest a general Principle: Geometric structure is often encoded in Jordan algebraic structure.
8. Links with Projective Geometry
Another example of the serendipitous appearance of Jordan algebras is in the study of projective planes. In 1933 Ruth Moufang used an octonion division algebra to construct a projective plane which satisfied the Harmonic Point Theorem and Little Desargues' Theorem, but not Desargues' Theorem. However, this description did not allow one to describe the automorphisms of the plane, since the usual associative approach via invertible 3×3 matrices breaks down for matrices with nonassociative octonion entries. In 1949 P. Jordan found a way to construct the real octonion plane inside the formally real Albert algebra A = H_3(K) of hermitian 3×3 Cayley matrices, using the set of primitive idempotents to coordinatize both the points and the lines. This was rediscovered in 1951 by Hans Freudenthal. In 1959 this was greatly extended by T.A. Springer to reduced Albert algebras J = H_3(O) for octonion division algebras O over arbitrary fields of characteristic ≠ 2, 3, obtaining a "Fundamental Theorem of Octonion Planes" describing the automorphism group of the plane in terms of the "semi-linear structure
group" of the Jordan ring. In this non-formally-real case the coordinates were the "rank 1" elements (primitive idempotents or nilpotents of index 2). Finally, in 1970 John Faulkner extended the construction to algebras over fields of any characteristic, using the newly-hatched quadratic Jordan algebras. Here the points and lines are inner ideals, with inclusion as incidence; structural maps naturally preserve this relation, and so induce geometric isomorphisms. The Fundamental Theorem of Projective Geometry for octonion planes says that all isomorphisms are obtained in this way. Thus octonion planes find a natural home for their isomorphisms in Albert algebras.
8.1. Projective Planes. Recall that an abstract plane Π = (P, L, I)
consists of a set of points P, a set of lines L, and an incidence relation I ⊆ P × L. If P I L we say P lies on L and L lies on or goes through P. A collection of points is collinear if they are all incident to a common line, and a collection of lines is concurrent if they all lie on a common point. A plane is projective if it satisfies the three axioms: (I) every two distinct points P_1, P_2 are incident to a unique line (denoted P_1 ∨ P_2); (II) every two distinct lines L_1, L_2 are incident to a unique point (denoted L_1 ∧ L_2); (III) there exists a 4-point (four points, no three of which are collinear). We get a category of projective planes by taking as morphisms the isomorphisms σ = (σ_P, σ_L): Π → Π′ consisting of bijections σ_P: P → P′ of points and σ_L: L → L′ of lines which preserve incidence: P I L ⟺ σ_P(P) I′ σ_L(L). (In fact, either of σ_P or σ_L completely determines the other. Automorphisms of a plane are called collineations by geometers, since the σ_P are precisely the maps of points which preserve collinearity: all lines have the form L = P_1 ∨ P_2, and P I P_1 ∨ P_2 ⟺ {P, P_1, P_2} are collinear.) The most important example of a projective plane is the vector space plane Proj(V) determined by a 3-dimensional vector space V over an associative division ring Δ, where the points are the 1-dimensional subspaces P, the lines are the 2-dimensional subspaces L, and incidence is inclusion P I L ⟺ P ⊆ L. Here P_1 ∨ P_2 = P_1 + P_2 and L_1 ∧ L_2 = L_1 ∩ L_2 (which has dimension dim(L_1) + dim(L_2) − dim(L_1 + L_2) = 2 + 2 − 3 = 1). If v_1, v_2, v_3 form a basis for V, then these together with v_4 = v_1 + v_2 + v_3 form a 4-point (no three are collinear, since any three span the whole 3-dimensional space). These planes can also be realized by a construction Proj(Δ) directly in terms of the underlying division ring.
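The smallest instance is Proj(V) for V = (F_2)³, the Fano plane; a brute-force Python sketch verifying axioms (I) and (II) (over F_2 every nonzero vector is its own projective representative, so no quotient by scalars is needed):

```python
from itertools import combinations, product

# Proj(V) for V = (F_2)^3: the 7 nonzero vectors are the points, the 7
# nonzero dual vectors are the lines, and incidence is <v, w> = 0 (mod 2).
points = [v for v in product((0, 1), repeat=3) if any(v)]
lines = list(points)

def incident(v, w):
    return sum(a * b for a, b in zip(v, w)) % 2 == 0

# Axiom (I): every two distinct points are incident to a unique line
for p1, p2 in combinations(points, 2):
    assert sum(incident(p1, L) and incident(p2, L) for L in lines) == 1

# Axiom (II): every two distinct lines are incident to a unique point
for L1, L2 in combinations(lines, 2):
    assert sum(incident(p, L1) and incident(p, L2) for p in points) == 1

# this is the Fano plane: 7 points, 7 lines, 3 points on each line
assert all(sum(incident(p, L) for p in points) == 3 for L in lines)
```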
Every 3-dimensional left vector space V is isomorphic to Δ³, with dual space V* isomorphic to the right vector space Δ³ under the nondegenerate bilinear pairing ⟨v, w⟩ = ⟨(α_1, α_2, α_3), (β_1, β_2, β_3)⟩ = α_1β_1 + α_2β_2 + α_3β_3.
The points P = Δv = [v] ⊆ Δ³ are coordinatized (up to a left multiple) by a nonzero vector v; the lines L are in 1–1 correspondence with their 1-dimensional orthogonal complements L^⊥ ⊆ V* corresponding to a "dual point" [w] = wΔ ⊆ Δ³; and incidence P ⊆ L = (L^⊥)^⊥ reduces to orthogonality P ⊥ L^⊥, i.e. [v] I [w] ⟺ ⟨v, w⟩ = 0 (which is independent of the representing vectors for the 1-dimensional spaces). We choose particular representatives v, w of the point [~x] and dual point [~y] for nonzero vectors ~x = (x_1, x_2, x_3), ~y = (y_1, y_2, y_3) as follows: if x_3 ≠ 0 choose v = x_3⁻¹ ~x = (x, y, 1); if x_3 = 0 ≠ x_1 choose v = x_1⁻¹ ~x = (1, n, 0); if x_3 = x_1 = 0 choose v = x_2⁻¹ ~x = (0, 1, 0). Dually, if y_2 ≠ 0 choose w = ~y y_2⁻¹ = (m, −1, b); if y_2 = 0 ≠ y_1 choose w = ~y y_1⁻¹ = (−1, 0, a); if y_2 = y_1 = 0 choose w = ~y y_3⁻¹ = (0, 0, 1). Then we obtain the following table of incidences ⟨v, w⟩ = 0:

Incidence Table [v] I [w] for Proj(Δ)
[v] \ [w]      [(m, −1, b)]    [(−1, 0, a)]    [(0, 0, 1)]
[(x, y, 1)]    y = xm + b      x = a           never
[(1, n, 0)]    n = m           never           always
[(0, 1, 0)]    never           always          always
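The incidence table can be verified by brute force over a finite field standing in for Δ (a sketch; with a commutative Δ such as F_5 the left/right distinction in the scaling disappears):

```python
# Checking the incidence table for Proj(F_5): representatives (x, y, 1),
# (1, n, 0), (0, 1, 0) for points and (m, -1, b), (-1, 0, a), (0, 0, 1)
# for dual points, with [v] I [w] iff <v, w> = 0 (mod 5).
q = 5

def incident(v, w):
    return sum(a * b for a, b in zip(v, w)) % q == 0

for x in range(q):
    for y in range(q):
        for m in range(q):
            for b in range(q):
                # row 1: (x, y, 1) lies on [(m, -1, b)] iff y = xm + b
                assert incident((x, y, 1), (m, -1, b)) == ((y - x * m - b) % q == 0)
        for a in range(q):
            # (x, y, 1) lies on [(-1, 0, a)] iff x = a
            assert incident((x, y, 1), (-1, 0, a)) == (x == a)
        assert not incident((x, y, 1), (0, 0, 1))        # never on (0, 0, 1)

for n in range(q):
    for m in range(q):
        # row 2: (1, n, 0) lies on [(m, -1, b)] iff n = m, for every b
        assert all(incident((1, n, 0), (m, -1, b)) == (n == m) for b in range(q))
    assert not incident((1, n, 0), (-1, 0, 2))           # never on a vertical
    assert incident((1, n, 0), (0, 0, 1))                # always on (0, 0, 1)

# row 3: (0, 1, 0) is never on [(m, -1, b)], always on the other two
assert not incident((0, 1, 0), (2, -1, 3))
assert incident((0, 1, 0), (-1, 0, 4)) and incident((0, 1, 0), (0, 0, 1))
```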
8.2. Affine Planes. In affine planes, two distinct lines no longer
need to intersect; they may be parallel (non-intersecting). A plane is called affine if it satisfies the four axioms: (I) every two distinct points are incident to a unique line; (II) every two non-parallel lines are incident to a unique point; (II′) for every point P and line L there exists a unique line through P parallel to L (denoted P‖L); (III) there exists a 4-point. We again get a category of affine planes by taking as morphisms the isomorphisms. Parallelism turns out to be an equivalence relation on lines, and we can speak of the parallel class ‖(L) of a given line L. Just as in the real affine plane familiar from calculus, every 2-dimensional vector space V gives rise to an affine plane A(V) with points the vectors of V and lines the 1-dimensional affine subspaces (translates A = v + W of 1-dimensional linear subspaces W), with incidence being membership v I A ⟺ v ∈ A. Two such lines A, A′ are parallel iff W = W′, and coincide iff additionally v − v′ ∈ W = W′. Thus the parallel classes correspond to the 1-dimensional subspaces W. More concretely, we may represent V as Δ² and A(Δ) with affine points being (x, y) for x, y ∈ Δ, affine lines being either vertical lines [a] = {(x, y) | x = a} or non-vertical lines [m, b] = {(x, y) | y = xm + b} of "slope" m and "y-intercept" b, and incidence again being
8. LINKS WITH PROJECTIVE GEOMETRY
membership. The parallel classes here are the vertical class (all [a]) together with the slope-classes (all [m, b] for a fixed m). 8.3. Back and Forth. We can construct affine planes from projective planes and vice-versa; indeed the category of affine planes is equivalent to the category of projective planes with choice of distinguished line at infinity. Given a pair (π, L) consisting of a projective plane π and a distinguished line L, we construct the affine restriction A(π, L) by removing the line L and all points on it, and taking the incidence relation induced by restriction. Notice that two affine lines are parallel iff their intersection (which always exists in π) does not belong to the affine part of π, i.e. iff the lines intersect on the "line at infinity" L. Conversely, from any affine plane πₐ we can construct the projective completion π = Proj(πₐ) by adjoining one ideal line L∞ and one ideal point for each parallel class ‖(L), with incidence extended by declaring that all ideal points lie on the ideal line, and an ideal point ‖(L) lies on an affine line M iff M ‖ L. The pair (π, L∞) thus constructed is a projective plane with distinguished line. These two constructions are functorial, and provide a category equivalence: πₐ → (π, L∞) → A(π, L∞) = πₐ is the identity functor, while (π, L) → πₐ → (Proj(πₐ), L∞) is naturally isomorphic to the identity functor on pairs (π, L). Thus we can use projective planes (where all lines are created equal) or affine planes (where coordinatization is natural), whichever is most convenient. [Warning: In general, a projective plane looks different when viewed from different lines L and L′, A(π, L) ≠ A(π, L′), since in general there will not be an automorphism of the whole plane sending L to L′.] For example, it is easy to verify that the projective completion of A(Δ) is isomorphic to Proj(Δ), where points are mapped (x, y) → [(x, y, 1)], (n) → [(1, n, 0)], (∞) → [(0, 1, 0)], and lines are mapped [m, b] → [(m, −1, b)], [a] → [(−1, 0, a)], [∞] →
[(0, 0, 1)], since the incidences (x, y) I [m, b] ⟺ y = xm + b etc. in Proj(A(Δ)) and [v] I [w] in Proj(Δ) coincide by the incidence table for Proj(Δ). 8.4. Coordinates. In the spirit of Descartes' program of analytic geometry, we can introduce "algebraic coordinates" into any projective plane using a coordinate system, an ordered 4-point Λ = {X∞, Y∞, 0, 1}. Here we interpret the plane as the completion of an affine plane by a line at infinity L∞ := X∞ ∨ Y∞, with 0 as origin and 1 as unit point, X := 0 ∨ X∞, Y := 0 ∨ Y∞ the X, Y axes, and U := 0 ∨ 1 the unit line. The coordinate set consists of the affine points x of U, together with a symbol ∞. We introduce coordinates (coordinatize the plane) for the
affine points P, points at infinity P∞, affine lines L, and line at infinity L∞ via

    P → (x, y)         L ∦ Y → [m, b]
    P∞ ≠ Y∞ → (n)      L ‖ Y → [a]
    P∞ = Y∞ → (∞)      L∞ → [∞]

where the coordinates of points are x = X(P) := (P ‖ Y) ∧ U, y = Y(P) := (P ‖ X) ∧ U, n = Y(1, n) for (1, n) = (P∞ ∨ 0) ∧ (1 ‖ Y), and the coordinates of lines are a = L ∧ U, b = Y(0, b) for (0, b) = L ∧ Y, m = Y(1, m) for (1, m) = (0 ‖ L) ∧ (1 ‖ Y). With this set of points and lines we
obtain a projective plane Proj(π, Λ) isomorphic to π where incidence is given by a table similar to that for Proj(Δ):

Incidence Table P I L for Proj(π, Λ)

    P \ L     [m, b]            [a]       [∞]
    (x, y)    y = T(x, m, b)    x = a     never
    (n)       n = m             never     always
    (∞)       never             always    always

Thus, once a coordinate system has been chosen, the entire incidence structure is encoded algebraically in the projective ternary system Ter(π, Λ), the set of affine coordinates with the ternary product T(x, m, b) and distinguished elements 0, 1 (somewhat optimistically called a ternary ring, in hopes that the product takes the form T(x, m, b) = xm + b for a + b = T(a, 1, b), xm = T(x, m, 0)). In general the ternary system depends on the coordinate system (different systems Λ, Λ′ may produce nonisomorphic ternary systems Ter(π, Λ) ≠ Ter(π, Λ′)), and the ternary product cannot be written as xm + b; only when the plane has sufficient "symmetry" do we have a true coordinate ring with a (commutative, associative) addition and a (nonassociative, noncommutative) bilinear multiplication.
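These constructions are concrete enough to compute with. The sketch below (our code; the field GF(3) and all names are our choices, not the text's) builds the affine plane A(F), checks axiom (I), forms the projective completion by adjoining one ideal point per parallel class plus a line at infinity, and checks that every two distinct lines of the completion meet in exactly one point:

```python
from itertools import combinations

p = 3
F = range(p)

# Affine plane A(F): points (x, y); lines [a] (vertical) and [m, b].
aff_points = [(x, y) for x in F for y in F]
aff_lines = ([frozenset((a, y) for y in F) for a in F] +        # vertical [a]
             [frozenset((x, (x * m + b) % p) for x in F)        # sloped [m, b]
              for m in F for b in F])

# Axiom (I): two distinct points lie on exactly one line.
for P, Q in combinations(aff_points, 2):
    assert sum(P in L and Q in L for L in aff_lines) == 1

def ideal_point(L):
    """The parallel class of a line: the vertical class, or its slope m."""
    xs = {x for (x, y) in L}
    if len(xs) == 1:
        return "inf"
    (x0, y0), (x1, y1) = sorted(L)[:2]
    return ("slope", ((y1 - y0) * pow(x1 - x0, -1, p)) % p)

# Projective completion: each line picks up its ideal point,
# and the ideal points form the line at infinity.
proj_lines = [L | {ideal_point(L)} for L in map(set, aff_lines)]
ideal_pts = {ideal_point(L) for L in map(set, aff_lines)}
proj_lines.append(ideal_pts)

# Axiom (II) with no parallels left: two distinct lines meet in one point.
for L, M in combinations(proj_lines, 2):
    assert len(L & M) == 1
print(len(aff_points) + len(ideal_pts), len(proj_lines))   # 13 13
```

The output 13 points and 13 lines is the projective plane of order 3, the completion Proj(A(GF(3))).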
8.5. Central Automorphisms. Projective planes are often classified according to how many central automorphisms they possess (following the motto "the more the merrier"). An automorphism σ is central if it has a center, a point C which is fixed linewise by the automorphism: σ(C) = C, σ(L) = L for all lines L incident to C. In this case σ automatically has an axis M, a line which is fixed pointwise: σ(M) = M, σ(P) = P for all points P incident to M, and if σ ≠ 1 it fixes no points except the center and those on the axis, and fixes no lines except the axis and those on the center. If the center lies on the axis, σ is called a translation, otherwise it is a dilation. For example, in the
completion of A(Δ) the map τ(P) = t⃗ + P, translation by a fixed vector t⃗, is a translation with axis the line at infinity and center the point at infinity corresponding to the parallel class of all lines parallel to t⃗; the dilation δ(P) = δP by a fixed nonzero scalar δ is a dilation with center 0 (the unique affine fixed point) and axis again the line at infinity. A plane is (C, M)-transitive if the subgroup of (C, M)-automorphisms (those with center C and axis M) acts as transitively as it can, namely transitively on the points Q of any line L through C (except for the fixed points C, L ∧ M): any point Q off the center and axis can be moved to any other such point Q′ by some (C, M)-automorphism σ as long as C, Q, Q′ are collinear, because σ fixes the line L = C ∨ Q and so must send Q to another point Q′ on the same line. Geometrically, a plane is (C, M)-transitive iff Desargues' (C, M) Theorem holds: whenever two triangles △, △′ with vertices P₁P₂P₃, P₁′P₂′P₃′ and sides L₁L₂L₃, L₁′L₂′L₃′ (Lₖ = Pᵢ ∨ Pⱼ for distinct i, j, k) are in central perspective from C [i.e. C, Pᵢ, Pᵢ′ are collinear for i = 1, 2, 3] with two sides in axial perspective from M [i.e. M, Lᵢ, Lᵢ′ are concurrent for i = 1, 2], then the third sides are also in perspective from M [M, L₃, L₃′ are concurrent], so the three pairs of sides meet at three points of M.
If a plane is (C; M ) transitive for two dierent centers C on the axis M , then it is (C; M )-transitive for all centers C on M , and it is called a translation plane with respect to the axis M ; this happens i its ternary product when we coordinatize it using L1 := M is linear, T (x; m; b) = xm+b, in which case we speak of Ter(; ) as a coordinate ring (R; +; ; 0; 1) If a plane is a translation plane with respect to two distinct axes M; M 0 then it is a translation plane with respect to every line on C = M ^ M 0 ; this happens i its coordinate ring using (1) = Y1 := C; L1 = [1] := M is a left Moufang division ring: it is a unital nonassociative ring with all nonzero elements invertible, satisfying the Left Inverse Property x 1 (xy ) = y for all x 6= 0; y , equivalently the left Moufang Law x(yx) z = x y (xz ) [this implies, and in characteristic 6= 2 is equivalent to, the left alternative law x2 z = x(xz )]. We call such a plane a left Moufang plane. If a plane is a translation plane with respect to three non-concurrent axes M; M 0 ; M 00 then it is a translation plane with respect to every line, so all possible translations exist and the plane satis es the Little Desargues' Theorem (Desargues' (C; M ) Theorem for all pairs with C on M ). This happens i its coordinate ring is an alternative division ring: it is a unital nonassociative ring with all nonzero elements invertible, satisfying the Inverse Property x 1 (xy ) = y = (yx)x 1 for all
x ≠ 0 and all y, equivalently the Left and Right Moufang Laws [where Right Moufang is ((zx)y)x = z((xy)x)], equivalently the left and right alternative laws x²z = x(xz), zx² = (zx)x; such a plane is called a Moufang plane. In this case all coordinatizations produce isomorphic coordinate rings, and we can speak of the alternative coordinate ring D of the plane. In particular, every octonion algebra O produces a Moufang projective plane Mouf(O) coordinatized by all (x, y), (n), (∞), [m, b], [a], [∞] for x, y, n, m, b, a ∈ O; such a plane is called an octonion plane. If a plane is (C, M)-transitive for all centers C and all axes M not through C, then it is (C, M)-transitive for all centers C and all axes M whatsoever, and Desargues' Theorem holds for all (C, M). Such a plane is called Desarguian. This happens precisely iff some coordinate ring is an associative division ring Δ, in which case all coordinate rings are isomorphic to Δ, and the plane is isomorphic to Proj(Δ). The coordinate ring is a field iff the plane satisfies Pappus' Theorem (which implies Desargues'). Thus the left Moufang, Moufang, Desarguian, or Pappian planes rich in central automorphisms are just the planes coordinatized by nonassociative division rings which are left Moufang, alternative, associative, or fields. For these algebraic systems we have powerful theorems (some of the first structure theorems proven for algebras of arbitrary dimension). Theorem (Kleinfeld-Skornyakov-San Soucie). Every left Moufang division ring is alternative. Theorem (Bruck-Kleinfeld). Every alternative division ring is either associative or an octonion division algebra. A celebrated theorem of Wedderburn asserts that every finite associative division ring is commutative, hence a finite field GF(pⁿ) for some prime power. This holds even for alternative division rings. Theorem (Artin-Zorn).
Every finite alternative division ring is associative, hence a finite field (equivalently, there are no finite octonion division algebras, because a quadratic form of dimension 8 over a finite field is isotropic). From these algebraic theorems we obtain geometric theorems for which no known geometric proofs exist. Theorem (Moufang Consequences). (1) Every left Moufang plane is Moufang. (2) Every Moufang plane is either a Desarguian Proj(Δ) for an associative division ring Δ, or an octonion plane Mouf(O) for an
octonion division algebra O. (3) Every finite left Moufang or Moufang plane is a Pappian plane Proj(GF(pⁿ)) determined by a finite field. This is a powerful instance of Descartes' program of bringing algebra to bear on geometric questions.
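The alternative laws themselves can be tested by direct computation. Below is a small Cayley-Dickson doubling sketch (our illustrative code, not the author's; the sign convention and all names are our assumptions) that builds octonions over the rationals as nested pairs and checks the left and right alternative laws, along with a failure of full associativity:

```python
from fractions import Fraction

# One Cayley-Dickson doubling step on nested pairs, base case Fraction:
# (a, b)(c, d) = (ac - d*b, da + bc*), where * is conjugation.
def conj(x):
    return (conj(x[0]), neg(x[1])) if isinstance(x, tuple) else x

def neg(x):
    return (neg(x[0]), neg(x[1])) if isinstance(x, tuple) else -x

def add(x, y):
    return (add(x[0], y[0]), add(x[1], y[1])) if isinstance(x, tuple) else x + y

def mul(x, y):
    if not isinstance(x, tuple):
        return x * y
    (a, b), (c, d) = x, y
    return (add(mul(a, c), neg(mul(conj(d), b))),
            add(mul(d, a), mul(b, conj(c))))

def octonion(*coeffs):
    """Pack 8 rational coefficients into the nested-pair representation."""
    v = [Fraction(c) for c in coeffs]
    def build(w):
        return w[0] if len(w) == 1 else (build(w[:len(w)//2]), build(w[len(w)//2:]))
    return build(v)

x = octonion(1, 2, 0, -1, 3, 0, 1, 2)
z = octonion(0, 1, 1, 2, -1, 1, 0, 3)
x2 = mul(x, x)
assert mul(x2, z) == mul(x, mul(x, z))     # left alternative:  x^2 z = x(xz)
assert mul(z, x2) == mul(mul(z, x), x)     # right alternative: z x^2 = (zx)x

# ... but octonions are not associative:
i, j, l = (octonion(*[1 if k == t else 0 for k in range(8)]) for t in (1, 2, 4))
assert mul(mul(i, j), l) != mul(i, mul(j, l))
```

Exact rational arithmetic (no floating point) makes the equalities honest identities on these sample elements; of course a finite check illustrates, but does not prove, the laws.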
8.6. Albert Algebras and Octonion Planes. We have seen
that the affine part of an octonion plane Mouf(O) has a coordinatization in terms of points (x, y) and lines [m, b], [a] with octonion coordinates, behaving much like the usual Euclidean plane. However, the Fundamental Theorem of Projective Geometry (which says that the isomorphisms of Desarguesian planes Proj(V) come from semilinear isomorphisms of V = Δ³, represented by 3×3 matrices in M₃(Δ)) breaks down when the coordinates are nonassociative: the associative composition of isomorphisms cannot be faithfully captured by the nonassociative multiplication of octonion matrices in M₃(O). In order to represent the isomorphisms, we must find a more abstract representation of Mouf(O). A surprising approach through Albert algebras H₃(O) was discovered by Jordan and Freudenthal, then refined by Springer and Faulkner. Let J = H₃(O) be a reduced Albert algebra for an octonion division algebra O over a field Φ. We construct an octonion plane Proj(J) with points P the 1-dimensional inner ideals B [these are precisely the spaces B = Φb determined by rank-one elements b with U_b J = Φb ≠ 0] and lines L the 10-dimensional inner ideals C = {x ∈ J | V_{x,c} = 0}, with inclusion as incidence [B I C ⟺ B ⊆ C]. Every octonion plane arises (up to isomorphism) by this construction: Mouf(O) ≅ Proj(H₃(O)). If J, J′ are two such Albert algebras over fields Φ, Φ′, then we call a map T: J → J′ structural if it is a bijective Z-linear map such that there exists a Z-linear bijection T*: J′ → J with U′_{T(x)} = T U_x T* for all x. Here T is automatically a semilinear transformation (T(αx) = α^σ T(x) for an isomorphism σ: Φ → Φ′ of the underlying centroids), and T* is uniquely determined as T⁻¹ U′_{T(1)}.
The ring-theoretic structural condition is equivalent to Jacobson's original cubic norm condition that T be a semilinear norm similarity, N′(T(x)) = ν N(x)^σ for all x ∈ J [necessarily ν = N′(T(1)), and T* is the adjoint relative to the trace forms, tr′(T(x), y′) = tr(x, T*(y′))]. Any such structural T induces an isomorphism Proj(T): Proj(J) → Proj(J′) of projective planes, since it preserves dimension, innerness, and incidence. [The general reduced Albert algebra has the form of a diagonal isotope of H₃(O), but isotopic Albert algebras always produce isomorphic Moufang planes since U_u: J^(u) → J is always structural. Thus we already obtain all the octonion planes from just the J = H₃(O)'s.]
In Faulkner's original description, points and lines are two copies of the same set: the 10-dimensional inner ideal {x ∈ J | V_{x,c} = 0} is uniquely determined by the 1-dimensional Φc, so we can take as points and lines all [b] and [c] for rank-one elements b, c, where incidence becomes [b] I [c] ⟺ V_{b,c} = 0 [analogous to [v] I [w] ⟺ ⟨v, w⟩ = 0 in Proj(Δ)]. A structural T induces an isomorphism of projective planes via [b] → [T(b)] on points and [c] → [T′(c)] on lines, for T′ = (T*)⁻¹, since it preserves incidence: V′_{T(b),T′(c)} = 0 ⟺ V_{b,c} = 0, because T automatically satisfies V′_{T(x),T′(y)} = T V_{x,y} T⁻¹ [acting on T(z), this is just the linearization x → x, z of the structural condition]. Theorem (Fundamental Theorem of Octonion Planes). The isomorphisms Proj(J) → Proj(J′) of octonion planes are precisely all Proj(T) for structural maps T: J → J′ of Albert algebras. The arguments for this make heavy use of a rich supply of structural transformations on J in the form of Bergmann operators B_{x,y} (known geometrically as algebraic transvections T_{x,y}); while the U_x for invertible x are also structural, the B_{x,y} form a more useful family for getting from one rank-one element to another. The methods are completely Jordanic rather than octonion-ic. Thus the octonion planes as well as their automorphisms find a natural home in the Albert algebras, which have provided shelter for so many exceptional mathematical structures.
9. Conclusion This survey has acquainted you with some of the regions this mathematical river passes through, and a few of the people who have contributed to it. Mathematics is full of such rivers, gaining nourishment from the mathematical landscapes they pass through, enriching in turn those regions and others further downstream.
Part II The Historical Perspective: An Historical Survey of Jordan Structure Theory
In the Survey we discussed the origin of Jordan algebras in Pascual Jordan's attempt to discover a new algebraic setting for quantum mechanics, freed from dependence on an invisible but all-determining metaphysical matrix structure. In studying the intrinsic algebraic properties of hermitian matrices, he was led to a linear space with commutative product satisfying the Jordan identity, and an axiomatic study of these abstract systems in finite dimensions led back to hermitian matrices plus one tiny new system, the Albert algebra. Later the work of Efim Zel'manov showed that even in infinite dimensions this is the only simple Jordan algebra which is not governed by an associative algebra lurking in the background. In this Part II we tell the story of Jordan algebras in more detail, describing chronologically how our knowledge of the structure of Jordan algebras grew from the physically-inspired investigations of Jordan, von Neumann, and Wigner in 1934 to the inspired insights of Zel'manov in 1983. We include no exercises and give no proofs, though we sketch some of the methods used, comparing the classical methods using idempotents to the ring-theoretic methods of Zel'manov. The goal is to get the "big picture" of Jordan structure theory, to understand its current complete formulation while appreciating the older viewpoints. In this Part we aim to educate as well as enlighten, and an educated understanding requires that one at least be able to correctly pronounce the names of the distinguished mathematicians starring in the story. To that end I have included warning footnotes, and at the end of the book a page of pronunciations for all foreign names, to help readers past the most egregious errors (like Henry Higgins leading a mathematical Liza Doolittle, who talks of the Silo Theorem for groups as if it were a branch of American agriculture, to See-lo (as in Shilov boundary), though perhaps never to Sy-lovf (as in Norway)).
This would all be unnecessary if I were giving these lectures orally.
CHAPTER 1
Jordan Algebras in Physical Antiquity The search for an exceptional setting for quantum mechanics
Jordan algebras were conceived and grew to maturity in the landscape of physics. They were born in 1933 in a paper of the physicist Pascual Jordan¹, "Über Verallgemeinerungsmöglichkeiten des Formalismus der Quantenmechanik", which, even without DNA testing, reveals their physical parentage. Just one year later, with the help of John von Neumann and Eugene Wigner in the paper "On an algebraic generalization of the quantum mechanical formalism", they reached adulthood. Students can still benefit from reading this paper, since it was a beacon for papers in nonassociative algebra for the next 50 years (though nowadays we would derive the last 14 pages in a few lines from the theory of composition algebras).
1. The matrix interpretation of quantum mechanics In the usual interpretation of quantum mechanics (the so-called Copenhagen interpretation), the physical observables are represented by Hermitian matrices (or operators on Hilbert space), those which are self-adjoint x* = x. The basic operations on matrices or operators are:

Matrix Operations
    αx       multiplication by a complex scalar
    x + y    addition
    xy       multiplication of matrices (composition of operators)
    x*       complex conjugate transpose matrix (adjoint operator).

This formalism is open to the objection that the operations are not "observable", not intrinsic to the physically meaningful part of the system: the scalar multiple αx is not again hermitian unless the scalar α is real, the product xy is not observable unless x and y commute (or, as the physicists say, x and y are "simultaneously observable"), and the

¹ In English, his name and his algebras are Dgor-dan as in Dgudge or Michael, not Zhor-dahn as in canonical form. As a German (despite Pascual!), his real name was (and his algebras still are) Yor-dahn.
adjoint is invisible (it is the identity map on the observables, though non-trivial on matrices or operators in general).

Observable Operations
    αx       multiplication by a real scalar
    x + y    addition
    xⁿ       powers of matrices
    x*       identity map.

Not only was the matrix interpretation philosophically unsatisfactory because it derived the observable algebraic structure from an unobservable one; there were also practical difficulties when one attempted to apply quantum mechanics to relativistic and nuclear phenomena.
2. The Jordan Program In 1932 Jordan proposed a program to discover a new algebraic setting for quantum mechanics, which would be freed from dependence on an invisible but all-determining metaphysical matrix structure, yet would enjoy all the same algebraic benefits as the highly-successful Copenhagen model. He proposed: (1) to study the intrinsic algebraic properties of hermitian matrices, without reference to the underlying (unobservable) matrix algebra; (2) to capture the algebraic essence of the physical situation in formal algebraic properties which seemed essential and physically significant; (3) to consider abstract systems axiomatized by these formal properties and see what other new (non-matrix) systems satisfied the axioms.
3. The Jordan Operations The first step in analyzing the algebraic properties of hermitian matrices or operators was to decide what the basic observable operations were. There are many possible ways of combining hermitian matrices to get another hermitian matrix. The most natural observable operation was that of forming polynomials: if x was an observable, one could form an observable p(x) for any real polynomial p(t) with zero constant term; if one experimentally measured the value v of x in a given state, the value associated with p(x) would just be p(v). Breaking the operation of forming polynomials down into its basic ingredients, we have the operations of multiplication αx by a real scalar, addition x + y, and raising to a power xⁿ. By linearizing the quadratic squaring operation
x² we obtain a symmetric bilinear operation² x • y, to which Jordan gave the none-too-informative name quasi-multiplication:

    x • y := ½(xy + yx).

This is also frequently called the anticommutator (especially by physicists), but we will call it simply the Jordan product.
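As a concrete sanity check (our illustrative code, not the author's), one can verify that hermitian matrices are closed under the Jordan product even though they are not closed under the ordinary matrix product:

```python
import numpy as np

x = np.array([[1, 1j], [-1j, 0]])    # hermitian: x equals its conjugate transpose
y = np.array([[0, 1], [1, 2]])       # hermitian (real symmetric)

jordan = (x @ y + y @ x) / 2         # x . y = (xy + yx)/2
assert np.array_equal(jordan, jordan.conj().T)       # x . y is again hermitian
assert not np.array_equal(x @ y, (x @ y).conj().T)   # the plain product xy is not
```

Here x and y do not commute, which is exactly the situation in which xy fails to be hermitian.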
3.1. Digression on Linearization. We interrupt our chronological narration for an important announcement about the general process of linearization (often called polarization, especially in analysis when dealing with quadratic mappings on a complex space). This is an important technique in nonassociative algebras which we will encounter frequently in the rest of the book. Given a homogeneous polynomial p(x) of degree n, the process of linearization is designed to create a symmetric multilinear polynomial p′(x₁, …, xₙ) in n variables such that the original polynomial arises by specializing all the variables to the same value x: p′(x, …, x) = p(x). For example, the full linearization of the square x² = xx is ½(x₁x₂ + x₂x₁), and the full linearization of the cube x³ = xxx of degree 3 is ⅙(x₁x₂x₃ + x₁x₃x₂ + x₂x₁x₃ + x₂x₃x₁ + x₃x₁x₂ + x₃x₂x₁). In general, in order to recover the original p(x) of degree n we will have to divide by n!. This of course never bothers physicists, or workers in characteristic 0, but over general scalars this step is highly illegal. In many algebraic investigations the linearization process is important even in situations where we can't divide by n!, such as when the scalars are integers or a field of prime characteristic. Thus we will describe a bare-bones linearization that only recovers n!p(x), but provides crucial information even without division. Full linearization is usually achieved one step at a time by a series of partial linearizations, in which a polynomial homogeneous of degree n in a particular variable x is replaced by one of degree n−1 in x and linear in a new variable y. Intuitively, in an expression with n occurrences of x we simply replace each occurrence of x, one at a time, by a y, and add the results up [we would have to multiply by 1/n to insure that restoring y to x produces the original polynomial instead of n copies of it].
For example, in xx we can replace the first or the second occurrence of x by y, leading to yx or xy, so the first linearization is yx + xy, which is linear in x and y. In xxx we can replace the

² Many authors denote the product x·y or x ∘ y, but since these symbols are often used for run-of-the-mill products in linear algebras, we will use the bolder, more distinctive bullet x • y.
first, second, or third occurrence of x by a y, so the first linearization produces yxx + xyx + xxy, which is quadratic in x and linear in y. [Notice the inconspicuous term xyx buried inside the linearization of the cube; we now know that this little creature actually governs all of Jordan theory!] We repeat this process over and over, replacing a variable of degree n by a pair of variables, one of degree n−1 and one of degree 1. Once we are down to the case of a polynomial of degree 1 in each of its variables, we have the full linearization. In most situations we can't naively reach into p(x) and take the x's one-at-a-time: we often have no very explicit expression for p, and must describe linearization in a more intrinsic way. The clearest formulation is to take p(x + λy) for an indeterminate scalar λ and expand this out as a polynomial in λ:

    p(x + λy) = p(x) + λ p₁(x, y) + λ² p₂(x, y) + … + λⁿ p(y).

Here pᵢ(x, y) is homogeneous of degree n−i in x and i in y (intuitively, we obtain it by replacing i of the x's in p(x) by y's in all possible ways), and the linearization is just the coefficient p₁(x, y) of λ. If we are working over a field with at least n distinct elements, we don't have to drag in an indeterminate: we simply let λ run through n different scalars and use a Vandermonde method to solve a system of equations to pick out p₁(x, y). Setting y = x, we see by homogeneity that the coefficient of λⁱ in p(x + λx) = (1 + λ)ⁿ p(x) is pᵢ(x, x) = (n choose i) p(x), in particular p₁(x, x) = n p(x). A fancy way of getting the full linearization in one fell swoop would be to replace x by a formal linear combination λ₁x₁ + λ₂x₂ + … + λₙxₙ of n new variables xᵢ and n indeterminates λᵢ; then the full linearization p(x₁, …, xₙ) is precisely the coefficient of λ₁λ₂⋯λₙ in the expansion of p(λ₁x₁ + λ₂x₂ + … + λₙxₙ).
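These counts are easy to confirm by machine. The toy check below (ours, not the author's) works with 2×2 integer matrices, where xxx is the honest associative cube:

```python
import numpy as np
from itertools import permutations

x = np.array([[1, 2], [0, 3]])

# First linearization of the cube: replace one occurrence of x by y.
lin1 = lambda a, b: b @ a @ a + a @ b @ a + a @ a @ b
assert np.array_equal(lin1(x, x), 3 * (x @ x @ x))     # p1(x, x) = n p(x), n = 3

# Bare-bones full linearization: sum over all 3! orderings of (x, x, x);
# specializing all variables to x recovers n! p(x), not p(x).
full = sum(a @ b @ c for a, b, c in permutations([x, x, x]))
assert np.array_equal(full, 6 * (x @ x @ x))
```

No division ever takes place, exactly as in the bare-bones scheme described above.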
If we let the coefficient of λ₁^{i₁} ⋯ λₙ^{iₙ} be p_{i₁,…,iₙ}(x₁, …, xₙ), then setting all xᵢ = x shows that the coefficient of λ₁^{i₁} ⋯ λₙ^{iₙ} in p((Σᵢ λᵢ)x) = (Σᵢ λᵢ)ⁿ p(x) is p_{i₁,…,iₙ}(x, …, x) = (n choose i₁, …, iₙ) p(x) (a multinomial coefficient), in particular p_{1,…,1}(x, …, x) = n! p(x). The linearization process is very simple when applied to quadratic mappings q(x) of degree 2, which is the only case we need at this particular point in our story. Here we only need to linearize once, and the process takes place wholly within the original space (there is no need to drag in indeterminate scalars): we take the value on the sum x + y of two elements, and then subtract the pure x and y terms to obtain q(x, y) := q(x + y) − q(x) − q(y). Note that despite the fact that we will assume throughout the book that we have a scalar ½, we will never divide the expression q(x, y) by 2. Thus for us q(x, x) = q(2x) − q(x) − q(x) = 4q(x) − 2q(x) = 2q(x) does NOT recover the
original quadratic map. With a glimpse of quadratic Jordan algebras in our rear-view mirror, we will drive through the Jordan landscape avoiding the scalar ½ as far as possible, only calling upon it in our hour of need.
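The polarization of a quadratic map and the factor-of-2 caveat can be seen directly (our sketch, using the associative square on integer matrices as the quadratic map q):

```python
import numpy as np

def q(x):            # the quadratic map q(x) = x^2
    return x @ x

def q2(x, y):        # its polarization q(x, y) = q(x + y) - q(x) - q(y)
    return q(x + y) - q(x) - q(y)

x = np.array([[1, 2], [3, 4]])
y = np.array([[0, 1], [1, 0]])
assert np.array_equal(q2(x, y), x @ y + y @ x)   # the bilinear companion of q
assert np.array_equal(q2(x, x), 2 * q(x))        # 2 q(x), NOT q(x): no division by 2
```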
3.2. Biting the Bullet. Returning to the story of Jordan's empirical investigation of the algebraic properties of hermitian matrices: it seemed to him that all the products could be expressed in terms of the Jordan product x • y. For example, it was not hard to see that the powers could be defined from the Jordan product via x¹ = x, xⁿ⁺¹ = x • xⁿ, so the power maps could be derived from a single bilinear product. We saw that linearization of the cube involved a product xyx linear in y but quadratic in x; linearizing x to x, z leads to a trilinear product xyz + zyx (now known as the Jordan triple product); this too could be expressed in terms of the bilinear Jordan product: xyz + zyx = 2[x • (y • z) + z • (y • x) − (x • z) • y]. Of course, like the important dog in Sherlock Holmes who did NOT bark in the night, the important product which does NOT lead back to hermitian matrices is the associative product xy: the product of two hermitian matrices is again hermitian iff the two matrices commute. Thus in addition to its observable linear structure as a real vector space, the model carries a basic observable product, the Jordan product, out of which more complicated observable products such as powers or Jordan triple products can be built, but it does not carry an associative product. Thus Jordan settled on the Jordan product as the basic algebraic operation. We now realize that Jordan overlooked several other natural operations on hermitian elements. Note that if x, y, xᵢ are hermitian matrices or operators, so are the quadratic product xyx, the inverse x⁻¹, and the n-tad products {x₁, …, xₙ} := x₁⋯xₙ + xₙ⋯x₁. The quadratic product and inverse can be defined using the Jordan product, though this wasn't noticed for another 30 years, and 35 years later each of these was used (by McCrimmon and Springer) to provide an alternate axiomatic foundation on which to base the entire Jordan theory.
The n-tad for n = 2 is just twice the Jordan product, and we noted that the 3-tad or Jordan triple product could be expressed in terms of the Jordan product. On the other hand, the n-tads for n ≥ 4 CANNOT be expressed in terms of the Jordan product. In particular, the tetrads {x₁, x₂, x₃, x₄} := x₁x₂x₃x₄ + x₄x₃x₂x₁ were inadvertently excluded from Jordan theory. As we shall see, this oversight allowed two uninvited guests to join the theory, the spin factor and the Albert algebra, who were not closed under tetrads but who had influential friends and applications in many areas of mathematics.
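The triple-product formula quoted above is an identity of associative algebras, so it can be spot-checked on matrices (our code; jp below is the Jordan product):

```python
import numpy as np

def jp(a, b):
    return (a @ b + b @ a) / 2           # Jordan product a . b

rng = np.random.default_rng(1)
x, y, z = (rng.integers(-3, 4, size=(3, 3)) for _ in range(3))

lhs = x @ y @ z + z @ y @ x              # the Jordan triple product (3-tad)
rhs = 2 * (jp(x, jp(y, z)) + jp(z, jp(y, x)) - jp(jp(x, z), y))
assert np.allclose(lhs, rhs)
```

Expanding the right side term by term shows that the mixed words xzy, zxy, yxz, yzx all cancel, leaving exactly ½(xyz + zyx) before the factor 2.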
4. The Jordan Axioms The next step in the empirical investigation of the algebraic properties enjoyed by the model was to decide what crucial formal axioms or laws the operations on hermitian matrices obey.³ As far as its linear structure went, the operations of addition and scaling by a real number must of course satisfy the familiar vector-space rules. But what conditions to impose on the multiplicative structure was much less clear. The most obvious rule for the operation of forming polynomials in an observable was the rule that if r(t) = p(q(t)) is the composite of the polynomials p, q then for all observables x we have r(x) = p(q(x)). If we write the powers occurring in the polynomials in terms of the Jordan product, this composition rule is equivalent to power-associativity xⁿ • xᵐ = xⁿ⁺ᵐ (power associative law). Jordan discovered an important law or identity (in the sense of identical relation satisfied by all elements) of degree four in two variables satisfied by the Jordan product: x² • (y • x) = (x² • y) • x (which we now call the Jordan identity). For example, fourth-power associativity x² • x² = x⁴ (:= x³ • x) follows immediately from this Jordan identity by setting y = x, and some non-trivial fiddling shows the Jordan identity is strong enough to imply associativity of all powers. Thus Jordan thought he had found the key law governing the Jordan product, besides its obvious commutativity. The other crucial property of the Jordan product on hermitian matrices is its "positive-definiteness". Recall that a symmetric bilinear form b on a real or complex vector space is called positive definite if b(x, x) > 0 for all x ≠ 0 (of course, this does not mean b(x, y) ≥ 0 for all x, y!!). Notice that for an n×n complex hermitian matrix X = (xᵢⱼ) (xⱼᵢ = x̄ᵢⱼ), the square has as ith diagonal entry Σⱼ₌₁ⁿ xᵢⱼxⱼᵢ = Σⱼ₌₁ⁿ xᵢⱼx̄ᵢⱼ = Σⱼ₌₁ⁿ |xᵢⱼ|², so the trace bilinear form b(X, Y) := tr(XY) has b(X, X) = tr(X²) = Σᵢ₌₁ⁿ Σⱼ₌₁ⁿ |xᵢⱼ|² > 0 unless all xᵢⱼ = 0, i.e.
unless X = 0. This positivity of squares involves the trace bilinear form, which is not part of the axiomatic framework considered by Jordan, but it has purely algebraic consequences for the Jordan product: a sum of squares can never vanish, since if Σₖ₌₁ʳ Xₖ² = 0 then taking traces gives Σₖ₌₁ʳ tr(Xₖ²) = 0, which forces each of the positive quantities tr(Xₖ²) to vanish individually, and therefore each Xₖ = 0.

³ Notice the typical terms algebraists use to describe their creations: we say an algebra enjoys a property or obeys a law, as if it never had secret longings to be associative or to lead a revolt against the Jordan identity.

This "formal-reality" property was familiar to Jordan from the recent Artin-Schreier theory of formally-real fields. It was known from the Wedderburn theory of finite-dimensional associative algebras that once you removed the radical (the largest ideal consisting of nilpotent elements) you obtained a nice semisimple algebra which was a direct sum of simple pieces (all of the form Mₘ(Δ) consisting of all m×m matrices over a division ring Δ). Formal-reality immediately guarantees there are no nilpotent elements whatsoever, providing instant semisimplicity: if there are nonzero nilpotent elements of index n > 1 (xⁿ = 0 ≠ xⁿ⁻¹) then there are also elements of index 2 (y² = 0 ≠ y for y = xⁿ⁻¹), and formal-reality for sums of length r = 2 shows squares never vanish: x² = 0 ⟹ x² + x² = 0 ⟹ x = 0. After a little empirical experimentation, it seemed to Jordan that all other laws satisfied by the Jordan product were consequences of commutativity, the Jordan identity, and positivity or formal-reality. The outcome of all this experimentation was a distillation of the algebraic essence of quantum mechanics into an axiomatic definition of a new algebraic system:

Jordan Definition. A real Jordan algebra J = (V, p) consists of a real vector space V equipped with a bilinear product p: V × V → V (usually abbreviated p(x, y) = x • y) satisfying the commutative law and the Jordan identity:
    (JAx1) x • y = y • x (commutative law);
    (JAx2) (x² • y) • x = x² • (y • x) (Jordan identity).
A Jordan algebra is called Euclidean (or formally-real) if it satisfies the formal-reality axiom
    (JAxE) x₁² + … + xₙ² = 0 ⟹ x₁ = … = xₙ = 0.

Jordan originally called these r-number algebras; the term "Jordan algebra" was first used by A.A. Albert in 1946, and caught on immediately.
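Both axioms are easy to confirm numerically for the Jordan product on matrices (our sketch; random samples only illustrate, and do not prove, that the Jordan identity holds in every associative algebra):

```python
import numpy as np

def jp(a, b):
    return (a @ b + b @ a) / 2                 # x . y = (xy + yx)/2

rng = np.random.default_rng(2)
for _ in range(100):
    x, y = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
    x2 = jp(x, x)                              # x^2 = x . x
    assert np.allclose(jp(x, y), jp(y, x))                   # (JAx1)
    assert np.allclose(jp(jp(x2, y), x), jp(x2, jp(y, x)))   # (JAx2)
```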
We now know that Jordan was wrong in thinking that his axioms captured all the algebraic properties of hermitian matrices: he missed some laws which cannot be derived just from the above rules, hence he overlooked some algebraic properties of hermitian matrices which aren't captured in the Jordan axioms. The first and smallest of these so-called "s-identities" is Glennie's Identity G₈ of degree 8 in 3 variables, discovered in 1963, so Jordan may perhaps be excused for not noticing it. In 1987 Thedy discovered a more transparent s-identity
T₁₀, even though it was an operator identity of degree 10 in 3 variables. These overlooked identities will come to play a vital role later in our story. By failing to pass legislation demanding that these s-identities be satisfied, the exceptional Albert algebra once more squeezed through a gap in Jordan's axioms: not only did the Albert algebra not carry a tetrad operation like hermitian matrices do, but even with respect to its Jordan product it was distinguishable from hermitian matrices by its refusal to obey the s-identities. As we saw in the Survey, a large part of the richness of Jordan theory is due to its exceptional algebras (with their connections to exceptional Lie algebras and exceptional symmetric domains), and much of the power of Jordan theory is its ability to handle these exceptional objects and hermitian objects in one algebraic framework.
5. The Basic Examples

Let us give the main examples of these newly-christened Jordan algebras.

Full Example. Any associative algebra A over ℝ can be converted into a Jordan algebra J(A), known familiarly as A⁺, by forgetting the associative structure but retaining the Jordan structure: the linear space A, equipped with the product x • y := ½(xy + yx), is commutative as in (JAx1) and satisfies the Jordan identity (JAx2). In the Survey we went through the verification of the Jordan identity (JAx2) in any A⁺; however, these Jordan algebras are almost never Euclidean as in (JAxE). For example, for the algebra A = Mₙ(D) of n × n matrices over an associative ring D, the algebra A⁺ = Mₙ(D)⁺ is never Euclidean if n > 1, since the matrix units Eᵢⱼ for i ≠ j square to zero. However, we have seen that even if the full matrix algebra is not Euclidean, the symmetric matrices over the reals or the hermitian matrices over the complexes are Euclidean.

5.1. Hermitian algebras. Any subspace of A which is closed under the Jordan product will continue to satisfy the axioms (JAx1) and (JAx2), and hence provide a Jordan algebra. The most natural (and historically most important) method of selecting out a Jordan-closed subspace is by means of an involution ∗, a linear anti-isomorphism of an associative algebra of period 2:

Hermitian Example. If ∗ is an involution on A, then the space H(A, ∗) of hermitian elements x* = x is closed under symmetric products, but not in general under the noncommutative product xy: H(A, ∗) is a Jordan subalgebra of A⁺, but not an associative subalgebra of A.
Indeed, if x and y are hermitian we have (xy)* = y*x* (by definition of anti-isomorphism) = yx (by definition of x, y being hermitian), and dually (yx)* = xy, so xy is not hermitian unless x and y happen to commute; but (x • y)* = (½(xy + yx))* = ½((xy)* + (yx)*) = ½(yx + xy) = x • y is always hermitian. A particularly important case is that where A = Mₙ(D) consists of all n × n matrices X = (xᵢⱼ) over a coordinate algebra (D, ¯), i.e. a unital algebra together with an involution d → d̄. In this case the adjoint or conjugate transpose X* := X̄ᵗʳ (with ij-entry x̄ⱼᵢ) is an involution on Mₙ(D).
Jordan Matrix Example. If A = Mₙ(D) is the algebra of all n × n matrices over an associative coordinate algebra D, then the algebra H(A, ∗) of hermitian elements under the conjugate transpose involution is the Jordan matrix algebra Hₙ(D, ¯) consisting of all n × n matrices X whose entries satisfy xⱼᵢ = x̄ᵢⱼ. When the involution on D is understood, we will simply denote this by Hₙ(D). Such a Jordan matrix algebra will be Euclidean as in (JAxE) if the coordinate ∗-algebra D is a Euclidean ∗-algebra in the sense that Σᵣ x̄ᵣxᵣ = 0 ⟹ all xᵣ = 0. In particular, the hermitian matrices Hₙ(ℝ), Hₙ(ℂ), Hₙ(ℍ) with entries from the reals ℝ, complexes ℂ, or quaternions ℍ with their usual involutions all have this positivity.
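For Hₙ(ℂ) the closure claims above are easy to watch numerically. A small sketch (my own, assuming numpy; not from the text) checks that random hermitian matrices stay hermitian under the bullet product but generically not under the ordinary product:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_hermitian(n):
    a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (a + a.conj().T) / 2

def is_hermitian(m):
    return bool(np.allclose(m, m.conj().T))

x, y = random_hermitian(3), random_hermitian(3)
xy = x @ y                     # ordinary associative product
bullet = (x @ y + y @ x) / 2   # Jordan product x . y

print(is_hermitian(xy), is_hermitian(bullet))  # False True (x, y generically don't commute)
```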
5.2. Spin factors. The first non-hermitian Jordan algebras were the spin factors discovered by Max Zorn, who noticed that we get a Jordan algebra from ℝⁿ⁺¹ = ℝ1 ⊕ ℝⁿ if we select the elements in ℝ1 to act as "scalars," and the elements of ℝⁿ to act as "vectors" with bullet product a scalar multiple of 1 given by the dot or inner product. We will use the term inner product instead of dot product, both to avoid confusion with the bullet and for future generalization.

Spin Factor Example. If ⟨·, ·⟩ denotes the usual Euclidean inner product on ℝⁿ, then JSpinₙ = ℝ1 ⊕ ℝⁿ becomes a Euclidean Jordan algebra if we define 1 to act as unit and the bullet product of vectors v⃗, w⃗ in ℝⁿ to be given by the inner product

v⃗ • w⃗ := ⟨v⃗, w⃗⟩1.
Indeed, commutativity (JAx1) comes from symmetry of the inner product, the Jordan identity (JAx2) comes from the fact that x² is a linear combination of 1 and x, and formal-reality (JAxE) comes from

Σᵢ (αᵢ1 ⊕ v⃗ᵢ)² = Σᵢ (αᵢ² + ⟨v⃗ᵢ, v⃗ᵢ⟩) 1 ⊕ 2 Σᵢ αᵢv⃗ᵢ,

where the coefficient of 1 can only vanish if all αᵢ and all v⃗ᵢ are 0, by positive definiteness of the inner product.

This algebra has come to be called a Jordan spin algebra or spin factor, for reasons connected with mathematical physics. The spin group Spin(n) is a simply connected universal covering group for the group of rotations on n-space (the special orthogonal group SO(n)). The spin group has a mathematical realization as certain invertible elements of the Clifford algebra, an associative algebra generated by elements vᵢ (corresponding to an orthonormal basis for n-space) with defining relations

vᵢ² = 1, vᵢvⱼ + vⱼvᵢ = 0 (i ≠ j).

Any such system of "orthogonal symmetries" vᵢ in an associative algebra is called a spin system. The linear span of 1 and the vᵢ does not form an associative algebra (the Clifford algebra has dimension 2ⁿ, not n + 1), but it does form a Jordan algebra with square

(α1 + Σᵢ βᵢvᵢ)² = (α² + Σᵢ βᵢ²) 1 + 2α (Σᵢ βᵢvᵢ),

which is precisely the square in JSpinₙ. Notice that the trace form t(α1 ⊕ v⃗) := α leads to a trace bilinear form b(x, y) := t(x • y) with b(x, x) = α² + ⟨v⃗, v⃗⟩ > 0 just as for hermitian matrices, and any time we have such a positive definite bilinear form built out of multiplications we automatically have formal-reality.

Another way to see that this algebra is Euclidean is to note that you can (if you're careful!) imbed it in the hermitian 2ⁿ × 2ⁿ real matrices: let W be the real inner product space with 2ⁿ orthonormal basis vectors e_I parametrized by the distinct n-tuples I = (ε₁, …, εₙ) for εₖ = ±1. Define linear transformations vₖ, k = 1, 2, …, n, to act on the basis vectors by vₖ(e_I) := ε₁ ⋯ εₖ₋₁ e_{Iₖ}, scaling them by a factor ±1 which is the product of the first k − 1 ε's, and switching the k-th index of the n-tuple to its negative: Iₖ := (ε₁, …, −εₖ, …, εₙ).
Then by careful calculation we can verify that (1) the vₖ are self-adjoint with respect to the inner product, since ⟨vₖ(e_I), e_J⟩ = ⟨e_I, vₖ(e_J)⟩ are both zero unless I and J are obtained from each other by negating the k-th index, in which case the inner products are both the factor ε₁ ⋯ εₖ₋₁, and (2) they have products vₖ² = 1_W, vₖ • vⱼ = 0 for j ≠ k. Thus the n "orthogonal symmetries" vₖ form a spin system in End(W), so the (n + 1)-dimensional real subspace spanned by 1_W and the orthogonal symmetries vₖ forms a Jordan algebra with the same multiplication table as JSpinₙ. Thus we may regard the algebra JSpinₙ as some sort of "thin" Jordan subalgebra of a full algebra of hermitian matrices.
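The "careful calculation" can be delegated to a machine. This sketch (my own numpy illustration, with hypothetical helper names) builds the 2ⁿ × 2ⁿ matrices vₖ exactly as defined above and confirms they form a spin system:

```python
import itertools
import numpy as np

n = 3                                                  # any n works; the space has dim 2**n
tuples = list(itertools.product([1, -1], repeat=n))    # the n-tuples I = (e_1, ..., e_n)
index = {t: i for i, t in enumerate(tuples)}
N = len(tuples)                                        # dim W = 2**n

def v(k):
    """v_k sends e_I to (e_1 ... e_{k-1}) e_{I_k}, where I_k flips the k-th sign."""
    m = np.zeros((N, N))
    for I in tuples:
        sign = int(np.prod(I[:k - 1]))                 # product of the first k-1 signs
        Ik = I[:k - 1] + (-I[k - 1],) + I[k:]
        m[index[Ik], index[I]] = sign
    return m

vs = [v(k) for k in range(1, n + 1)]

sym = all(np.allclose(vk, vk.T) for vk in vs)                       # (1) self-adjoint
squares = all(np.allclose(vk @ vk, np.eye(N)) for vk in vs)         # (2) v_k^2 = 1_W
anti = all(np.allclose(vs[i] @ vs[j] + vs[j] @ vs[i], 0)
           for i in range(n) for j in range(n) if i != j)           # (2) v_j . v_k = 0
print(sym, squares, anti)  # True True True
```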
6. Special and Exceptional

Recall that the whole point of Jordan's investigations was to discover Jordan algebras (in the above sense) which did not result simply from the Jordan product in associative algebras. We call a Jordan algebra special if it comes from the Jordan product in an associative algebra, otherwise it is exceptional. In a special Jordan algebra the algebraic structure is derived from an ambient associative product xy.

Special Definition. A Jordan algebra is special if it can be linearly imbedded in an associative algebra so that the product becomes the Jordan product ½(xy + yx), i.e. if it is isomorphic to a Jordan subalgebra of some A⁺; otherwise it is exceptional.

One would expect a variety where every algebra is either special or exceptional to have its roots in Lake Wobegon! The examples H(A, ∗) are clearly special, living inside the associative algebra A. In particular, matrix algebras Hₙ(D, ¯) are special, living inside A = Mₙ(D). JSpinₙ is also special, since we just noted that it could be imbedded in suitably large hermitian matrices.
7. Classification

Having settled, he thought, on the basic axioms for his systems, Jordan set about trying to classify them. The algebraic setting for quantum mechanics would have to be infinite-dimensional, of course, but since even for associative algebras the study of infinite-dimensional algebras was in its infancy, there seemed no hope of obtaining a complete classification of infinite-dimensional Jordan algebras. Instead, it seemed reasonable to study first the finite-dimensional algebras, hoping to find families of simple exceptional algebras Eₙ parametrized by natural numbers n (e.g. n × n matrices), so that by letting n go to infinity a suitable home could be found for quantum mechanics. The purely algebraic aspects were too much for Jordan to handle alone, so he called in the mathematical physicist Eugene Wigner and the mathematician John von Neumann. In their fundamental 1934 paper the J–vN–W triumvirate showed that in finite dimensions the only simple building blocks were the usual hermitian matrices and the algebra JSpinₙ, except for one small algebra of 3 × 3 matrices whose coordinates come from the nonassociative 8-dimensional algebra K of Cayley's octonions.

Jordan–von Neumann–Wigner Theorem. Every finite-dimensional formally-real Jordan algebra is a direct sum of a finite number of simple ideals,
and there are 5 basic types of simple building blocks: 4 types of hermitian matrix algebras corresponding to the 4 composition division algebras ℝ, ℂ, ℍ, K over the reals, together with the spin factors. Every finite-dimensional simple formally-real Jordan algebra is isomorphic to one of

Hₙ¹ = Hₙ(ℝ), Hₙ² = Hₙ(ℂ), Hₙ⁴ = Hₙ(ℍ), H₃⁸ = H₃(K), JSpinₙ.
The notation was chosen so that Hₙᵏ (called Mₙᵏ in the original paper) denoted the n × n hermitian matrices over the k-dimensional real composition division algebra. ℝ and ℂ are old friends, of course, and Hamilton's quaternions ℍ (with basis 1, i, j, k) at least a nodding acquaintance; Kayley's octonions K = ℍ ⊕ ℍℓ may be a stranger: they have basis {1, i, j, k} ∪ {ℓ, iℓ, jℓ, kℓ} with nonassociative product

h₁(h₂ℓ) = (h₂ℓ)h̄₁ = (h₂h₁)ℓ, (h₁ℓ)(h₂ℓ) = −h̄₂h₁,

and involution (hℓ)¯ = −hℓ in terms of the new basic unit ℓ and old elements h, h₁, h₂ ∈ ℍ. All four of these carry a positive definite quadratic form Q(Σᵢ αᵢxᵢ) = Σᵢ αᵢ² (relative to the indicated bases {xᵢ}) which "admits composition" in the sense that Q of the product is the product of the Q's, Q(xy) = Q(x)Q(y), and for that reason they are called composition algebras. A celebrated theorem of Hurwitz asserts that the only possible composition algebras over any field are the field itself (dimension 1), a quadratic extension (dimension 2), a quaternion algebra (dimension 4), or an octonion algebra (dimension 8).

By the Jordan Matrix Example we see that the first 3 basic examples Hₙᵏ for k = 1, 2, 4 are special, living inside the full associative algebras Mₙ(ℝ), Mₙ(ℂ), Mₙ(ℍ) of n × n matrices, and after the Spin Factor Example we noted that the 5th example JSpinₙ lived inside large hermitian matrices. On the other hand, the 4th example H₃⁸ = H₃(K) did not seem to be special, since its coordinates came from the non-associative algebra K, but the authors were unable to prove its exceptionality (and could only prove with difficulty that it satisfied the Jordan identity). They turned to a bright young algebraist A.A. Albert, who showed that it was indeed exceptional.

Albert's Exceptional Theorem. The algebra H₃(K) is an exceptional Jordan algebra of dimension 27.
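The doubling K = ℍ ⊕ ℍℓ is an instance of the Cayley–Dickson construction, which a few recursive lines can iterate from ℝ up through ℂ and ℍ to the octonions. This sketch (my own, assuming numpy; helper names hypothetical) spot-checks the composition law Q(xy) = Q(x)Q(y) and the failure of associativity in dimension 8:

```python
import numpy as np

def conj(x):
    """Cayley-Dickson conjugation: negate every coordinate except the real part."""
    c = -x
    c[0] = x[0]
    return c

def mul(x, y):
    """Cayley-Dickson product (a,b)(c,d) = (ac - conj(d)b, da + b conj(c))."""
    n = len(x)
    if n == 1:
        return x * y
    a, b, c, d = x[:n // 2], x[n // 2:], y[:n // 2], y[n // 2:]
    return np.concatenate([mul(a, c) - mul(conj(d), b),
                           mul(d, a) + mul(b, conj(c))])

def Q(x):
    """The positive definite norm form Q(sum a_i x_i) = sum a_i^2."""
    return float(np.dot(x, x))

rng = np.random.default_rng(2)
x, y, z = (rng.standard_normal(8) for _ in range(3))   # three random octonions

composition = bool(np.isclose(Q(mul(x, y)), Q(x) * Q(y)))              # Q(xy) = Q(x)Q(y)
associative = bool(np.allclose(mul(mul(x, y), z), mul(x, mul(y, z))))  # fails in dim 8
print(composition, associative)  # True False
```

Iterating once more to dimension 16 (the sedenions) destroys the composition property, in line with Hurwitz's theorem.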
It is easy to see that H₃(K) has dimension 27: in a typical element

⎛ α₁₁  a₁₂  a₁₃ ⎞
⎜ ā₁₂  α₂₂  a₂₃ ⎟
⎝ ā₁₃  ā₂₃  α₃₃ ⎠,

each of the 3 independent diagonal entries αᵢᵢ comes from the 1-dimensional space ℝ1 of symmetric octonions, and each of the 3 independent upper diagonal entries aᵢⱼ comes from the 8-dimensional
space of all octonions [the sub-diagonal entries are then completely determined], leading to 1 + 1 + 1 + 8 + 8 + 8 = 27 independent parameters. In view of Albert's proof of exceptionality, and his later construction of Jordan division algebras which are forms of H₃(K), these 27-dimensional exceptional algebras are now known as Albert algebras; in particular, H₃(K) itself is denoted by 𝔸.

As we noted in the Survey, these results were deeply disappointing to physicists, since the lone exceptional algebra H₃(K) was too tiny to provide a home for quantum mechanics, and too isolated to give a clue as to the possible existence of infinite-dimensional exceptional algebras. It was still possible that infinite-dimensional exceptional algebras existed, since there were well-known associative phenomena that appear only in infinite dimensions: in quantum mechanics, the existence of operators p, q on Hilbert space with [p, q] = ℏ1 (ℏ = Planck's constant) is possible only in infinite dimensions (in finite dimensions the commutator matrix [p, q] would have trace 0, hence could not be a nonzero multiple λ1ₙ of the identity matrix 1ₙ, since the trace of 1ₙ is n ≠ 0). So there remained a faint hope that there might still be an exceptional home for quantum mechanics somewhere.
CHAPTER 2
Jordan Algebras in the Algebraic Renaissance

Finite-dimensional Jordan algebras over algebraically closed fields

The next stage in the history of Jordan algebras was taken over by algebraists. While the physicists lost interest in the search for an exceptional setting for quantum mechanics (the philosophical objections to the theory paling in comparison to its amazing achievements), the algebraists found unsuspected relations between, on the one hand, the strange exceptional simple Albert algebra of dimension 27 and, on the other hand, the 5 equally-strange exceptional simple Lie groups and algebras of types G₂, F₄, E₆, E₇, E₈ of dimensions 14, 52, 78, 133, 248. While these had been discovered by Elie Cartan in the nineteenth century, they were known only through their multiplication tables: there was no concrete representation for them (the way there was for the 4 great classes Aₙ, Bₙ, Cₙ, Dₙ). During the 1930's Jacobson discovered that the Lie group G₂ could be realized as the automorphism group (and the Lie algebra G₂ as the derivation algebra) of a Cayley algebra, and in the early 1950's Chevalley, Schafer, Freudenthal and others discovered that the Lie group F₄ could be realized as the automorphism group (and the Lie algebra F₄ as the derivation algebra) of the Albert algebra, the group E₆ could be realized as the isotopy group (and the algebra E₆ as the structure algebra) of the Albert algebra, and the algebra E₇ could be realized as the superstructure Lie algebra of the Albert algebra. [E₈ was connected to the Albert algebra in a more complicated manner.] These unexpected connections between the physicists' orphan child and other important areas of mathematics spurred algebraists to consider Jordan algebras over more general fields. By the late 1940's the J–vN–W structure theory had been extended by A.A. Albert, F. and N.
Jacobson, and others to finite-dimensional Jordan algebras over an arbitrary algebraically closed field of characteristic not 2, with essentially the same cast of characters appearing in the title roles.
1. Linear Algebras over General Scalars

We begin our algebraic history by recalling the basic categorical concepts for general nonassociative algebras over an arbitrary ring of scalars Φ. When dealing with Jordan algebras we will have to assume ½ ∈ Φ, and we will point this out explicitly. An algebra is simultaneously a ring and a module over a ring of scalars, such that the ring multiplication interacts correctly with the linear structure.

Linear Algebra Definition. A ring of scalars is a unital commutative associative ring Φ. A (nonassociative) linear algebra over Φ (or Φ-algebra, for short) is a Φ-module A equipped with a Φ-bilinear product A × A → A (abbreviated by juxtaposition (x, y) ↦ xy). Bilinearity just means the product satisfies the left and right distributive laws

x(y + z) = xy + xz, (y + z)x = yx + zx,

and that scalars flit in and out of products,

(αx)y = x(αy) = α(xy),

for all elements x, y, z in A and scalars α in Φ. The algebra is unital if there exists a (two-sided) unit element 1 satisfying 1x = x1 = x for all x.
Notice that we do not require associativity of the product nor existence of a unit element in A (though we always demand a unit scalar in Φ). Lack of a unit is easy to repair: we can always enlarge a linear algebra slightly to get a unital algebra.

Unital Hull Definition. A unital hull of a linear Φ-algebra A is an extension A¹ = Φ1 + A ⊇ A obtained by adjoining a unit element. If A is already unital, then A¹ = A is already a unital hull. Any linear algebra is imbedded in its formal unital hull

Â := Φ1 ⊕ A, (α1 ⊕ x)(β1 ⊕ y) := αβ1 ⊕ (αy + βx + xy).

A is always an ideal in any hull A¹, since multiplication by the new elements α1 is just scalar multiplication; this means that we can often conveniently formulate results inside A making use of a unital hull. For example, in an associative algebra the left ideal Ax + Φx generated by an element x can be written succinctly as A¹x (the left ideal Ax needn't contain x if A is not already unital).
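A minimal sketch of the formal unital hull (my own illustration, with pairs (α, x) standing for α1 ⊕ x, a toy non-unital algebra of strictly upper-triangular matrices, and hypothetical helper names):

```python
import numpy as np

def hull_mul(u, v):
    """(a1 + x)(b1 + y) := ab 1 + (ay + bx + xy); elements are pairs (scalar, matrix)."""
    alpha, x = u
    beta, y = v
    return (alpha * beta, alpha * y + beta * x + x @ y)

# A itself: strictly upper-triangular 2x2 real matrices, a non-unital algebra
x = np.array([[0.0, 2.0], [0.0, 0.0]])
y = np.array([[0.0, 5.0], [0.0, 0.0]])

one = (1.0, np.zeros((2, 2)))          # the adjoined unit 1 = 1*1 + 0
u = (0.0, x)                           # A sits inside the hull as the pairs (0, x)

left = hull_mul(one, u)                # the unit acts as identity: (0, x) back again
prod = hull_mul((3.0, x), (0.0, y))    # hull times A lands back in A (scalar part 0)
print(np.allclose(left[1], x), left[0], prod[0])  # True 0.0 0.0
```

The last line illustrates why A is an ideal in A¹: multiplying anything in the hull against a pair with zero scalar part can never reintroduce a scalar part.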
2. Categorical Nonsense

We have the usual notions of morphisms, sub-objects, and quotients for linear algebras.

Morphism Definition. A homomorphism φ : A → A′ is a linear map of Φ-modules which preserves multiplication,

φ(xy) = φ(x)φ(y);

an anti-homomorphism is a linear map which reverses multiplication,

φ(xy) = φ(y)φ(x).

An isomorphism is a bijective homomorphism; we say A is isomorphic to A′, or A and A′ are isomorphic (denoted A ≅ A′), if there is an isomorphism of A onto A′. An automorphism is an isomorphism of an algebra with itself. We have corresponding notions of anti-isomorphism and anti-automorphism for anti-homomorphisms.
Involution Definition. An involution is an anti-automorphism φ of period 2,

φ(xy) = φ(y)φ(x) and φ(φ(x)) = x.

We will often be concerned with involutions, since they are a rich source of Jordan algebras. The natural notion of morphism in the category of ∗-algebras (algebras together with a choice of involution) is that of ∗-homomorphism (A, ∗) → (A′, ∗′), which is a homomorphism φ : A → A′ of algebras which preserves the involutions, φ ∘ ∗ = ∗′ ∘ φ (i.e. φ(x*) = φ(x)*′ for all x ∈ A).
The most important scalar involution is the standard involution on a quaternion or octonion algebra.

Ideal Definition. A subalgebra B ⊆ A of a linear algebra A is a Φ-subspace closed under multiplication: BB ⊆ B. A left ideal B ◁ₗ A (resp. right ideal B ◁ᵣ A) of A is a Φ-subspace closed under left (resp. right) multiplication by A: AB ⊆ B (resp. BA ⊆ B). An ideal B ◁ A of A is a two-sided ideal, closed under both left and right multiplication by A: AB + BA ⊆ B. If A has an involution, a ∗-ideal is an ideal invariant under the involution: B ◁ A and B* ⊆ B. We will always use 𝟎 to denote the zero subspace, while ordinary 0 will denote the zero scalar, vector, or transformation (context will decide which is meant).
Quotient Definition. Any ideal B ◁ A is the kernel of the canonical homomorphism π : x → x̄ of A onto the quotient algebra Ā = A/B, consisting of all cosets x̄ := [x]_B := x + B with the induced operations αx̄ := (αx)¯, x̄ + ȳ := (x + y)¯, x̄ȳ := (xy)¯. The quotient A/B of a ∗-algebra by a ∗-ideal is again a ∗-algebra under the induced involution (x̄)* := (x*)¯.
We have the usual tripartite theorem relating homomorphisms and quotients.

Fundamental Theorem of Homomorphisms. For homomorphisms (and similarly for ∗-homomorphisms) we have:

(I) If φ : A → A′ is a homomorphism, then ker(φ) ◁ A, im(φ) ⊆ A′, and A/ker(φ) ≅ im(φ) under the map φ̄(x̄) := φ(x).

(II) There is a 1–1 correspondence between the ideals (resp. subalgebras) C̄ of the quotient Ā = A/B and those C of A which contain B, given by C → π(C) and C̄ → π⁻¹(C̄); for ideals we have Ā/C̄ ≅ A/C.

(III) If B ◁ A, C ⊆ A, then C/(C ∩ B) ≅ (C + B)/B under the map φ([x]_{C∩B}) = [x]_B.
As usual, we have a way of gluing different algebras together as a direct sum in such a way that the individual pieces don't interfere with each other.

Direct Sum Definition. The direct sum A₁ ⊞ … ⊞ Aₙ of a finite number of algebras is the cartesian product A₁ × … × Aₙ under the componentwise operations

α(x₁, …, xₙ) := (αx₁, …, αxₙ),
(x₁, …, xₙ) + (y₁, …, yₙ) := (x₁ + y₁, …, xₙ + yₙ),
(x₁, …, xₙ)(y₁, …, yₙ) := (x₁y₁, …, xₙyₙ).

[We will consistently write an algebra direct sum with ⊞, and a mere module direct sum with ⊕.] The direct product ∏_I Aᵢ of an arbitrary family of algebraic systems Aᵢ indexed by a set I is the cartesian product ⨉_I Aᵢ under the componentwise operations. We may think of this as all "strings" a = ∏ xᵢ or "I-tuples" a = (… xᵢ …) of elements, one from each family member Aᵢ, or more rigorously as all maps x : I → ∪ᵢ Aᵢ such that x(i) ∈ Aᵢ for each i, under the pointwise operations. The direct sum ⊞_I Aᵢ is the subalgebra of all tuples with only a finite number of nonzero entries (so each can be represented as a finite sum xᵢ₁ + … + xᵢₙ of elements
xⱼ ∈ Aⱼ).¹ For each i ∈ I we have canonical projections πᵢ of both the direct product and the direct sum onto the i-th component Aᵢ.

An algebra A is a subdirect product of algebras Aᵢ (more often and more inaccurately called a semidirect sum) if there is (1) a monomorphism φ : A → ∏_I Aᵢ such that (2) for each i ∈ I the canonical projection maps onto all of Aᵢ: πᵢ(φ(A)) = Aᵢ. By the Fundamental Theorem, (2) is equivalent to Aᵢ ≅ A/Kᵢ for an ideal Kᵢ ◁ A, and (1) is equivalent to ∩_I Kᵢ = 𝟎. Thus a subdirect product decomposition of A is essentially the same as a "disjoint" family of ideals.

Infinite direct products are complicated objects, more topological or combinatorial than algebraic. Direct sums are the most useful (but rarest) "rigid" decompositions, and are the goal of many structure theories. Subdirect product decompositions are fairly "loose," but are crucial in the study of radicals. In a philosophical sense, an algebra A can be recovered from an ideal B and its quotient A/B. The basic building blocks are those algebras which cannot be built up from smaller pieces, i.e. have no smaller ingredients B:

Simple Definition. An algebra is simple (resp. ∗-simple) if it has no proper ideals (resp. proper ∗-ideals) and is not trivial, AA ≠ 𝟎. A subspace B is proper if it is not zero or the whole space, B ≠ 𝟎, A. An algebra is semisimple if it is a finite direct sum of simple algebras.
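A familiar concrete instance of these notions (my own illustration, not from the text): ℤ/6 is the algebra direct sum of the simple algebras ℤ/2 and ℤ/3, exhibited via φ(x) = (x mod 2, x mod 3), which is a subdirect product in which conditions (1) and (2) both hold:

```python
# phi : Z/6 -> Z/2 x Z/3, the Chinese-remainder decomposition
phi = lambda x: (x % 2, x % 3)

injective = len({phi(x) for x in range(6)}) == 6                     # (1) monomorphism
onto_each = ({phi(x)[0] for x in range(6)} == {0, 1} and
             {phi(x)[1] for x in range(6)} == {0, 1, 2})             # (2) projections onto

# phi matches the componentwise operations of the direct sum
add_ok = all(phi((x + y) % 6) == ((phi(x)[0] + phi(y)[0]) % 2,
                                  (phi(x)[1] + phi(y)[1]) % 3)
             for x in range(6) for y in range(6))
mul_ok = all(phi((x * y) % 6) == ((phi(x)[0] * phi(y)[0]) % 2,
                                  (phi(x)[1] * phi(y)[1]) % 3)
             for x in range(6) for y in range(6))
print(injective, onto_each, add_ok, mul_ok)  # True True True True
```

Here the two kernels are the ideals of multiples of 2 and of 3, whose intersection is 𝟎, exactly as in the discussion above; since both pieces are simple, ℤ/6 is semisimple.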
3. Commutators and Associators

We can reformulate the algebra conditions in terms of the left and right multiplication operators L_x and R_x by the element x, defined by

L_x(y) := xy =: R_y(x).

Bilinearity of the product just means that the map L : x → L_x (or equivalently the map R : y → R_y) is a linear mapping from the Φ-module A into the Φ-module End_Φ(A) of Φ-linear transformations on A. The product (and the algebra) is commutative if xy = yx for all x, y, and skew if xy = −yx for all x, y; in terms of operators, commutativity means L_x = R_x for all x, skewness means L_x = −R_x for all x, so in either case we can dispense with the right multiplications
¹The direct product and sum are often called the product ∏ᵢ Aᵢ and coproduct ∐ᵢ Aᵢ; especially in algebraic topology, these appear as "dual" objects. For finite index sets, the two concepts coincide.
and work only with the L_x. In working with the commutative law, it is convenient to introduce the commutator

[x, y] := xy − yx,

which measures how far two elements are from commuting: x and y commute iff their commutator is zero. In these terms the commutative law is [x, y] = 0, so an algebra is commutative iff all commutators vanish.²

The product (and the algebra) is associative if (xy)z = x(yz) for all x, y, z, in which case we drop all parentheses and write the product as xyz. We can interpret associativity in 3 ways as an operator identity, depending on which of x, y, z we treat as the variable: on z it says L_{xy} = L_x L_y, i.e. that L is a homomorphism of A into End_Φ(A); on x it says R_z R_y = R_{yz}, i.e. that R is an anti-homomorphism; on y it says R_z L_x = L_x R_z, i.e. that all left multiplications L_x commute with all right multiplications R_z. It is similarly convenient to introduce the associator

[x, y, z] := (xy)z − x(yz),

which measures how far three elements are from associating: x, y, z associate iff their associator is zero. In these terms an algebra is associative iff all its associators vanish, and the Jordan identity becomes [x², y, x] = 0.

Nonassociativity can never be repaired, it is an incurable illness. Instead, we can focus on the parts of an algebra which do behave associatively. The nucleus Nuc(A) of a linear algebra A is the part which "associates" with all other elements, the elements n such that

Nuc(A): (nx)y = n(xy), (xn)y = x(ny), (xy)n = x(yn) for all x, y in A.

In terms of associators, nuclear elements are those which vanish when put into an associator,

Nuc(A) := {n ∈ A | [n, A, A] = [A, n, A] = [A, A, n] = 0}.

²Most algebraists of yore were right-handed, i.e. they wrote their maps on the right: a linear transformation T on V had values xT, the matrix of T with respect to an ordered basis was built up row-by-row, and composition S ∘ T meant first do S and then T. For them, the natural multiplication was R_y: x R_y = xy. Modern algebraists are all raised as left-handers, writing maps on the left (f(x) instead of xf), as learned in the calculus cradle, and building matrices column-by-column. Whichever hand you use, in dealing with modules over noncommutative rings of scalars it is important to keep the scalars on the opposite side of the operators, so linear maps have either T(xα) = (Tx)α or (αx)T = α(xT). Since the dual V* of a right vector space V over a noncommutative division algebra Δ is a left vector space over Δ, it is important to be ambidextrous, writing a linear map as T(x) on V but its adjoint as (x*)T* on the dual.
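For a concrete feel for associators and the nucleus, this sketch (my own, assuming numpy; helper names hypothetical) builds the octonions by Cayley–Dickson doubling and checks that the unit has vanishing associators while a generic triple does not:

```python
import numpy as np

def conj(x):
    """Cayley-Dickson conjugation: negate every coordinate except the real part."""
    c = -x
    c[0] = x[0]
    return c

def mul(x, y):
    """Cayley-Dickson product; arrays of length 8 represent octonions."""
    n = len(x)
    if n == 1:
        return x * y
    a, b, c, d = x[:n // 2], x[n // 2:], y[:n // 2], y[n // 2:]
    return np.concatenate([mul(a, c) - mul(conj(d), b),
                           mul(d, a) + mul(b, conj(c))])

def associator(x, y, z):
    """[x, y, z] := (xy)z - x(yz)."""
    return mul(mul(x, y), z) - mul(x, mul(y, z))

rng = np.random.default_rng(5)
x, y, z = (rng.standard_normal(8) for _ in range(3))
one = np.eye(8)[0]                                     # the octonion unit

unit_nuclear = all(bool(np.allclose(associator(*t), 0))
                   for t in [(one, x, y), (x, one, y), (x, y, one)])
generic_assoc = bool(np.allclose(associator(x, y, z), 0))
print(unit_nuclear, generic_assoc)  # True False
```

This matches the general fact that the nucleus of an octonion algebra is just the span of the unit.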
Nuclear elements will play a role in several situations (such as forming nuclear isotopes, or considering involutions whose hermitian elements are all nuclear). The associative ring theorist Jerry Martindale offers this advice for proving theorems about nonassociative algebras: never multiply more than two elements together. We can extend this secret for success to: when multiplying n elements together, make sure that at least n − 2 of them belong to the nucleus!

Another useful general concept is that of the center Cent(A), the set of elements c which both commute and associate, and therefore act like scalars:

Cent(A): cx = xc, c(xy) = (cx)y = x(cy),

or in terms of associators and commutators

Cent(A) := {c ∈ Nuc(A) | [c, A] = 0}.

Any unital algebra may be considered as an algebra over its center, which is a ring of scalars over Φ: we simply replace the original scalars by the center Cent(A) with scalar multiplication c · x := cx. If A is unital then Φ1 ⊆ Cent(A), and the original scalar action is preserved in the form αx = (α1) · x. In most cases the center forms the "natural" scalars for the algebra; a unital Φ-algebra is central if its center is precisely Φ1. Central-simple algebras (those which are central and simple) are crucial building-blocks of a structure theory.
4. Lie and Jordan Algebras

In defining Jordan algebras over general scalars, the theory always required the existence of a scalar ½ (ruling out characteristic 2) to make sense of its basic examples, the special algebras under the Jordan product. Outside this restriction, the structure theory worked smoothly and uniformly in all characteristics.

Jordan Algebra Definition. If Φ is a commutative associative ring of scalars containing ½, a Jordan algebra over Φ is a linear algebra J equipped with a commutative product x • y which satisfies the Jordan identity. In terms of commutators and associators these can be expressed as

(JAx1) [x, y] = 0 (commutative law);
(JAx2) [x², y, x] = 0 (Jordan identity).

The product is usually denoted by x • y rather than by mere juxtaposition. In operator terms, the axioms can be expressed as saying that left and
right multiplications coincide, and left multiplication by x² commutes with left multiplication by x:

(JAx1op) L_x = R_x; (JAx2op) [L_{x²}, L_x] = 0.

Lie algebras can be defined over general rings of scalars, though in practice pathologies crop up as soon as you leave characteristic 0 for characteristic p (and by characteristic 2 almost nothing remains of the structure theory).³

Lie Algebra Definition. A Lie algebra over any ring of scalars Φ is a linear algebra L equipped with an anti-commutative product, universally denoted by brackets p(x, y) := [x, y], satisfying the Jacobi identity:

(LAx1) [x, y] = −[y, x] (anti-commutative law);
(LAx2) [x, [y, z]] + [y, [z, x]] + [z, [x, y]] = 0 (Jacobi identity).

We can write these axioms too as illuminating operator identities:

(LAx1op) L_x = −R_x; (LAx2op) L_{[x,y]} = [L_x, L_y],

so that L is a homomorphism L → End_Φ(L) of Lie algebras (called the adjoint representation, with the left multiplication map called the adjoint map Ad(x) := L_x). The use of the bracket for the product conflicts with the usual notation for the commutator, which would be [x, y] − [y, x] = 2[x, y], but this shows there is no point in using commutators in Lie algebras to measure commutativity: the bracket says it all.
5. The 3 Basic Examples Revisited

The creation of the plus- and minus-algebras A⁺, A⁻ makes sense for arbitrary linear algebras, and produces Jordan and Lie algebras when A is associative. These are the first (and most important) examples of Jordan and Lie algebras.

First Example. If A is any linear algebra with product xy over a ring of scalars Φ containing ½, the plus algebra A⁺ denotes the linear Φ-algebra with commutative "Jordan product"

A⁺: x • y := ½(xy + yx).

If A is an associative Φ-algebra, then A⁺ is a Jordan Φ-algebra.

Just as everyone should show, once and only once in their life, that every associative algebra A gives rise to a Lie algebra A⁻ by verifying

³"Lee" as in Sophus or Sara or Robert E., not "Lye".
directly the anti-commutativity and Jacobi identity for the commutator product, so should everyone show that A also gives rise to a Jordan algebra A⁺ by verifying directly the commutativity and Jordan identity for the anti-commutator product.

The previous notions of speciality and exceptionality also make sense in general.

Special Definition. A Jordan algebra is special if it can be imbedded in an algebra A⁺ for A associative (i.e. if it is isomorphic to a subalgebra of some A⁺), otherwise it is exceptional.

We usually think of special algebras as living inside associative algebras. As before, the most important examples of special Jordan or Lie subalgebras are the algebras of hermitian or skew elements of an associative algebra with involution.

Second Example. If a linear algebra A has an involution ∗, then H(A, ∗) denotes the hermitian elements x* = x. It is easy to see that if A is an associative ∗-algebra with involution, then H(A, ∗) is a Jordan subalgebra of A⁺.

The third basic example of a special Jordan algebra is a spin factor, which has no natural Lie analogue.

Third Example. We define a linear Φ-algebra structure JSpinₙ(Φ) on Φ1 ⊕ Φⁿ over an arbitrary ring of scalars Φ by having 1 act as unit element and defining the product of vectors v⃗, w⃗ ∈ Φⁿ to be the scalar multiple of 1 given by the dot product ⟨v⃗, w⃗⟩ (for column vectors this is v⃗ᵗʳw⃗):

v⃗ • w⃗ := ⟨v⃗, w⃗⟩1,

so the global expression for the product is

(α1 ⊕ v⃗) • (β1 ⊕ w⃗) := (αβ + ⟨v⃗, w⃗⟩)1 ⊕ (αw⃗ + βv⃗).

Spin factors over general scalars are Jordan algebras just as they were over the reals, by symmetry of the dot product and the fact that L_{x²} is a linear combination of L_x and 1_J, and again JSpinₙ(Φ) can be imbedded in hermitian 2ⁿ × 2ⁿ matrices over Φ.
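In that once-in-a-lifetime spirit, a machine can at least spot-check both constructions. This numpy sketch (my own helper names, not from the text) verifies (LAx1)–(LAx2) for A⁻ and (JAx1)–(JAx2) for A⁺ on random real matrices:

```python
import numpy as np

bracket = lambda x, y: x @ y - y @ x          # the product of A^-
bullet  = lambda x, y: (x @ y + y @ x) / 2    # the product of A^+

rng = np.random.default_rng(6)
x, y, z = (rng.standard_normal((4, 4)) for _ in range(3))

lie_anti = np.allclose(bracket(x, y), -bracket(y, x))                        # (LAx1)
lie_jacobi = np.allclose(bracket(x, bracket(y, z)) + bracket(y, bracket(z, x))
                         + bracket(z, bracket(x, y)), 0)                     # (LAx2)
x2 = bullet(x, x)
jordan_comm = np.allclose(bullet(x, y), bullet(y, x))                        # (JAx1)
jordan_id = np.allclose(bullet(bullet(x2, y), x), bullet(x2, bullet(y, x)))  # (JAx2)
print(lie_anti, lie_jacobi, jordan_comm, jordan_id)  # True True True True
```

Of course the numerical check is no substitute for the pencil-and-paper verification the text asks for, but it shows exactly which identities are at stake.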
6. Jordan Matrix Algebras
An important special case of a hermitian Jordan algebra H(A,*) is that where the linear algebra A = Mₙ(D) is the algebra of n × n matrices over a coordinate *-algebra (D,*). These are especially useful
since we can give an explicit "multiplication table" for hermitian matrices in terms of the coordinates of the matrices, and the properties of H closely reflect those of D. In order to produce a Jordan matrix algebra, the coordinates must be associative if n ≥ 4 and alternative if n = 3.

6.1. Associative coordinates. We begin with the multiplication table.

Hermitian Matrix Example. For an arbitrary linear *-algebra D with involution d → d̄, the conjugate transpose mapping X → X* := X̄ᵗʳ (entries (X*)ᵢⱼ = x̄ⱼᵢ) is an involution on the linear algebra Mₙ(D) of all n × n matrices with entries from D under the usual matrix product XY. The space Hₙ(D) of all hermitian matrices X* = X with respect to this involution is closed under the Jordan product X ∘ Y = ½(XY + YX).

Using the multiplication table we can see why the exceptional Jordan matrix algebras in the Jordan-von Neumann-Wigner Theorem stop at n = 3: for n ≥ 4 the coordinates must be associative.⁴

Associative Coordinates Theorem. If the hermitian matrix algebra Hₙ(D) for n ≥ 4 and ½ ∈ Φ is a Jordan algebra under the product X ∘ Y = ½(XY + YX), then D must be associative and Hₙ(D) is a special Jordan algebra.
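The closure claim in the Hermitian Matrix Example can be sanity-checked in the classical case D = ℂ with complex conjugation. A sketch (NumPy assumed; helper name `rand_hermitian` is ours):

```python
import numpy as np

rng = np.random.default_rng(1)

def rand_hermitian(n):
    """A random complex hermitian matrix X = X*."""
    a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (a + a.conj().T) / 2

X, Y = rand_hermitian(3), rand_hermitian(3)

# The associative product XY is usually NOT hermitian ...
assert not np.allclose(X @ Y, (X @ Y).conj().T)
# ... but the Jordan product always is, so H_n(D) is closed under it:
J = (X @ Y + Y @ X) / 2
assert np.allclose(J, J.conj().T)
```

The one-line reason is (XY + YX)* = Y*X* + X*Y* = YX + XY, which is exactly what the code observes.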
6.2. Alternative coordinates. When n = 3 we can even allow the coordinates to be slightly nonassociative: the coordinate algebra must be alternative.

Alternative Definition. A linear algebra D is alternative if it satisfies the left and right alternative laws

(AltAx1)  x²y = x(xy)  (left alternative law),
(AltAx2)  yx² = (yx)x  (right alternative law)

for all x, y in D; in terms of associators or operators these may be expressed as

[x, x, y] = 0, [y, x, x] = 0    or    L_{x²} = (L_x)², R_{x²} = (R_x)².

An alternative algebra is automatically flexible, (xy)x = x(yx); equivalently, all elements x, y satisfy

[x, y, x] = 0  or  [L_x, R_x] = 0  (flexible law).

From the associator conditions we see that alternativity is equivalent
⁴In fact, any "respectable" Jordan algebra of "degree" 4 or more (whether or not it has the specific form of a matrix algebra) must be special.
to the associator [x, y, z] being an alternating multilinear function of its arguments (in the sense that it vanishes if any two of its variables are equal). Perhaps it would be better to call the algebras alternating instead of alternative. Notice that the nuclearity conditions can be written in terms of associators as [n, x, y] = [x, n, y] = [x, y, n] = 0, so in alternative algebras nuclearity reduces by alternation to [n, x, y] = 0. It is not hard to see that for a matrix algebra H₃(D) to be a Jordan algebra it is necessary that the coordinate algebra D be alternative and that the hermitian elements H(D) coordinatizing the diagonal matrices lie in the nucleus. The converse is true, but painful to prove. Note that in the octonions the hermitian elements do even better: they are scalars lying in Φ1.

Alternative Coordinates Theorem. The hermitian matrix algebra H₃(D) over Φ containing ½ is a Jordan algebra iff the *-algebra D is alternative with nuclear involution, i.e. its hermitian elements are contained in the nucleus,
[H(D), D, D] = 0.
7. Forms Permitting Composition

Historically, the first nonassociative algebra to be discovered was the Cayley numbers (progenitor of the theory of alternative algebras), which arose in the context of the number-theoretic problem of quadratic forms permitting composition. We will show how this number-theoretic question can be transformed into one concerning certain algebraic systems, the composition algebras, and then how a precise description of these algebras leads to precisely one nonassociative coordinate algebra (an alternative algebra with nuclear involution) suitable for constructing Jordan algebras: the 8-dimensional octonion algebra with scalar involution.
7.1. The n-squares problem. It was known to Diophantus that sums of two squares could be composed, i.e. that the product of two such sums could be written as another sum of two squares: (x₀² + x₁²)(y₀² + y₁²) = (x₀y₀ − x₁y₁)² + (x₀y₁ + x₁y₀)². Indian mathematicians were aware that this could be generalized to other "binary" (two-variable) quadratic forms, yielding a "two-square formula"

(x₀² + x₁²)(y₀² + y₁²) = (x₀y₀ − x₁y₁)² + (x₀y₁ + x₁y₀)² = z₀² + z₁².
In 1748 Euler used an extension of this to "quaternary" (4-variable) quadratic forms x₀² + x₁² + x₂² + x₃², and in 1770 Lagrange used a general "4-square formula":

(x₀² + x₁² + x₂² + x₃²)(y₀² + y₁² + y₂² + y₃²) = z₀² + z₁² + z₂² + z₃²,

z₀ := x₀y₀ − x₁y₁ − x₂y₂ − x₃y₃,
z₁ := x₀y₁ + x₁y₀ + x₂y₃ − x₃y₂,
z₂ := x₀y₂ − x₁y₃ + x₂y₀ + x₃y₁,
z₃ := x₀y₃ + x₁y₂ − x₂y₁ + x₃y₀.
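The 4-square identity is exact in integers, so it can be verified by direct computation; the zᵢ are precisely the coordinates of the Hamilton quaternion product. A sketch (the function name `four_square` is ours):

```python
import random

def four_square(x, y):
    """Lagrange's composite z: the coordinates of the Hamilton quaternion
    product (x0 + x1 i + x2 j + x3 k)(y0 + y1 i + y2 j + y3 k)."""
    x0, x1, x2, x3 = x
    y0, y1, y2, y3 = y
    return (x0*y0 - x1*y1 - x2*y2 - x3*y3,
            x0*y1 + x1*y0 + x2*y3 - x3*y2,
            x0*y2 - x1*y3 + x2*y0 + x3*y1,
            x0*y3 + x1*y2 - x2*y1 + x3*y0)

random.seed(0)
norm = lambda v: sum(c * c for c in v)
for _ in range(100):
    x = [random.randint(-9, 9) for _ in range(4)]
    y = [random.randint(-9, 9) for _ in range(4)]
    assert norm(four_square(x, y)) == norm(x) * norm(y)   # exact, in integers
```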
In 1845 an "8-square formula" was discovered by Cayley; J. T. Graves claimed to have discovered this earlier, and in fact Degen had already noted a more general formula in 1818:

(x₀² + x₁² + x₂² + x₃² + x₄² + x₅² + x₆² + x₇²)(y₀² + y₁² + y₂² + y₃² + y₄² + y₅² + y₆² + y₇²) = z₀² + z₁² + z₂² + z₃² + z₄² + z₅² + z₆² + z₇²,
z₀ := x₀y₀ − x₁y₁ − x₂y₂ − x₃y₃ − x₄y₄ − x₅y₅ − x₆y₆ − x₇y₇,
z₁ := x₀y₁ + x₁y₀ + x₂y₃ − x₃y₂ + x₄y₅ − x₅y₄ − x₆y₇ + x₇y₆,
z₂ := x₀y₂ − x₁y₃ + x₂y₀ + x₃y₁ + x₄y₆ + x₅y₇ − x₆y₄ − x₇y₅,
z₃ := x₀y₃ + x₁y₂ − x₂y₁ + x₃y₀ + x₄y₇ − x₅y₆ + x₆y₅ − x₇y₄,
z₄ := x₀y₄ − x₁y₅ − x₂y₆ − x₃y₇ + x₄y₀ + x₅y₁ + x₆y₂ + x₇y₃,
z₅ := x₀y₅ + x₁y₄ − x₂y₇ + x₃y₆ − x₄y₁ + x₅y₀ − x₆y₃ + x₇y₂,
z₆ := x₀y₆ + x₁y₇ + x₂y₄ − x₃y₅ − x₄y₂ + x₅y₃ + x₆y₀ − x₇y₁,
z₇ := x₀y₇ − x₁y₆ + x₂y₅ + x₃y₄ − x₄y₃ − x₅y₂ + x₆y₁ + x₇y₀.
This is clearly not the sort of formula you stumble upon during a casual mathematical stroll. Indeed, this is too cumbersome to tackle directly, with its mysterious distribution of plus and minus signs and assorted scalars.
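Cumbersome for a human, but trivial for a machine. The sketch below (function names ours) checks the 8-square identity exactly in integers, using the standard Degen sign pattern, which reproduces the octonion multiplication table given in the next section; it also exhibits the non-associativity of the underlying product:

```python
import random

def eight_square(x, y):
    """Degen's composite z, with N(x)N(y) = N(z) for N = sum of squares."""
    x0, x1, x2, x3, x4, x5, x6, x7 = x
    y0, y1, y2, y3, y4, y5, y6, y7 = y
    return (x0*y0 - x1*y1 - x2*y2 - x3*y3 - x4*y4 - x5*y5 - x6*y6 - x7*y7,
            x0*y1 + x1*y0 + x2*y3 - x3*y2 + x4*y5 - x5*y4 - x6*y7 + x7*y6,
            x0*y2 - x1*y3 + x2*y0 + x3*y1 + x4*y6 + x5*y7 - x6*y4 - x7*y5,
            x0*y3 + x1*y2 - x2*y1 + x3*y0 + x4*y7 - x5*y6 + x6*y5 - x7*y4,
            x0*y4 - x1*y5 - x2*y6 - x3*y7 + x4*y0 + x5*y1 + x6*y2 + x7*y3,
            x0*y5 + x1*y4 - x2*y7 + x3*y6 - x4*y1 + x5*y0 - x6*y3 + x7*y2,
            x0*y6 + x1*y7 + x2*y4 - x3*y5 - x4*y2 + x5*y3 + x6*y0 - x7*y1,
            x0*y7 - x1*y6 + x2*y5 + x3*y4 - x4*y3 - x5*y2 + x6*y1 + x7*y0)

random.seed(0)
norm = lambda v: sum(c * c for c in v)
for _ in range(100):
    x = [random.randint(-9, 9) for _ in range(8)]
    y = [random.randint(-9, 9) for _ in range(8)]
    assert norm(eight_square(x, y)) == norm(x) * norm(y)  # exact, in integers

def e(i):
    """The i-th basis vector (e0 = 1, e1..e7 the imaginary units)."""
    v = [0] * 8
    v[i] = 1
    return tuple(v)

# The product behind the formula is not associative: (e1 e2) e4 != e1 (e2 e4)
assert eight_square(eight_square(e(1), e(2)), e(4)) != eight_square(e(1), eight_square(e(2), e(4)))
```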
7.2. Reformulation in terms of composition algebras. A more concise and conceptual approach is needed. If we interpret the variables as coordinates of a vector x = (x₀, …, x₇) in an 8-dimensional vector space, then the expression x₀² + x₁² + ⋯ + x₇² defines a quadratic norm form N(x) on this space. The 8-square formula asserts that this quadratic form permits (or admits) composition in the sense that N(x)N(y) = N(z), where the "composite" z = (z₀, …, z₇) is automatically a bilinear function of x and y (i.e. each of its coordinates zᵢ is a bilinear function of the xⱼ and yₖ). We may think of z = x • y as some sort of "product" of x and y. This product is linear in x and y, but it need not be commutative or associative. Thus the existence of an n-squares formula is equivalent to the existence of an n-dimensional algebra with product x • y and distinguished basis e₀, …, e_{n−1} such that N(x) = N(x₀e₀ + ⋯ + x_{n−1}e_{n−1}) = Σᵢ γᵢxᵢ² permits composition N(x)N(y) = N(x • y) (in the classical case all γᵢ = 1 and this is a
"pure" sum of squares). The element e₀ = (1, 0, …, 0) (x₀ = 1, all other xᵢ = 0) acts as unit element: e₀ • y = y, x • e₀ = x. When the quadratic form is anisotropic (N(x) = 0 ⟹ x = 0) the algebra is a "division algebra": it has no divisors of zero, x, y ≠ 0 ⟹ x • y ≠ 0, so in the finite-dimensional case the injectivity of left and right multiplications makes them bijections.

The algebra behind the 2-square formula is just the complex numbers ℂ: z = x₀1 + x₁i with basis 1, i over the reals, where 1 acts as identity, i² = −1, and N(z) = x₀² + x₁² = zz̄ is the ordinary norm squared (where z̄ = x₀1 − x₁i is the ordinary complex conjugate). This interpretation was well known to Gauss. The 4-square formula led Hamilton to the quaternions ℍ consisting of all x = x₀1 + x₁i + x₂j + x₃k, where the formula for x • y means that the basis elements 1, i, j, k satisfy the now-familiar rules
i² = j² = k² = −1,  ij = −ji = k,  jk = −kj = i,  ki = −ik = j.

Clearly this algebra is no longer commutative. Again N(x) = xx̄ is the ordinary norm squared (where x̄ = x₀1 − x₁i − x₂j − x₃k is the ordinary quaternion conjugate). Clifford and Hamilton invented 8-dimensional algebras (biquaternions), which were merely the direct sum ℍ ⊕ ℍ of two quaternion algebras. Because of the presence of zero divisors, these algebras were of minor interest. Cayley was the first to use the 8-square formula to create an 8-dimensional division algebra 𝕂 of octonions or Cayley numbers. By 1847 he recognized that this algebra was not commutative or associative, with basis e₀, …, e₇ = 1, i, j, k, ℓ, iℓ, jℓ, kℓ and multiplication table
e₀eᵢ = eᵢe₀ = eᵢ,  eᵢ² = −1,  eᵢeⱼ = −eⱼeᵢ = eₖ  for ijk = 123, 145, 624, 653, 725, 734, 176.

A subsequent flood of (false!!) higher-dimensional algebras carried names such as quadrinions, quines, pluquaternions, nonions, tettarions, plutonions. Ireland especially seemed a factory for such counterfeit division algebras. In 1878 Frobenius showed that the only associative division algebras over the reals (permitting composition or not) were ℝ, ℂ, ℍ of dimensions 1, 2, 4. In 1898 Hurwitz proved via group representations that the only quadratic forms permitting composition over the reals are the standard ones of dimension 1, 2, 4, 8; A. A. Albert later gave an algebra-theoretic proof over a general field of scalars (with an addition by Irving Kaplansky to include characteristic 2 and non-unital algebras). Only recently was it established that the only
finite-dimensional real nonassociative division algebras have dimensions 1, 2, 4, 8; the algebras themselves were not classified, and the proof was topological rather than algebraic.
8. Composition Algebras

The most important alternative algebras with nuclear involutions are the composition algebras. A composition algebra is a unital algebra having a nondegenerate quadratic norm form N which permits composition,

N(xy) = N(x)N(y).

In general, a quadratic form Q on a space V is nondegenerate if all nonzero elements of the space contribute to the values of the form. The slackers (the set of elements which contribute nothing) are gathered in the radical

Rad(Q) := {z ∈ V | Q(z) = Q(z, V) = 0},

so nondegeneracy means Rad(Q) = 0.⁵ Even better than nondegeneracy is anisotropy. A vector x is isotropic if it has "zero weight" Q(x) = 0, and anisotropic if Q(x) ≠ 0. A form is isotropic if it has nonzero isotropic vectors, and anisotropic if it has none:

Q anisotropic  iff  Q(x) = 0 ⟺ x = 0.

For example, the positive definite norm form Q(x) = x · x on Euclidean n-space is anisotropic. Clearly any anisotropic form is nondegenerate.
8.1. The Cayley-Dickson Construction and Process. The famous Hurwitz Theorem of 1898 states that over the real numbers composition algebras can exist only in dimensions 1, 2, 4, or 8. In 1958 Nathan Jacobson gave a beautiful "bootstrap" method, showing clearly how all composition algebras are generated internally, by repeated "doubling" (of the module, the multiplication, the involution, and the norm) starting from any composition subalgebra. As its name suggests, the Cayley-Dickson doubling process is due to A. A. Albert.

⁵Since Q(z) = ½Q(z, z) when ½ ∈ Φ, the radical of the quadratic form reduces to the usual radical Rad(Q(·,·)) := {z ∈ V | Q(z, V) = 0} of the associated bilinear form Q(·,·) (the vectors which are "orthogonal to everybody"). But in characteristic 2 there is an important difference between the radical of the quadratic form and the "bilinear radical" of its associated bilinear form.

Cayley-Dickson Definition. The Cayley-Dickson Construction builds a new *-algebra out of an old one together with a choice of scalar. If A is a unital linear algebra with involution a → ā whose
norms aā = n(a)1 for scalars n(a) ∈ Φ, and γ is an invertible scalar in Φ, then the Cayley-Dickson algebra C(A, γ) = A ⊕ Am is obtained by doubling the module A (adjoining a formal copy Am) and defining a product, scalar involution, and norm by the Cayley-Dickson Recipe:

(a ⊕ bm)(c ⊕ dm) = (ac + γd̄b) ⊕ (da + bc̄)m,
(a ⊕ bm)* = ā ⊕ (−b)m,
N(a ⊕ bm) = n(a) − γn(b).

The Cayley-Dickson Process consists of iterating the Cayley-Dickson Construction over and over again. Over a field Φ the Process iterates the Construction starting from the 1-dimensional A₀ = Φ (the scalars) with trivial involution and nondegenerate norm N(α) = α² to get a 2-dimensional commutative binarion algebra A₁ = C(A₀, γ₁) = Φ ⊕ Φi (i² = γ₁1) with nontrivial involution⁶, then a 4-dimensional noncommutative quaternion algebra A₂ = C(A₁, γ₂) = A₁ ⊕ A₁j (j² = γ₂1), and finally an 8-dimensional non-associative octonion algebra A₃ = C(A₂, γ₃) = A₂ ⊕ A₂ℓ (ℓ² = γ₃1). Thus octonion algebras are obtained by gluing two copies of a quaternion algebra together by the Cayley-Dickson Recipe. If the Cayley-Dickson doubling process is carried beyond dimension 8, the resulting algebras no longer permit composition and are no longer alternative (so cannot be used in constructing Jordan matrix algebras).

Jacobson's Bootstrap Theorem shows that over a field Φ the algebras with involution obtained from the Cayley-Dickson Process are precisely the composition algebras with standard involution over Φ: every composition algebra arises by this construction. If we take A₀ = Φ = ℝ the reals and γ₁ = γ₂ = γ₃ = −1 in the Cayley-Dickson Process, then A₁ = the complex numbers ℂ, A₂ = Hamilton's quaternions ℍ (the Hamiltonions), and A₃ = Cayley's octonions 𝕂 (the Caylions or Cayley numbers, or the Cayley algebra), precisely as in the Jordan-von Neumann-Wigner Theorem.
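The Cayley-Dickson Recipe codes up directly as a recursion on nested pairs. The sketch below (all helper names ours) takes γ = −1 at every stage, so the norm is a sum of squares, and checks that composition holds through the octonions and fails one doubling later:

```python
import random

def neg(x):
    """Negation, componentwise on nested pairs."""
    return -x if not isinstance(x, tuple) else (neg(x[0]), neg(x[1]))

def conj(x):
    """The doubled involution: (a + bm)* = a* + (-b)m; scalars are fixed."""
    return x if not isinstance(x, tuple) else (conj(x[0]), neg(x[1]))

def add(x, y):
    return x + y if not isinstance(x, tuple) else (add(x[0], y[0]), add(x[1], y[1]))

def mul(x, y):
    """The Cayley-Dickson Recipe with gamma = -1:
       (a + bm)(c + dm) = (ac - d*b) + (da + bc*)m."""
    if not isinstance(x, tuple):
        return x * y
    a, b = x
    c, d = y
    return (add(mul(a, c), neg(mul(conj(d), b))),
            add(mul(d, a), mul(b, conj(c))))

def norm(x):
    """n(a + bm) = n(a) + n(b); on scalars, the square."""
    return x * x if not isinstance(x, tuple) else norm(x[0]) + norm(x[1])

def rand(depth):
    return random.randint(-5, 5) if depth == 0 else (rand(depth - 1), rand(depth - 1))

random.seed(1)
# Doubling 0..3 times (scalars, binarions, quaternions, octonions): norm composes.
for d in range(4):
    x, y = rand(d), rand(d)
    assert norm(mul(x, y)) == norm(x) * norm(y)
# One doubling too far (dimension 16, the "sedenions"): composition generically fails.
pairs = [(rand(4), rand(4)) for _ in range(20)]
assert any(norm(mul(x, y)) != norm(x) * norm(y) for x, y in pairs)
```

This is only one sign convention for the doubling; others appear in the literature, and all give isomorphic composition algebras in dimensions 1, 2, 4, 8.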
⁶In characteristic 2, starting from Φ the construction produces larger and larger algebras with trivial involution and possibly degenerate norm; to get out of the rut, one must construct by hand the binarion algebra A₁ := Φ1 ⊕ Φv (v playing the role of ½(1 + i)), with v² := v − 1, v̄ = 1 − v, and nondegenerate norm N(α1 + βv) := α² + αβ + β².

We will adopt the convention that the dimension 4 composition algebras will all be called (generalized) quaternion algebras (as is standard in noncommutative ring theory) and denoted by Q; by analogy, the dimension 8 composition algebras will be called (generalized) octonion
algebras and denoted by O (even though this looks dangerously like zero), and the dimension 2 composition algebras will be called binarion algebras and denoted by B. In the alternative literature the octonion algebras are called Cayley algebras, but we will reserve the term Cayley for the unique 8-dimensional real division algebra 𝕂 (THE Cayley algebra), just as Hamilton's quaternions ℍ are the unique 4-dimensional real division algebra. There is no generally accepted term for the 2-dimensional composition algebras, but that won't stop us from calling them binarions. If the 1-dimensional scalars insist on having a high-falutin' name too, we could grudgingly call them unarions. Notice that a composition algebra C consists of a unital algebra plus a choice of norm form N, and therefore always carries a standard involution x̄ = N(x, 1)1 − x. Thus a composition algebra is always a *-algebra (and the * determines the norm, N(x)1 = xx̄).
9. Split Composition Algebras

We will often be concerned with split unarions, binarions, quaternions, and octonions. Over an algebraically closed field the composition algebras are all split. "Split" is an imprecise metaphysical term, meaning roughly that the system is "completely isotropic", as far removed from an "anisotropic" or "division system" as possible, as well as being defined in some simple way over the integers.⁷ Each category separately must decide on its own definition of "split". For example, in the theory of finite-dimensional associative algebras we define a split simple algebra over Φ to be a matrix algebra Mₙ(Φ) coordinatized by the ground field. The theory of central-simple algebras shows that every simple algebra Mₙ(Δ) coordinatized by a division algebra Δ becomes split in some scalar extension, because of the amazing fact that finite-dimensional division algebras Δ can be split (turned into M_r(Ω)) by tensoring with a splitting field Ω; in particular every division algebra has square dimension dim_Φ(Δ) = r² over its center Φ! In the theory of quadratic forms, a "split" form would have "maximal Witt index", represented relative to a suitable basis by the matrix consisting of hyperbolic planes (0 1 / 1 0) down the diagonal, with an additional 1 × 1 matrix (1) if the dimension is odd:

Q(ξ₀x₀ + Σᵢ₌₁ⁿ(ξ₂ᵢ₋₁x₂ᵢ₋₁ + ξ₂ᵢx₂ᵢ)) = ξ₀² + Σᵢ₌₁ⁿ ξ₂ᵢ₋₁ξ₂ᵢ.

The split composition algebras over an arbitrary ring of scalars (not just a field) are defined as follows.

⁷This has no relation to "split" exact sequences 0 → A → B → C → 0, which have to do with the middle term "splitting" as a semi-direct sum B = A ⊕ C.
Split Definition. The split composition algebras over a scalar ring Φ are defined to be those algebras with involution of dimension 2ⁿ⁻¹, n = 1, 2, 3, 4, isomorphic to the following models:

(I) the scalars (or split unarions) U(Φ) := Φ, with trivial involution and norm N(α) := α²;

(II) B(Φ) = Φ ⊞ Φ, the split binarions, a direct sum of scalars with the standard (exchange) involution (α, β) ↦ (β, α) and norm N(α, β) := αβ;

(III) Q(Φ), the split quaternions over Φ with its standard involution, i.e. the algebra M₂(Φ) of 2 × 2 matrices with symplectic involution
ā = (δ −β / −γ α) for a = (α β / γ δ), and norm N(a) := det(a);

(IV) O(Φ) = Q(Φ) ⊕ Q(Φ)ℓ, the split octonions over Φ, with standard involution (a ⊕ bℓ)* = ā ⊕ (−b)ℓ and norm N(a ⊕ bℓ) := det(a) − det(b).

There is (up to isomorphism) a unique split composition algebra of each given dimension over a given Φ, and the constructions Φ → U(Φ), B(Φ), Q(Φ), O(Φ) are functors from the category of scalar rings to the category of composition algebras. Notice that over the reals these split composition algebras are at the opposite extreme from the division algebras ℝ, ℂ, ℍ, 𝕂 occurring in the J-vN-W classification. They are obtained from the Cayley-Dickson process by choosing all the ingredients to be γᵢ = +1 instead of γᵢ = −1. It is an important fact that composition algebras over a field are either division algebras or split: as soon as the quadratic norm form is the least bit isotropic (some nonzero element has norm zero) then it is split as a quadratic form, and the algebra has proper idempotents and splits entirely:
N anisotropic ⟺ C division,    N isotropic ⟺ C split.

This dichotomy for composition algebras, of being entirely anisotropic (division algebra) or entirely isotropic (split), does not hold for quadratic forms in general, or for other algebraic systems. In Jordan algebras there is a trichotomy: an algebra can be anisotropic ("division algebra"), reduced (nonzero idempotents but coordinate ring a division algebra), or split (nonzero idempotents and coordinate ring the ground field). The split Albert algebra Alb(Φ) over Φ is the 27-dimensional Jordan algebra of 3 × 3 hermitian matrices over the split octonion algebra (with standard involution),

Alb(Φ) := H₃(O(Φ))    (split Albert).

As we will see in the next paragraph, over an algebraically closed field this is the only exceptional Jordan algebra. But over general fields we
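For model (III) the norm and involution are classical matrix operations, so the defining identities can be checked directly. A sketch assuming NumPy (`symp` is our name for the symplectic involution, which is just the 2 × 2 adjugate):

```python
import numpy as np

def symp(a):
    """Symplectic involution on M_2: [[a,b],[c,d]] goes to [[d,-b],[-c,a]]."""
    return np.array([[a[1, 1], -a[0, 1]],
                     [-a[1, 0], a[0, 0]]])

rng = np.random.default_rng(5)
a, b = rng.standard_normal((2, 2, 2))

# a * symp(a) = det(a) 1, so N(a) = det(a) really is the norm of this involution:
assert np.allclose(a @ symp(a), np.linalg.det(a) * np.eye(2))
# N permits composition simply because det is multiplicative:
assert np.isclose(np.linalg.det(a @ b), np.linalg.det(a) * np.linalg.det(b))
# symp is an involution: period 2, and an anti-automorphism:
assert np.allclose(symp(symp(a)), a)
assert np.allclose(symp(a @ b), symp(b) @ symp(a))
```

The split quaternions are isotropic because det vanishes on nonzero singular matrices such as the matrix unit E₁₁, which is exactly the "proper idempotent" the dichotomy predicts.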
can have reduced Albert algebras H₃(O) for non-split octonion algebras O, and (as first shown by Albert) we can even have Albert division algebras (though these can't be represented in the form of 3 × 3 matrices, which would always have a non-invertible idempotent E₁₁).
10. Classification

We now return from our long digression on general linear algebras, and consider the development of Jordan theory during the Algebraic Renaissance, whose crowning achievement was the classification of simple Jordan algebras over an arbitrary algebraically closed field (of characteristic not 2, of course!). As in the J-vN-W Theorem, the classification of simple Jordan algebras proceeds according to "degree", where the degree is the maximal number of supplementary orthogonal idempotents (analogous to the matrix units Eᵢᵢ). From another point of view, the degree is the degree of the generic minimum polynomial of the algebra, the "generic" polynomial

m_x(λ) = λⁿ − m₁(x)λⁿ⁻¹ + ⋯ + (−1)ⁿmₙ(x)    (mᵢ : J → Φ homogeneous of degree i)

of minimal degree satisfied by all x: m_x(x) = 0. Degree 1 algebras are just the 1-dimensional Φ⁺; the degree 2 algebras are the JSpinₙ; the degree n ≥ 3 algebras are all Jordan matrix algebras Hₙ(C) where the coordinate *-algebras C are precisely the split composition algebras over Φ with their standard involutions. This leads immediately to the basic classification of finite-dimensional Jordan algebras over an algebraically closed field.

Renaissance Structure Theorem. Let J be a finite-dimensional Jordan algebra over an algebraically closed field Φ of characteristic ≠ 2.

(1) The radical of J is the maximal ideal of nilpotent elements, and the quotient J/Rad(J) is semisimple.

(2) J is semisimple iff it is a finite direct sum of simple ideals. In this case, J has a unit element, and the simple decomposition is unique: the simple summands are precisely all minimal ideals of J.

(3) Every simple J is automatically central-simple over Φ.

(4) J is simple iff it is isomorphic to exactly one of:
(I) Φ⁺ of degree 1,
(II) JSpinₙ(Φ) of degree 2, for n ≥ 2,
(III) Hₙ(C(Φ)) of degree n ≥ 3 for split composition C(Φ); hence either special
(IIIa) Hₙ(Φ),
(IIIb) Hₙ(B(Φ)) ≅ Mₙ(Φ)⁺ for B(Φ) the split binarions,
(IIIc) Hₙ(Q(Φ)) = Hₙ(M₂(Φ)) for Q(Φ) the split quaternions,
or exceptional
(IIId) Alb(Φ) = H₃(O(Φ)) for O(Φ) the split octonions.

Once more, the only exceptional algebra in the list is the 27-dimensional split Albert algebra (IIId). Note that the 1-dimensional algebra JSpin₀ is the same as (I), and the 2-dimensional JSpin₁ ≅ B(Φ) is not simple when Φ is algebraically closed, so only the JSpinₙ for n ≥ 2 contribute new simple algebras. We are beginning to conceptually isolate the Albert algebras; even though the split Albert algebra and the real Albert algebra discovered by Jordan, von Neumann, and Wigner appear to fit into the family of Jordan matrix algebras, we will see in the next chapter that their non-reduced forms really come via a completely different construction out of a cubic form.
CHAPTER 3
Jordan Algebras in the Enlightenment

Finite-dimensional Jordan algebras over general fields

1. Forms of Algebras

Life over an algebraically closed field is split. When we move to non-algebraically closed fields we encounter modifications or "twistings" of the split algebras which produce new kinds of simple algebras. There may be several non-isomorphic algebras A over Φ which are not themselves split algebras S(Φ), but "become" S(Ω) over the algebraic closure Ω when we extend the scalars: A_Ω ≅ S(Ω). We call such A's forms of the split algebra S; in some Platonic sense they are incipient S's, and are only prevented from revealing their true S-ness by deficiencies in the scalars: once they are released from the constraining field they can burst forth¹ and reveal their true split personality.

Scalar Extension Definition. If Ω is a unital commutative associative algebra over Φ (we call it, by abuse of language, an extension of the ring of scalars Φ), the scalar extension A_Ω of a linear algebra A over Φ is defined to be the tensor product as module with the natural induced multiplication:

A_Ω = Ω ⊗_Φ A,    (ω₁ ⊗ x₁)(ω₂ ⊗ x₂) = ω₁ω₂ ⊗ x₁x₂.

Thus A_Ω consists of "formal Ω-linear combinations" of elements from A. It is always a linear algebra over Ω, and there is a natural homomorphism A → A_Ω of Φ-algebras via x → 1 ⊗ x. If A or Ω is free as a Φ-module (e.g. if Φ is a field), this natural map is a monomorphism, but in general we can't view Φ as a subalgebra of Ω, or A as a subalgebra of A_Ω.

Form Definition. If Ω is an extension of Φ, we say a linear algebra A over Φ is a Φ-form of an algebra A′ over Ω if A_Ω ≅ A′.
¹The analogy of the alien creature released from its spacesuit in the movie "Independence Day" is apt, though not flattering.
The general line of attack on the structure theory over general fields is a "top-down" method: we start from the known simple structures S over the algebraic closure Ω and try to classify all possible forms of a given S(Ω). In the Jordan case, we need to know which simple Jordan algebras over Φ will grow up to be Albert algebras over Ω, which ones will grow up to be spin factors, and which ones will grow into hermitian algebras. In Lie theory, one says a Lie algebra L is of type S (G₂, F₄, or whatever) if L_Ω is the (unique simple) Lie algebra S(Ω) (G₂(Ω), F₄(Ω), or whatever), so the problem becomes that of classifying simple Lie algebras of a given type. The exact classification of all the possible forms of a given type usually depends very delicately on the arithmetic nature of the field (e.g. it is trivial for the complex numbers, easy for the real numbers, but hard for algebraic number fields).

Octonion Example. The only octonion algebra over an algebraically closed field Ω is the split O(Ω). Over a non-algebraically-closed field Φ of characteristic ≠ 2, every octonion algebra O can be obtained via the Cayley-Dickson Construction using a triple of nonzero scalars γᵢ starting from Φ: O ≅ C(Φ; γ₁, γ₂, γ₃). But it is a delicate question when two such triples of scalars produce isomorphic octonion algebras (or, what turns out to be the same thing, equivalent quadratic norm forms), so producing a precise description of the distinct isomorphism classes of octonion algebras C(Φ; γ₁, γ₂, γ₃) over Φ is difficult. Nevertheless, all C(Φ; γ₁, γ₂, γ₃) are forms of O(Ω): if we pass to the algebraic closure Ω of Φ, the octonion algebra splits, C(Φ; γ₁, γ₂, γ₃)_Ω ≅ O(Ω).
2. Inverses and Isotopes

The most important method of twisting Jordan algebras is to take isotopes by invertible elements. It is helpful to think of passing to an isotope as changing the unit of the Jordan algebra.

Inverse Definition. An element x of a unital Jordan algebra is invertible if it has an inverse y satisfying the Jordan inverse condition

x ∘ y = 1,    x² ∘ y = x.

A unital Jordan algebra is a division algebra if every nonzero element is invertible.

For elements of associative algebras, Jordan invertibility is exactly the same as ordinary invertibility. We don't even need the entire algebra
to be associative, as long as the invertible element u is associative (i.e. lies in the nucleus).

Full Inverse Proposition. If x is a nuclear element of a unital linear algebra A over a ring of scalars containing ½, then x has an inverse with respect to the Jordan product iff it has an ordinary inverse with respect to multiplication:

x ∘ y = 1, x² ∘ y = x    ⟺    xy = yx = 1,

in which case the element y is uniquely determined by x, and is again nuclear with inverse x. We write y = x⁻¹. In particular, A⁺ is a Jordan division algebra iff A is an associative division algebra.
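The easy direction of the Full Inverse Proposition can be watched in action in A⁺ for A = M₄(ℝ): the ordinary inverse satisfies both Jordan inverse conditions. A sketch (NumPy assumed; the shift by 4·I is just a cheap way to get an invertible random matrix):

```python
import numpy as np

def jordan(x, y):
    return (x @ y + y @ x) / 2

rng = np.random.default_rng(6)
x = rng.standard_normal((4, 4)) + 4 * np.eye(4)   # comfortably invertible
y = np.linalg.inv(x)                              # the ordinary inverse

# The Jordan inverse conditions hold for y = x^{-1}:
assert np.allclose(jordan(x, y), np.eye(4))       # x . y = 1
assert np.allclose(jordan(x @ x, y), x)           # x^2 . y = x
```

Indeed x ∘ x⁻¹ = ½(xx⁻¹ + x⁻¹x) = 1 and x² ∘ x⁻¹ = ½(x + x) = x; the content of the proposition is the converse direction.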
Homotope Definition. If u is an arbitrary element of a Jordan algebra J, then the Jordan u-homotope J^(u) is the Jordan algebra with product

x ∘_u y := x ∘ (u ∘ y) + (x ∘ u) ∘ y − u ∘ (x ∘ y).

If J is unital and u is invertible, then the u-homotope is again a unital Jordan algebra, with unit 1^(u) := u⁻¹, and we call it the Jordan u-isotope of J. Two Jordan algebras are isotopic if one is isomorphic to an isotope of the other.

Isotopy is an equivalence relation, more general than isomorphism, since we have Isotope Reflexivity, Transitivity, and Symmetry:

J^(1) = J,    (J^(u))^(v) = J^(uvu),    (J^(u))^(u⁻²) = J.
It is a long and painful process to work with inverses and verify that J^(u) is indeed a Jordan algebra; when one learned to use the U-operators and Jordan triple products in the Classical period (described in the next chapter), this became almost a triviality. The original motivation for homotopes comes from special algebras, where they have a thoroughly natural explanation: we obtain a new bilinear product by sticking a u in the middle of the old associative product². We can even take u-homotopes in nonassociative algebras as

²Thus the parameter in a homotope is the inserted element, and the new unit is the inverse of this element. In the Survey we found it convenient to emphasize the new unit as the parameter in subscripted notation: J_[u] := J^(u⁻¹) has unit 1_[u] = (u⁻¹)⁻¹ = u. Since homotopes as well as isotopes play a role in Jordan theory, we will stick to the superscript version and consider isotopes as particular cases of homotopes.
long as the element u is nuclear; this will allow us to form isotopes of the exceptional H₃(O).

Full Isotope Proposition. Let A be a unital linear algebra, and let u be an invertible element in the nucleus of A. Then we obtain a new unital linear algebra, the nuclear u-isotope A_u, with the same linear structure but new unit 1_u = u⁻¹ and new multiplication

x ·_u y = xuy := (xu)y = x(uy).

Notice that by nuclearity of u this product is unambiguous. If A is associative, so is A_u.

We distinguish the nuclear and Jordan concepts of isotope by using plain subscripts u for nuclear isotopes, and parenthesized superscripts (u) for Jordan isotopes. For elements u which happen to be nuclear, the nuclear and Jordan isotopes are closely connected: the Jordan isotope of the plus algebra of A is just the plus algebra of the nuclear isotope:

(A⁺)^(u) = (A_u)⁺ :  x ∘_u y = ½(xuy + yux)    (u ∈ Nuc(A));

hence the plus algebras are isomorphic: for J = A⁺, every Jordan isotope J^(u) by a nuclear element is isomorphic to J. The analogous recipe uxy for a "left isotope" of an associative algebra produces a highly nonassociative, non-unital algebra; only "middle isotopes" produce associative algebras again.
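The collapse of the three-term homotope product to the middle isotope ½(xuy + yux), and the role of u⁻¹ as the new unit, can both be checked numerically in A⁺ for A = M₄(ℝ). A sketch (NumPy assumed; helper names ours):

```python
import numpy as np

def jordan(x, y):
    return (x @ y + y @ x) / 2

def homotope(x, y, u):
    """The u-homotope product x ._u y = x.(u.y) + (x.u).y - u.(x.y)."""
    return jordan(x, jordan(u, y)) + jordan(jordan(x, u), y) - jordan(u, jordan(x, y))

rng = np.random.default_rng(7)
x, y = rng.standard_normal((2, 4, 4))
u = rng.standard_normal((4, 4)) + 4 * np.eye(4)    # invertible

# In A+ for associative A the homotope collapses to the middle isotope:
assert np.allclose(homotope(x, y, u), (x @ u @ y + y @ u @ x) / 2)

# and 1^(u) = u^{-1} really acts as the unit of the isotope J^(u):
u_inv = np.linalg.inv(u)
assert np.allclose(homotope(u_inv, x, u), x)
assert np.allclose(homotope(x, u_inv, u), x)
```

Expanding the three Jordan products by hand gives ¼(2xuy + 2yux), which is the identity the first assertion verifies.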
3. Twisted Hermitian Algebras

As far as the entire linear algebra goes, A_u produces just another copy of A: the nuclear isotope A_u is isomorphic as a linear algebra to A under the bijection φ(x) = ux, which is why the concept of isotopy is largely ignored in associative theory.
3.1. Twisted involutions. Isotopy does produce something new when we consider algebras with involution, because φ(x) = ux usually does not map H(A,*) to itself, and the twisted involution *_u need not be algebraically equivalent to the original involution *.

Twisted Hermitian Proposition. Let A be a unital linear algebra with involution * over a ring of scalars containing ½, let H = H(A,*) ⊆ A⁺ be the subalgebra of hermitian elements under the Jordan product, and let u be an invertible hermitian element in the nucleus of A. Then * remains an involution on the linear algebra A_u, whose hermitian part is just the Jordan isotope H^(u):

H(A,*)^(u) = H(A_u, *).
3. TWISTED HERMITIAN ALGEBRAS
79
This isotope coincides with H as linear space, but not necessarily as algebra under the Jordan product. A better way to view the situation is to keep the original linear structure but form the isotopic involution *_u, the u-conjugate

x^{*_u} = u x* u⁻¹.

This is again an involution on A, with new hermitian elements

H(A, *_u) = uH(A, *).

The map L_u is now a *-isomorphism (A_u, *) → (A, *_u) of *-algebras, so it induces an isomorphism of hermitian elements under the Jordan product:

H(A,*)^(u) = H(A_u, *) ≅ H(A, *_u) = uH.

Thus H^(u) is really a copy of the u-translate uH, not of H. The Jordan isotope H(A,*)^(u) can therefore be viewed either as keeping the involution but twisting the product via u, or (via the identification map L_u) keeping the product but twisting the involution via u.

An easy example where the algebraic structure of H^(u) is not the same as that of H is A = Mₙ(ℝ), the real n × n matrices with transpose involution *, and u = diag(−1, 1, 1, …, 1). Here the original involution is "positive definite" and the Jordan algebra H = Hₙ(ℝ) is formally real, but the twist u is indefinite and the isotope H^(u) has nilpotent elements (x = E₁₁ + E₂₂ − E₁₂ − E₂₁ has x ∘_u x = 0), so H^(u) cannot be algebraically isomorphic to H.
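The nilpotence claim in the example is a two-line matrix computation: since x is symmetric, x ∘_u x = ½(xux + xux) = xux. A sketch (NumPy assumed; signs as in the example, with u = diag(−1, 1, 1)):

```python
import numpy as np

u = np.diag([-1.0, 1.0, 1.0])       # the indefinite twist
x = np.array([[ 1.0, -1.0, 0.0],
              [-1.0,  1.0, 0.0],
              [ 0.0,  0.0, 0.0]])   # x = E11 + E22 - E12 - E21, symmetric

# In the isotope H^(u) the square of x is x ._u x = x u x, and it vanishes:
assert np.allclose(x @ u @ x, np.zeros((3, 3)))

# while in the formally real H = H_n(R) the ordinary square x . x = x^2 is nonzero:
assert not np.allclose(x @ x, np.zeros((3, 3)))
```

A formally real algebra has no nonzero nilpotents, so the nonzero element x with x ∘_u x = 0 shows H^(u) ≇ H, exactly as claimed.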
3.2. Twisted matrices. If we consider A = Mₙ(D) with the conjugate transpose involution, any diagonal matrix u = Γ with invertible hermitian nuclear elements of D down the diagonal is invertible hermitian nuclear in A, and can be used to twist the Jordan matrix algebra. Rather than consider the isotope with twisted product, we prefer to keep within the usual Jordan matrix operations and twist the involution instead.

Twisted Matrix Example. For an arbitrary linear *-algebra D with involution d → d̄ and diagonal matrix Γ = diag(γ₁, …, γₙ) whose entries γᵢ are invertible hermitian elements in the nucleus of D, the twisted conjugate transpose mapping X → X^{*Γ} := Γ X̄ᵗʳ Γ⁻¹ is an involution on the algebra Mₙ(D) of all n × n matrices with entries from D under the usual matrix product XY. The space Hₙ(D, Γ) := H(Mₙ(D), *Γ) of all hermitian matrices X^{*Γ} = X with respect to this new involution forms a Jordan algebra under the usual matrix operation X ∘ Y = ½(XY + YX). The Jordan and nuclear isotope
Jordan Algebras in the Enlightenment
H_n(D)^{(Γ)} = H_n(D) ⊆ M_n(D)^{(Γ)} is isomorphic under L_Γ to this twisted matrix algebra H_n(D, Γ).
Note that no parentheses are needed in these products since the γ_i lie in the nucleus of D.
4. Spin and Quadratic Factors

The final Jordan algebras we consider are generalizations of the "split" spin factors JSpin_n(Φ) determined by the dot product.
4.1. Spin factors. Over non-algebraically-closed fields we must consider general symmetric bilinear forms. We do not demand that our spaces are finite-dimensional, nor that our forms are nondegenerate. The construction of spin factors works for arbitrary forms on modules.

Spin Factors Definition. If V is a Φ-module with symmetric bilinear form σ : V x V → Φ, then we can define a linear Φ-algebra structure JSpin(V, σ) on Φ1 ⊕ V by having 1 act as unit element and defining the product of vectors v, w in V to be a scalar multiple of the unit, given by the bilinear form:
v • w := σ(v, w)1.

Since x² is a linear combination of 1 and x, L_{x²} commutes with L_x, and as usual the resulting algebra JSpin(V, σ) is a Jordan algebra. If V = Φ^n consists of column vectors with the standard dot product σ(v, w) = v · w = v^{tr} w, then the resulting JSpin(V, σ) is just JSpin_n(Φ) as defined previously:
JSpin_n(Φ) = JSpin(Φ^n, ·).

If we try to twist these algebras, we regret having chosen a particular splitting of the algebra into unit and vectors, with the unit having a multiplication rule all to itself. When we change to a new unit element which is half unit, half vector, the calculations become clumsy. It is better (both practically and conceptually) to give a global description of the operations. At the same time we will pass to a quadratic instead of a bilinear form, because the quadratic norm form explicitly encodes important information about the algebra.
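The definition is easy to machine-check. The following Python sketch (my own illustration, integer arithmetic with numpy) implements the product on Φ1 ⊕ V and verifies the Jordan identity (x • y) • x² = x • (y • x²) on random vectors:

```python
import numpy as np

def spin_mul(x, y):
    """Product in JSpin(V, sigma) for the dot product sigma on V = Z^n:
    elements are pairs (alpha, v) representing alpha*1 + v."""
    a, v = x
    b, w = y
    return (a * b + int(v @ w), a * w + b * v)

rng = np.random.default_rng(0)
x = (3, rng.integers(-5, 5, size=4))
y = (-2, rng.integers(-5, 5, size=4))

x2 = spin_mul(x, x)                     # x^2 = (a^2 + sigma(v,v), 2a v)
lhs = spin_mul(spin_mul(x, y), x2)      # (x . y) . x^2
rhs = spin_mul(x, spin_mul(y, x2))      # x . (y . x^2)
assert lhs[0] == rhs[0] and (lhs[1] == rhs[1]).all()
```

The check is exact since all products stay in the integers.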
4.2. Quadratic factors. Since we are working in spaces having a scalar ½, there is no real difference between bilinear forms and quadratic forms. Recall that a quadratic form Q on a space V is a quadratic mapping from V to Φ, i.e. it is homogeneous of degree 2 (Q(αx) = α²Q(x) for all α ∈ Φ, x ∈ V) and its linearization Q(x, y) := Q(x + y) - Q(x) - Q(y) is bilinear in x and y (hence a symmetric bilinear form on V). Note that by definition Q(x, x) = Q(2x) - Q(x) - Q(x) = 4Q(x) - 2Q(x) = 2Q(x), so Q can only be recovered from the bilinear form Q(·, ·) with the help of a scalar ½, in which case the correspondences Q ↔ σ given by σ(x, y) := ½Q(x, y), Q(x) := σ(x, x) give an isomorphism between the categories of quadratic forms and symmetric bilinear forms. If we wished to study Jordan algebras where ½ is not available, we would have to use a description in terms of quadratic rather than bilinear forms.

A basepoint for a quadratic form Q is a point c with norm 1, Q(c) = 1. We can form the associated linear trace form T(x) := Q(x, c), which automatically has T(c) = Q(c, c) = 2. In the presence of ½ we can always decompose our space as V = Φc ⊕ V_0 for V_0 := {v | T(v) = 0} the "orthogonal complement" c^⊥, but instead of stressing the unit and its splitting, we will emphasize the equality of vectors and the unity of V. It turns out that reformulation in terms of quadratic forms and basepoints is the "proper" way to think of these Jordan algebras.

Quadratic Factors Example. (1) If Q is a quadratic norm form³ on a space W with basepoint c over Φ containing ½, we obtain a Jordan algebra Jord(Q, c) on W with unit 1 = c and product

x • y = ½( T(x)y + T(y)x - Q(x, y)1 )    (T(x) := Q(x, c)).

³This use of the term "norm" has nothing to do with the metric concept, such as the norm in a Banach space; the trace and norm are analogues of the trace and determinant of matrices, in particular are always polynomial functions. There is a general notion of "generic norm" for n-dimensional unital power associative algebras A over a field Φ: in the scalar extension A_Ω, for Ω := Φ[t_1, ..., t_n] the polynomial ring in n indeterminates t_i, the element x := t_1 x_1 + ... + t_n x_n is a "generic element" of A in the sense that every actual element of A arises from x through specialization t_i → α_i of the indeterminates to particular scalars in Φ. Then x satisfies a generic minimum polynomial resembling the characteristic polynomial for matrices: x^m - T(x)x^{m-1} + ... + (-1)^m N(x)1 = 0. The constant term N(x) is the generic norm; it is a homogeneous polynomial function of t_1, ..., t_n of degree m.
Taking y = x shows that every element x satisfies the degree-2 equation x² - T(x)x + Q(x)1 = 0 (we say that Jord(Q, c) has "degree 2"). The norm determines invertibility: an element in Jord(Q, c) is invertible iff its norm is an invertible scalar,

x invertible in Jord(Q, c) ⟺ Q(x) invertible in Φ,

in which case the inverse is a scalar multiple of x̄ := T(x)1 - x:⁴

x^{-1} = Q(x)^{-1} x̄.

(2) Over a field, the norm and basepoint completely determine the Jordan algebra: if we define two quadratic forms with basepoint to be equivalent (written (Q, c) ≅ (Q', c')) if there is a linear isomorphism φ : W → W' which preserves norms and basepoint, Q'(φ(x)) = Q(x) for all x ∈ W and φ(c) = c', then

Jord(Q, c) ≅ Jord(Q', c') ⟺ (Q, c) ≅ (Q', c').

(3) All isotopes are again quadratic factors,

Jord(Q, c)^{(u)} = Jord(Q^{(u)}, c^{(u)})    (where Q^{(u)} = Q(u)Q and c^{(u)} = u^{-1}).

Thus isotopy just changes basepoint and scales the norm form. It is messier to describe in terms of bilinear forms: in JSpin(V, σ)^{(u)}, the new σ^{(u)} is related in a complicated manner to the old σ, since it lives half on V and half off, and we are led ineluctably to the global Q:

JSpin(V, σ)^{(u)} ≅ JSpin(W, σ^{(u)}) for W := {x ∈ Φ1 ⊕ V | Q(x, ū) = 0}, σ^{(u)}(w, z) := ½ Q(u)Q(w, z).

All nondegenerate quadratic forms (equivalently, all nondegenerate symmetric bilinear forms) Q of dimension n + 1 over an algebraically closed field Φ of characteristic ≠ 2 are equivalent: they can be represented, relative to a suitable basis (starting with the basepoint c), as Q(v) = ⟨v, v⟩, hence Q(v, w) = 2⟨v, w⟩ = 2v^{tr}w for column vectors v, w in Φ^{n+1}.

⁴This is completely analogous to the recipe x^{-1} = det(x)^{-1} adj(x) in 2 x 2 matrices for the inverse as a scalar multiple of the adjoint. In the case of associative or alternative algebras the norm form, like the determinant, "permits composition": N(ab) = N(a)N(b) for all elements a, b ∈ A. In the case of a Jordan algebra the norm permits "Jordan composition" N(U_a b) = N(a)N(b)N(a) in terms of the U-operator discussed in the Survey and introduced in the next chapter. The crucial property in both cases is that a is invertible iff its norm N(a) is nonzero (i.e. invertible) in Φ.
Therefore all the corresponding Jordan algebras Jord(Q, c) are isomorphic to good old JSpin_n(Φ). Over a non-algebraically-closed field Φ, every nondegenerate Q can be represented, as a weighted sum of squares, by a diagonal matrix diag(β_1, β_2, ..., β_n) for nonzero β_i, but it is a delicate question when two such diagonal matrices produce equivalent quadratic forms. Nevertheless, all quadratic factors Jord(Q, c) of dimension n + 1 over a field Φ are forms of a spin factor JSpin_n: over the algebraic closure Ω of Φ, Jord(Q, c)_Ω ≅ JSpin_n(Ω). We consistently talk of Jordan algebras constructed from a bilinear form as spin factors JSpin(V, σ), and of those constructed from a quadratic form with basepoint as quadratic factors Jord(Q, c).
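As a concrete illustration (a Python sketch; the hyperbolic form Q(α, v) = α² - v·v and basepoint c = (1, 0) are my choices, not the text's), one can verify the degree-2 identity and the inverse formula exactly over the integers:

```python
import numpy as np

# Jord(Q, c) on W = Z x Z^3 with Q(alpha, v) = alpha^2 - v.v and c = (1, 0):
# then T(x) = Q(x, c) = 2*alpha and x-bar = T(x)1 - x = (alpha, -v).
def Q(x):
    a, v = x
    return a * a - int(v @ v)

def T(x):
    return 2 * x[0]

def mul(x, y):          # x . y = (T(x)y + T(y)x - Q(x,y)1)/2, exact over Z here
    Qxy = Q((x[0] + y[0], x[1] + y[1])) - Q(x) - Q(y)
    return ((T(x) * y[0] + T(y) * x[0] - Qxy) // 2,
            (T(x) * y[1] + T(y) * x[1]) // 2)

c = (1, np.zeros(3, dtype=int))
x = (2, np.array([3, 1, 0]))

# degree-2 identity: x^2 - T(x)x + Q(x)1 = 0
x2 = mul(x, x)
assert x2[0] - T(x) * x[0] + Q(x) * c[0] == 0
assert (x2[1] - T(x) * x[1] + Q(x) * c[1] == 0).all()

# x . x-bar = Q(x)1, so x^{-1} = Q(x)^{-1} x-bar whenever Q(x) is invertible
xbar = (T(x) - x[0], -x[1])
prod = mul(x, xbar)
assert prod[0] == Q(x) and (prod[1] == 0).all()
```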
5. Cubic Factors

Unlike the case of quadratic forms with basepoint, only certain very special cubic forms with basepoint can be used to construct Jordan algebras. A cubic form N on a space V over Φ is a map V → Φ which is homogeneous of degree 3 [N(αx) = α³N(x)] and which extends to arbitrary scalar extensions V_Ω by

N(Σ_i ω_i x_i) = Σ_i ω_i³ N(x_i) + Σ_{i≠j} ω_i² ω_j N(x_i; x_j) + Σ_{i<j<k} ω_i ω_j ω_k N(x_i; x_j; x_k),

where the linearization N(x; y) is quadratic in x and linear in y, while N(x; y; z) is symmetric and trilinear.
5.1. Jordan cubics. The test for admission to the elite circle of Jordan cubics is the existence of a unit having well-behaved trace and adjoint. A basepoint for N is a point c ∈ V with N(c) = 1. We have an associated linear trace form T(x) := N(c; x) and a quadratic spur form S(x) := N(x; c) whose linearization is just S(x, y) := S(x + y) - S(x) - S(y) = N(x; y; c). Automatically N(c) = 1, S(c) = T(c) = 3.

Jordan Cubic Definition. A finite-dimensional cubic form with basepoint (N, c) over a field Φ of characteristic ≠ 2 is defined to be a Jordan cubic⁵ if: (1) N is nondegenerate at the basepoint c, in the sense that the trace bilinear form

T(x, y) := T(x)T(y) - S(x, y)

is nondegenerate; (2) the quadratic adjoint (or sharp) map x → x^#, defined uniquely by T(x^#, y) = N(x; y), strictly satisfies the adjoint identity (x^#)^# = N(x)x.

⁵In the literature this is also called an admissible cubic, but this gives no clue as to what it is admitted for, whereas the term Jordan makes it clear that the cubic is to be used for building Jordan algebras.
Cubic Factors Theorem. (1) From every Jordan cubic form with basepoint we obtain a Jordan algebra Jord(N, c) with unit 1 := c and product determined from the linearization x#y := (x + y)^# - x^# - y^# of the sharp mapping by the formula

x • y := ½( x#y + T(x)y + T(y)x - S(x, y)1 ).

This algebra has degree 3,

x³ - T(x)x² + S(x)x - N(x)1 = 0,

with sharp mapping x^# = x² - T(x)x + S(x)1. (2) An element is invertible iff its norm is nonzero, in which case the inverse is a multiple of the adjoint (just as in matrix algebras): u ∈ J is invertible iff N(u) ≠ 0, in which case u^{-1} = N(u)^{-1} u^#. (3) For invertible elements the isotope is obtained (as in quadratic factors) by scaling the norm and shifting the unit,

Jord(N, c)^{(u)} = Jord(N^{(u)}, c^{(u)}) for c^{(u)} = u^{-1}, N^{(u)}(x) = N(u)N(x).
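The prototype to keep in mind is H_3(R) with N = det, T = trace, and basepoint c = 1, where the sharp is the classical adjugate; the theorem's identities can then be checked directly (a Python sketch of my own, exact integer arithmetic):

```python
import numpy as np

x = np.array([[2, 1, 0], [1, 3, 1], [0, 1, 1]])    # a symmetric element of H_3(R)
I = np.eye(3, dtype=int)

def sharp(m):           # m# = m^2 - T(m)m + S(m)1, with spur S = (T^2 - tr(m^2))/2
    t = np.trace(m)
    s = (t * t - np.trace(m @ m)) // 2
    return m @ m - t * m + s * I

xs = sharp(x)
Nx = (x @ xs)[0, 0]                                 # x x# = N(x)1 recovers N = det
assert ((x @ xs) == Nx * I).all()

# degree-3 identity x^3 - T(x)x^2 + S(x)x - N(x)1 = 0 (Cayley-Hamilton)
t = np.trace(x)
s = (t * t - np.trace(x @ x)) // 2
assert ((x @ x @ x - t * (x @ x) + s * x - Nx * I) == 0).all()

# adjoint identity (x#)# = N(x)x, behind the inverse formula u^{-1} = N(u)^{-1}u#
assert (sharp(xs) == Nx * x).all()
```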
5.2. Reduction. Over a field, the quadratic and cubic factors are either division algebras [where the norm Q or N is anisotropic] or reduced [where the norm is isotropic, equivalently the algebra has proper idempotents]. Simple reduced Jord(N, c)'s are always isomorphic to Jordan matrix algebras H_3(D, Γ) for a composition algebra D, and are split if the coordinate algebra is a split D = C_n(Φ) (in which case we can take all γ_i = 1, Γ = 1_n the identity matrix, H_3(D, Γ) = H_3(C_n(Φ))). We define an Albert algebra over a field to be an algebra Jord(N, c) determined by a Jordan cubic form of dimension 27; these come in 3 flavors: Albert division algebras, reduced Albert algebras H_3(O, Γ) for an octonion division algebra O with standard involution, and split Albert algebras H_3(O(Φ)) = Alb(Φ).
The cubic factor construction was originally introduced by H. Freudenthal for 3 x 3 hermitian matrices, where we have a very concrete representation of the norm.

Reduced Cubic Factors Example. If D is an alternative algebra with involution such that H(D) = Φ1, then for diagonal Γ = diag(γ_1, γ_2, γ_3) with entries invertible scalars, the twisted matrix algebra H_3(D, Γ) is a cubic factor Jord(N, c) with basepoint c := e_1 + e_2 + e_3 and norm given by

N(x) := α_1 α_2 α_3 - Σ_{cyclic} γ_j γ_k α_i n(a_i) + γ_1 γ_2 γ_3 t(a_1 a_2 a_3)

(summed over all cyclic permutations (i, j, k) of (1, 2, 3)), where we have

T(x) = Σ_i α_i,    S(x) = Σ_{cyclic} ( α_j α_k - γ_j γ_k n(a_i) ),

T(x, y) = Σ_{cyclic} ( α_i β_i + γ_j γ_k t(a_i b_i) ),

when the elements x, y are given for α_i, β_i ∈ Φ, a_i, b_i ∈ D by

x = Σ_{cyclic} ( α_i e_i + a_i[jk] ),    y = Σ_{cyclic} ( β_i e_i + b_i[jk] ).
6. Classification

We now have all the ingredients to be able to state the basic classification of finite-dimensional Jordan algebras over an arbitrary field Φ of characteristic ≠ 2, the crowning achievement of the Age of Enlightenment. Besides twisting, a new phenomenon that arises only over non-algebraically-closed fields is division algebras. For associative algebras, the only finite-dimensional division algebra over an algebraically closed field (like the complexes) is the field itself, but over the reals we have R (of course), as well as C of dimension 2 (which is not central-simple), but also the central-simple algebra of Hamilton's quaternions H of dimension 4. In the same way, the only finite-dimensional Jordan division algebra over an algebraically closed field Ω is Ω itself, but over a general field Φ there may be others. We have already noted that A⁺ is a Jordan division algebra iff A is an associative division algebra, Jord(Q, c) is a division algebra iff the quadratic form Q is anisotropic, and Jord(N, c) is a division algebra iff the cubic form N is anisotropic. This leads to the final classification of finite-dimensional algebras over a general
field. If J is finite-dimensional simple over Φ, its center is a finite extension field Ω of Φ, and J is finite-dimensional central-simple over Ω, so it suffices to classify all central-simple algebras.

Enlightened Structure Theorem. Let J be a finite-dimensional Jordan algebra over a field Φ of characteristic ≠ 2. (1) The radical of J is the maximal ideal of nilpotent elements, and the quotient J/Rad(J) is semisimple. (2) J is semisimple iff it is a finite direct sum of simple ideals. In this case, J has a unit element, and the simple decomposition is unique: the simple summands are precisely all minimal ideals of J. (3) Every simple unital J is central-simple over its center, which is a field. (4) J is central-simple over Φ iff it is isomorphic to exactly one of:

(0) Division Algebra: a finite-dimensional central Jordan division algebra over Φ;

(I) Quadratic Factor: Jord(Q, c) for an isotropic nondegenerate quadratic form with basepoint of finite dimension ≥ 3 over Φ;

(II) Hermitian Factor: H_n(D, Γ):
(IIa) H_n(Δ, Γ) for Δ a finite-dimensional central associative division algebra with involution over Φ, for n ≥ 2;
(IIb) H_n(Ex(Δ)) ≅ M_n(Δ)⁺ for Δ a finite-dimensional central associative division algebra over Φ, for n ≥ 2;
(IIc) H_n(Q(Φ)) for Q(Φ) the split quaternion algebra over Φ with standard involution, for n ≥ 3;

(III) Albert Algebra: Jord(N, c) ≅ H_3(O, Γ) of dimension 27 for O an octonion algebra over Φ with standard involution, only for n = 3.
Once more, the only exceptional algebras in the list are the 27-dimensional Albert algebras (III) and, possibly, some exceptional division algebras (0). Note that in (I) the Jord(Q, c) of dimension 1 are division algebras Φ⁺ as in (0), and those of dimension 2 are either division algebras or non-simple split binarions Φ⁺ ⊕ Φ⁺. In (IIa-b) for n = 1, H_1(Δ, Γ) = H(Δ) and M_1(Δ)⁺ = Δ⁺ fall under (0). In (IIc), note that if the coordinate quaternion algebra is not split it is a division algebra, hence falls under (IIa), and for n = 1, 2 it always falls under (I). In (III) the non-reduced algebras of cubic forms are division algebras and again fall under (0). (IIa) for n = 2 falls under (I) iff Δ is a composition algebra with standard involution, and (IIb) for n = 2 falls under (I) iff Δ = Φ; outside of these cases, the listed types of simple algebras are non-overlapping.
At this stage of development there was no way to classify the division algebras, especially to decide if there were any exceptional ones which weren't Albert algebras determined by anisotropic cubic forms. In fact, to this very day there is no general classification of all finite-dimensional associative division algebras: there is a general construction (crossed products) which yields all the division algebras over many important fields (including all algebraic number fields), but in 1972 Amitsur gave the first construction of a NON-crossed-product division algebra, and there is as yet no general characterization of these.
CHAPTER 4
The Classical Theory

Jordan algebras with minimum condition
In the 60's surprising new connections were found between Euclidean Jordan algebras and homogeneous cones, symmetric spaces, and bounded homogeneous domains in real and complex differential geometry by Max Koecher and his students. These connections demanded a broadening of the concept of Jordan systems to include triples and pairs, which were at the same time arising spontaneously from the mists of Lie algebras through the Tits-Kantor-Koecher construction.¹ A leading role in these investigations was played by Jordan triple products rather than binary products, especially by the U-operator and its inverse H_x = U_x^{-1}, which was a geometrically important transformation. The same U-operator was also cropping up in purely algebraic investigations of N. Jacobson and his students in connection with inverses, isotopies, and generic norms.
1. U-Operators

The classical theory of Jordan algebras cannot be understood without clearly understanding Jacobson's U-operators U_x, the corresponding Jordan triple product {x, y, z} obtained from it by linearization, and the related operators U_{x,y}, V_{x,y}, V_x:

U_x := 2L_x² - L_{x²},    U_{x,z} := U_{x+z} - U_x - U_z,
V_{x,y}(z) := {x, y, z} := U_{x,z}(y),    V_x(y) := {x, y},
V_x = U_{x,1} = V_{x,1} = V_{1,x}.

Note that {x, y, x} = 2U_x y. In your heart, you should always think of the U-operator as "outer" left-and-right multiplication, and the V-operator as left-plus-right multiplication, leading to a Meta-Principle:

U_x y ~ xyx,    {x, y, z} ~ xyz + zyx,    {x, y} ~ xy + yx.

¹To avoid embarrassing American mispronunciations, the three distinguished European mathematicians' names are zhahk teets, iss-eye kahn-tor, and (approximately) mahks kurr-ssher (the German ch- should be pronounced like that in Loch (not Lock!) Ness, definitely not the ch- in church).
This is exactly what they amount to in special algebras².

Basic U Examples. (1) (Special U) In any special Jordan algebra J ⊆ A⁺ the U-operator and its relatives are given by

U_x y = xyx,    V_{x,y} z = U_{x,z} y = {x, y, z} = xyz + zyx,    V_x y = {x, y} = xy + yx.

(2) (Quadratic Factor U) In the Jordan algebra Jord(Q, c) determined by a quadratic form with basepoint, the U-operators take the form

U_x y = Q(x, ȳ)x - Q(x)ȳ    for ȳ = T(y)c - y.

(3) (Cubic Factor U) In the Jordan algebra Jord(N, c) determined by a Jordan cubic form N with basepoint over Φ, the U-operators take the form

U_x y = T(x, y)x - x^# # y.
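The operator definition U_x = 2L_x² - L_{x²} really does collapse to the sandwich xyx in a special algebra; here is a quick random-matrix check (a Python/numpy sketch, illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)
x, y, z = rng.integers(-3, 4, (3, 3, 3))

def bullet(a, b):                 # Jordan product in M_3(R)^+
    return (a @ b + b @ a) / 2

def U(a, b):                      # U_a b = 2 L_a(L_a b) - L_{a^2} b
    return 2 * bullet(a, bullet(a, b)) - bullet(a @ a, b)

assert np.allclose(U(x, y), x @ y @ x)                   # the sandwich
assert np.allclose(U(x + z, y) - U(x, y) - U(z, y),      # {x, y, z}
                   x @ y @ z + z @ y @ x)
```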
2. Quadratic Axioms

Jacobson conjectured, and in 1958 I.G. Macdonald first proved, the

Fundamental Formula:    U_{U_x y} = U_x U_y U_x

for arbitrary Jordan algebras. Analytic and geometric considerations led Max Koecher to these same operators and the same formula (culminating in his book Jordan-Algebren with Hel Braun in 1966). Several notions which had been cumbersome using the L-operators became easy to clarify with U-operators. These operators will appear on almost every page in the rest of this book.

2.1. The quadratic program. The emphasis began to switch from the Jordan product x • y = ½(xy + yx), which was inherently limited to characteristic ≠ 2, to the product xyx, which was ring-theoretic in nature and made sense for arbitrary scalars. If a description of Jordan algebras in quadratic terms could be obtained, it would not only fill in the gap of characteristic 2, but would also open the way to study arithmetic properties of Jordan rings (where the ring of scalars was the integers Z, which certainly did not contain ½). In analogy with

²WARNING: in the linear theory it is more common to use ½{xyz} as the triple product, so that {x, y, x} = U_x y; you must always be careful to determine which convention is being used. We will always use the above convention that makes no reference to ½. Thus the triple product in special algebras becomes the 3-tad {x_1, x_2, x_3} = x_1 x_2 x_3 + x_3 x_2 x_1.
Jordan's search for Jordan axioms, it was natural to seek a quadratic axiomatization which: (1) agreed with the usual one over scalars containing ½, (2) admitted all 3 Basic Types (Hermitian, Quadratic, Cubic) of simple algebras in characteristic 2, (3) admitted essentially nothing new in the way of simple algebras. It was well known that in the theory of Lie algebras the passage from the classical characteristic-0 theory to characteristic p produces a raft of new simple algebras (only classified in characteristics p > 7 during the 80's). In contrast, Jordan algebras behave exactly the same in characteristic p ≠ 2 as they do in characteristic 0, and it was hoped this would continue to characteristic 2 as well, making the theory completely uniform.
2.2. The axioms. A student of Emil Artin, Hans-Peter Lorenzen, attempted in 1965 an axiomatization based entirely on the Fundamental Formula, but was not quite able to get a satisfactory theory. It turned out that, in the presence of a unit element, one other axiom is needed. The final form, which I gave in 1967, goes as follows.

Quadratic Jordan Definition. A unital quadratic Jordan algebra J consists of a linear space on which a product U_x y is defined which is linear in the variable y and quadratic in x (i.e. U : x → U_x is a quadratic mapping of J into End_Φ(J)), together with a choice of unit element 1, such that the following operator identities hold strictly:

(QJAx1) U_1 = 1_J;
(QJAx2) V_{x,y} U_x = U_x V_{y,x}    (V_{x,y} z := {x, y, z} := U_{x,z} y);
(QJAx3) U_{U_x y} = U_x U_y U_x.³

The strictness condition means these identities hold not only for all elements x, y in J, they also continue to hold in all scalar extensions J_Ω; it suffices if they hold in the polynomial extension J[t] = J_{Φ[t]} for an indeterminate t. This turns out to be equivalent to the condition that all formal linearizations of these identities remain valid on J itself. (Notice that (QJAx2) is of degree 3, and (QJAx3) of degree 4, in x, hence do not automatically linearize [unless there are sufficiently many invertible scalars, e.g. a field with at least 4 elements], but they do automatically linearize in y, since they are respectively linear and quadratic in y.) Nonunital Jordan algebras could be intrinsically axiomatized in terms of two products, U_x y and x² (which would result from applying U_x to the absent unit element). However, the resulting axioms are too messy to remember, and it is much easier to define nonunital algebras as those whose unital hull Ĵ, under

U_{α1̂+x}(β1̂ + y) := α²β 1̂ + [α²y + 2αβx + α{x, y} + βx² + U_x y],

satisfies the 3 easy-to-grasp identities (QJAx1-3)⁴.

³In a 1966 announcement I used in place of (QJAx2) the simpler axiom (QJAx2′): V_{x,x} = V_{x²} or {x, x, y} = {x², y}, but in proving that homotopes remained Jordan it became clear that one could get the job done quicker by assuming (QJAx2) as the axiom. [Note that multiplying (QJAx2) on the right by U_y leads immediately to V^{(y)}_x U^{(y)}_x = U^{(y)}_x V^{(y)}_x, which is just the Jordan identity in the y-homotope.] Further evidence that this is the "correct" axiomatization is that Kurt Meyberg showed in 1972 that quadratic Jordan triples, and Ottmar Loos in 1975 that quadratic Jordan pairs, could be axiomatized by (QJAx2), (QJAx3), and (QJAx4): V_{U_x y, y} = V_{x, U_y x} [whereas (QJAx2′) only made sense in algebras].
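In a special algebra, with an honest identity matrix standing in for the formal unit, the unital-hull U-operator is just the expansion of (α1 + x)(β1 + y)(α1 + x); a quick Python check of that expansion (my own illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
x, y = rng.integers(-3, 4, (2, 3, 3))
I = np.eye(3, dtype=int)
a, b = 2, -3                                      # alpha, beta scalars

lhs = (a * I + x) @ (b * I + y) @ (a * I + x)     # U_{a1+x}(b1+y) in M_3(R)^+
rhs = (a * a * b) * I + a * a * y + 2 * a * b * x \
      + a * (x @ y + y @ x) + b * (x @ x) + x @ y @ x
assert (lhs == rhs).all()
```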
2.3. Justification. We have already indicated why these axioms meet criterion (1) of the program, that quadratic and linear Jordan algebras are the same thing in the presence of ½. It is not hard to show that these axioms also meet criterion (2), that the U-operators of the 3 Basic Examples do satisfy these axioms (though the cubic factors provide some tough slogging). For example, in special algebras J ⊆ A⁺ the operations xyx and xyz + zyx satisfy

(QJAx1) 1z1 = z;
(QJAx2) xy(xzx) + (xzx)yx = x(yxz + zxy)x;
(QJAx3) (xyx)z(xyx) = x(y(xzx)y)x;

and the same remains true in the polynomial extension since it remains special, J[t] ⊆ A[t]⁺. With considerable effort it can be shown that criterion (3) is met, that these are (up to some characteristic-2 "wrinkles") the only simple unital quadratic Jordan algebras. Thus the quadratic approach provides a uniform way of describing Jordan algebras in all characteristics.
3. Inverses

One of the first concepts to be simplified using U-operators was the notion of inverses. In fact, inverses and U-operators were introduced and the Fundamental Formula conjectured by Jacobson in 1956 to fight his way through algebras of "degree 1", but until the Fundamental

⁴For any masochists in the audience, the requisite identities for nonunital algebras are (QJ1) V_{x,x} = V_{x²}, (QJ2) U_x V_x = V_x U_x, (QJ3) U_x(x²) = (x²)², (QJ4) U_{x²} = U_x², (QJ5) U_x U_y(x²) = (U_x y)², (QJ6) U_{U_x(y)} = U_x U_y U_x. For some purposes it is easier to work with axioms involving only one element x, replacing (QJ5-6) by (QJ5′) (x²)³ = (x³)², (QJ6′) U_{x³} = U_x³.
Formula was approved for use following I.G. Macdonald's proof in 1958, it was fighting with one hand tied behind one's back.

Quadratic Inverse Proposition. The following conditions on an element x of a unital Jordan algebra are equivalent:
(i) x has a Jordan inverse y: x • y = 1, x² • y = x;
(ii) x has a quadratic Jordan inverse y: U_x y = x, U_x(y²) = 1;
(iii) the U-operator U_x is an invertible operator on J.
In this case the inverse y is unique; if we denote it by x^{-1}, we have x^{-1} = U_x^{-1} x and it satisfies U_{x^{-1}} = U_x^{-1}.
In general, the operator L_x is not invertible if x is, and even when it is invertible we do not generally have L_{x^{-1}} = L_x^{-1}. For example, the real quaternion algebra is a division algebra (both as an associative algebra H and as a Jordan algebra H⁺), yet the invertible elements i and j satisfy i • j = 0, so they are "divisors of zero" with respect to the Jordan product, and the operators L_i and L_j are not invertible. More generally, in an algebra Jord(Q, c) determined by a quadratic form with basepoint, two invertible elements x, y (Q(x), Q(y) ≠ 0) may well have x • y = 0 (if they are orthogonal and traceless, Q(x, y) = T(x) = T(y) = 0).

Basic Inverse Examples. (1) (Special Inverses) In any special Jordan algebra J ⊆ A⁺, an element x ∈ J is invertible in J iff it is invertible in A and the associative inverse x^{-1} ∈ A falls in J. (2), (3) (Quadratic and Cubic Inverses) In a quadratic or cubic factor Jord(Q, c) or Jord(N, c) determined by a quadratic form Q or cubic form N over Φ, x is invertible iff Q(x) or N(x) is an invertible scalar (over a field this just means Q(x) ≠ 0 or N(x) ≠ 0).⁵
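The quaternion phenomenon i • j = 0 is easy to see in the standard 2 x 2 complex matrix model of H (a Python sketch, illustrative):

```python
import numpy as np

# model of H inside M_2(C): 1 -> I, i and j as below, k = ij
i = np.array([[1j, 0], [0, -1j]])
j = np.array([[0, 1], [-1, 0]], dtype=complex)

assert np.allclose(i @ i, -np.eye(2)) and np.allclose(j @ j, -np.eye(2))

bullet = (i @ j + j @ i) / 2                  # Jordan product in H^+
assert np.allclose(bullet, 0)                 # i . j = 0, so L_i kills j
assert abs(np.linalg.det(i)) > 0 and abs(np.linalg.det(j)) > 0  # yet both invertible
```

So L_i annihilates the nonzero element j even though i is invertible, while U_i y = iyi is of course still an invertible operator.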
Once we have a notion of inverse, a division algebra is an algebra in which all nonzero elements are invertible.

Basic Division Examples. (1), (2) (Full and Hermitian Division) If Δ is an associative division algebra, then Δ⁺ is a Jordan division algebra; if Δ has an involution, then H(Δ, *) is a Jordan division algebra (note that the inverse of a hermitian element is again hermitian). (3), (4) (Quadratic and Cubic Division) Jord(Q, c) or Jord(N, c) determined by a quadratic or cubic form over a field is a division algebra iff Q or N is anisotropic (Q(x) = 0 or N(x) = 0 implies x = 0).
⁵This result holds in any Jordan algebra with a "generic norm" N, the analogue of the determinant [cf. footnote 4 in 3.4.2].
4. Isotopes

Another concept that is considerably simplified by adopting the U-viewpoint is that of isotopes; recall from the Homotope Definition in 3.2 that for any element u the homotope J^{(u)} was the Jordan algebra with product

x •_u y = x • (u • y) + (x • u) • y - u • (x • y).

Though the left multiplication operator is messy to describe, L^{(u)}_x = L_{x•u} + [L_x, L_u] = ½V_{x,u}, the U-operator and square have crisp formulations.

Quadratic Homotope Definition. (1) If u is any element of a Jordan algebra J, the u-homotope J^{(u)} has shifted products

x^{(2,u)} := U_x u,    x •_u y := ½{x, u, y} = ½U_{x,y} u,    U^{(u)}_x = U_x U_u.

The homotope is again a Jordan algebra, as can easily be checked using the quadratic axioms for a Jordan algebra. If J ⊆ A⁺ is special so is any homotope, J^{(u)} ⊆ (A^u)⁺. (2) The homotope is unital iff the original algebra is unital and u invertible, in which case the unit is 1^{(u)} = u^{-1}, and we speak of the u-isotope. Homotopy is reflexive and transitive, and isotopy is symmetric (hence an equivalence relation among algebras):
J^{(1)} = J,    (J^{(u)})^{(v)} = J^{(U_u v)},    (J^{(u)})^{(u^{-2})} = J.

In general homotopy is not an equivalence relation: if u is not invertible we cannot recover J from J^{(u)}, and information is lost in the passage to the homotope.
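Again matrices make the formulas tangible: in M_3(R)⁺ with an invertible u, the homotope product is x •_u y = ½(xuy + yux), the isotope unit is u^{-1}, and U^{(u)}_x y = xuyux = U_x U_u y (a Python sketch, illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
x, y = rng.integers(-3, 4, (2, 3, 3)).astype(float)
u = np.diag([2.0, 1.0, -1.0])                    # an invertible twist

def bullet_u(a, b):                              # product of the u-homotope
    return (a @ u @ b + b @ u @ a) / 2

# the isotope unit is u^{-1}
assert np.allclose(bullet_u(np.linalg.inv(u), y), y)

# homotope U-operator via 2L^2 - L_{square} agrees with the sandwich x u y u x
xsq_u = x @ u @ x                                # x^(2,u) = U_x u
Uu_x_y = 2 * bullet_u(x, bullet_u(x, y)) - bullet_u(xsq_u, y)
assert np.allclose(Uu_x_y, x @ u @ y @ u @ x)    # = U_x U_u y
```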
5. Inner Ideals

The greatest single advantage of looking at things from the U rather than the non-U point of view is that it leads naturally to one-sided ideals. Linear Jordan algebras, or any linear algebra with a commutative or anticommutative product, will have no notion of one-sided ideal: every left or right ideal is automatically a two-sided ideal. Now the product xyx doesn't have a left and right side, it has an in side and an out side: we multiply x on the inside by y, and y on the outside by x. Just as a left ideal in a linear algebra A is a subspace B invariant under multiplication by A on the left, ÂB ⊆ B, and a right ideal is invariant under multiplication on the right, BÂ ⊆ B, it is natural to define an inner ideal to be invariant under multiplication on the inside.
The introduction of inner ideals is one of the most important steps towards the modern structure theories of Jacobson and Zel'manov.

Inner Ideal Definition. A subspace B ⊆ J is called an inner ideal if it is closed under multiplication on the inside by Ĵ: U_B Ĵ ⊆ B (or, equivalently, U_B J ⊆ B and B² ⊆ B).

Principal Example. The Fundamental Formula shows that any element b determines an inner ideal

(b] := U_b Ĵ = Φb² + U_b J,

called the principal inner ideal determined by b.
Basic Inner Examples. (1) (Full) In the Jordan algebra A⁺ for associative A, any left or right ideal L or R of A is an inner ideal, hence also their intersection L ∩ R, as well as any subspace aAb. (2) (Hermitian) In the Jordan algebra H(A, *) for an associative algebra A, any subspace aHa* for a in A is an inner ideal. (3) (Quadratic) In a quadratic factor Jord(Q, c) over Φ, any totally isotropic Φ-subspace B (a Φ-subspace consisting entirely of isotropic vectors, i.e. on which the quadratic form is totally trivial, Q(B) = 0) forms an inner ideal: U_b J = Q(b, J̄)b - Q(b)J̄ = Q(b, J̄)b ⊆ Φb ⊆ B. Notice that if B is a totally isotropic subspace, so are all subspaces of B, hence all subspaces of B are again inner ideals. If Q is nondegenerate over a field Φ, these totally isotropic subspaces are the only inner ideals other than J. Indeed, the principal inner ideals are (b] = J if Q(b) ≠ 0 (since then b is invertible), and (b] = Φb if Q(b) = 0 (since if b ≠ 0 then by nondegeneracy Q(b) = 0 ≠ Q(b, J) implies Q(b, J) = Φ since Φ is a field, so U_b J = Q(b, J̄)b = Φb). (4) (Cubic) Analogously, in a cubic factor Jord(N, c) any sharpless Φ-subspace B ⊆ J (i.e. a subspace on which the sharp mapping vanishes, B^# = 0) forms an inner ideal: U_b J = T(b, J)b - b^# # J = T(b, J)b - 0 ⊆ Φb ⊆ B.
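For instance, in M_2(R)⁺ the principal inner ideal of b = E_11 is the one-dimensional corner (b] = Φb² + U_b J = R E_11; checking with random matrices (Python sketch, illustrative):

```python
import numpy as np

b = np.array([[1, 0], [0, 0]])              # b = E11 in M_2(R)^+
rng = np.random.default_rng(5)
for _ in range(5):
    m = rng.integers(-5, 6, (2, 2))
    p = b @ m @ b                           # U_b m
    assert p[0, 1] == 0 and p[1, 0] == 0 and p[1, 1] == 0   # lands in R E11
assert (b @ b == b).all()                   # b^2 = b, so (b] = R E11
```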
6. Nondegeneracy

Another concept which requires U-operators for its formulation is the crucial "semisimplicity" concept for Jordan algebras, discovered by Jacobson: a Jordan algebra is nondegenerate if it has no nonzero trivial elements, where an element z ∈ J is trivial if its U-operator is trivial on the unital hull (equivalently, its principal inner ideal vanishes):

(z] = U_z Ĵ = 0,    i.e. U_z J = 0, z² = 0.
Notice that we never have nonzero elements with L_z trivial on the unital hull.

6.1. Trivial elements. Nondegeneracy, the absence of trivial elements, is the useful Jordan version of the associative concept of semiprimeness, the absence of trivial ideals BB = 0. For associative algebras, trivial elements z are the same as trivial ideals B = ÂzÂ, since zÂz = 0 ⟺ BB = (ÂzÂ)(ÂzÂ) = 0. A major difficulty in Jordan theory is that there is no convenient characterization of the Jordan ideal generated by a single element z. Because element conditions are much easier to work with than ideal conditions, the elemental notion of nondegeneracy has proven much more useful than semiprimeness.

Basic Trivial Examples. (1) (Full) An element z of A⁺ is trivial iff it generates a trivial two-sided ideal B = ÂzÂ. In particular, the Jordan algebra A⁺ is nondegenerate iff the associative algebra A is semiprime. (2) (Hermitian) If H(A, *) has trivial elements, then so does A. In particular, if A is semiprime with involution then H(A, *) is nondegenerate. (3) (Quadratic) An element of a quadratic factor Jord(Q, c) determined by a quadratic form with basepoint over a field is trivial iff it belongs to Rad(Q). In particular, Jord(Q, c) is nondegenerate iff the quadratic form Q is nondegenerate, Rad(Q) = 0. (4) (Cubic) A cubic factor Jord(N, c) determined by a Jordan cubic form with basepoint over a field is always nondegenerate.
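A minimal example of a trivial element, checked by machine (Python sketch, my own choice of algebra): in the non-semiprime algebra A of upper triangular 2 x 2 matrices, z = E_12 satisfies zÂz = 0:

```python
import numpy as np

z = np.array([[0, 1], [0, 0]])                   # z = E12, with z^2 = 0
rng = np.random.default_rng(4)
for _ in range(5):
    a = np.triu(rng.integers(-5, 6, (2, 2)))     # random upper triangular a
    assert (z @ a @ z == 0).all()                # U_z a = z a z = 0
assert (z @ z == 0).all()                        # so (z] = U_z Ahat = 0: z is trivial
```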
6.2. Radical remarks. In associative theory there are many different radicals designed to remove different sorts of pathology and create different sorts of "niceness". The more difficult a "pathology" is to remove, the larger the corresponding radical will have to be. The usual "niceness" condition on an associative algebra A is semiprimeness (no trivial ideals, equivalently no nilpotent ideals), which is equivalent to the vanishing of the obstacle to semiprimeness, the prime radical Prime(A) (also called the semiprime or Baer radical). This obstacle is the smallest ideal P such that A/P is semiprime, and is the intersection of all prime ideals (those P such that A/P is a "prime algebra" in the sense of having no orthogonal ideals). For commutative associative rings, prime means integral domain, semiprime means no nilpotent elements x^2 = 0, and simple means field; every semiprime commutative ring is a subdirect product of prime rings, and every prime ring imbeds in a simple ring (its field of fractions). It is often helpful to think of the relationship of prime to simple for general rings as analogous to
that between a domain and its field of fractions. In noncommutative associative algebras, the matrix algebras M_n(D) are prime if D is a domain, and imbed in a simple "ring of central quotients" M_n(F) for F the field of fractions of D. The more restrictive notion of semiprimitivity (no "quasi-invertible" ideals) is equivalent to the vanishing of the obstacle to semiprimitivity, the larger primitive radical Rad(A) (also called the semiprimitive, or Jacobson, or simply THE, radical). This is the smallest ideal R such that A/R is semiprimitive, and is the intersection of all primitive ideals (those Q such that A/Q is a "primitive algebra," i.e. has a faithful irreducible representation).⁶ For artinian algebras these two important radicals coincide. There are analogous radicals for Jordan algebras.

Radical Definitions. (1) Two ideals I, K ◁ J in a Jordan algebra are orthogonal if U_I K = 0. A Jordan algebra is prime if it has no orthogonal ideals,
U_I K = 0 ⟹ I = 0 or K = 0 (J prime),
and is semiprime if it has no self-orthogonal ideals,
U_I I = 0 ⟹ I = 0 (J semiprime).
This latter is equivalent to the absence of trivial ideals U_I I = I^2 = 0, or nilpotent ideals I^n = 0 [where the power I^n is defined as the span of all homogeneous Jordan products of degree n when expressed in terms of products x • y (thus U_x y and {x, y, z} have degree 3; {x, y} and x^2 have degree 2)], or solvable ideals I^(n) = 0 [where the derived ideal I^(n) is defined recursively by I^(0) = I, I^(n+1) = U_{I^(n)} I^(n)]. (2) The obstacle to semiprimeness is the prime (or semiprime or Baer) radical Prime(J), the smallest ideal whose quotient is semiprime, which is the intersection of all ideals whose quotient is prime; J is semiprime iff Prime(J) = 0 iff J is a subdirect product of prime algebras. (3) A Jordan algebra is semiprimitive if it has no properly quasi-invertible elements [elements z with 1̂ − z invertible in all homotope hulls
⁶ Because it is the quotients A/P, A/Q that are prime or primitive, it can be
confusing to call the ideals P, Q prime or primitive (they are not prime or primitive as algebras in their own right), and it would be more natural to call them co-prime and co-primitive ideals. An algebra turns out to be semiprime iff it is a subdirect product of prime algebras, and semiprimitive iff it is a subdirect product of primitive algebras. In general, for any property P of algebras, semi-P means "subdirect product of P-algebras." The only exception is semisimple, which has become fossilized in its finite-dimensional meaning "direct sum of simples".
Ĵ^(u)]. The obstacle to semiprimitivity is the primitive (or semiprimitive or Jacobson) radical Rad(J), the smallest ideal whose quotient is semiprimitive; J is semiprimitive iff Rad(J) = 0. [We will see in Part IV that Zel'manov discovered the correct Jordan analogue of primitivity, such that Rad(J) is the intersection of all ideals whose quotients are primitive, and J is semiprimitive iff it is a subdirect product of primitive algebras.] The archetypal example of a prime algebra is a simple algebra, and the archetypal example of a non-prime algebra is a direct sum J_1 ⊕ J_2. The archetypal example of a semiprime algebra is a semisimple algebra (direct sum of simples), and the archetypal example of a non-semiprime algebra is a direct sum N ⊕ T of a nice algebra and a trivial algebra.
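Radical Definition (1) can be illustrated in the upper-triangular algebra T_3(Φ)+ of the Basic Trivial Examples: the strictly upper-triangular matrices form a self-orthogonal ideal, so T_3(Φ)+ is not semiprime. A minimal sketch (helper names ours):

```python
# In J = T_3(Φ)+ the strictly upper-triangular matrices form an ideal I with
# U_I I = 0: any product xyx of strict uppers of size 3 vanishes, so I is a
# self-orthogonal (trivial) ideal and the derived ideal I^(1) = U_I I is 0.
def mmul(x, y):
    n = len(x)
    return [[sum(x[i][k] * y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def strict_upper(a, b, c):
    return [[0, a, b], [0, 0, c], [0, 0, 0]]

zero = [[0] * 3 for _ in range(3)]
samples = [strict_upper(1, 2, 3), strict_upper(-1, 0, 5), strict_upper(4, 4, 4)]
for x in samples:
    for y in samples:
        # in A+ the U-operator is U_x y = xyx, which already vanishes here
        assert mmul(mmul(x, y), x) == zero
```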
7. i-Special and i-Exceptional
It had been known for a long time that while the class of special algebras was closed under the taking of subalgebras and direct products, it was not closed under taking homomorphic images: P.M. Cohn had given an example in 1954 showing the quotient of the free special Jordan algebra on two variables x, y by the ideal generated by x^2 − y^2 was no longer special (cf. Example A.5). By a general result of Birkhoff, any class C of algebras closed under subalgebras, direct products, and homomorphic images forms a variety defined by a family F of identities (identical relations): A ∈ C ⟺ A satisfies all the identities in F (f(a_1, ..., a_n) = 0 for all f(x_1, ..., x_n) ∈ F and all elements a_1, ..., a_n ∈ A). Thus the class of special algebras was not a variety, but its "varietal closure", the slightly larger class of homomorphic images of special algebras, could be defined by identities. These identities are called the special identities or s-identities: they are precisely all Jordan polynomials which vanish on all special algebras (hence automatically on their homomorphic images as well), but not on all Jordan algebras (they are nonzero elements of the free Jordan algebra). The first thing to say is that there weren't supposed to be any s-identities! Remember that Jordan's goal was to capture the algebraic behavior of hermitian operators in the Jordan axioms. The s-identities are in fact just the algebraic identities involving the Jordan product satisfied by all hermitian matrices (of arbitrary size), and in principle all of these were supposed to have been incorporated into the Jordan axioms! A non-constructive proof of the existence of s-identities was first given by A.A. Albert and Lowell J. Paige in 1959 (when it was far too late to change the Jordan axioms), by showing there must be nonzero Jordan polynomials f(x, y, z) in three variables which vanish
on all special algebras (become zero in the free special algebra on three generators). The first explicit s-identities G8 and G9 were discovered by Jacobson's student Charles M. Glennie in 1963: as we remarked in the Survey, these could not possibly have been discovered without the newly-minted notions of the Jordan triple product and U-operators, and indeed even in their U-form no mortal other than Glennie has been able to remember them for more than 15 minutes.⁷ It is known that there are no s-identities of degree ≤ 7, but to this day we do not know exactly what all the s-identities are, or even if they are finitely generated. We call an algebra identity-special or i-special if it satisfies all s-identities, i.e. belongs to the varietal closure of the special algebras. An algebra is identity-exceptional or i-exceptional if it is not i-special, i.e. does not satisfy all s-identities. Since the class of i-special algebras is slightly larger than that of special algebras, the class of i-exceptional algebras is slightly smaller than that of exceptional algebras. To be i-exceptional means that not only is the algebra exceptional, it doesn't even look special as far as its identities go: we can tell it apart from the special algebras just by examining the identities it satisfies, not by all its possible imbeddings in associative algebras. The arguments of Albert, Paige, and Glennie showed that the Albert algebra was in fact i-exceptional. Notice again that according to Jordan's philosophy the i-exceptional algebras were uninteresting (with respect to the s-identities they didn't behave like hermitian operators); only exceptional-but-i-special algebras could provide an alternative setting for quantum mechanics. It was a happy accident that Jordan didn't know about Glennie's identities when he set up his axioms, or else the Albert algebra might never have been born.
8. Artin-Wedderburn-Jacobson Structure Theorem
Inner ideals were first introduced by David M. Topping in his 1965 Memoir on Jordan algebras of self-adjoint operators; he called them quadratic ideals, and explicitly motivated them as analogues of one-sided ideals in associative operator algebras. Jacobson was quick to
⁷ At the first Oberwolfach conference on Jordan algebras in 1967 Charles Glennie
sat down and explained to me the procedure he followed to discover G8 and G9. After 15 minutes it was clear to me that he had a systematic rational method to discover the identities. On the other hand, 15 minutes after his explanation it was also clear that there was no systematic procedure for me to remember the method. Thedy's identity T10, since it was so compactly expressed in terms of a fictitious commutator acting like it belonged to the (at that time highly fashionable) structure group, fit smoothly into the human brain's Jordan receptor cells.
realize the significance of this concept, and in 1966 used it to define artinian Jordan algebras in analogy with artinian associative algebras, and to obtain for them a beautiful Artin-Wedderburn structure theory in the paper "Structure theory for a class of Jordan algebras".⁸ Later on he proposed the terms "inner" and "outer ideal", which won immediate acceptance.

Artinian Definition. A Jordan algebra J is artinian if it has minimum condition on inner ideals: every collection {B_i} of inner ideals of J has a minimal element (a B_k not properly containing any other B_i of the collection). This is, as usual, equivalent to the descending chain condition (d.c.c.) on inner ideals: any strictly descending chain B_1 > B_2 > ... of inner ideals must stop after a finite number of terms (there is no infinite such chain).

Artin-Wedderburn-Jacobson Structure Theorem. (1) Degeneracy can be localized in the nondegenerate radical⁹ M(J): there is a smallest ideal M(J) such that J/M(J) is nondegenerate; in general this is related to the semiprime and the semiprimitive radicals by Prime(J) ⊆ M(J) ⊆ Rad(J), and when J has minimum condition M(J) = Rad(J). (2) J is nondegenerate with minimum condition iff it is a finite direct sum of ideals which are simple nondegenerate with minimum condition; in this case J has a unit, and the decomposition into simple summands is unique. (3) J is simple nondegenerate with minimum condition iff it is isomorphic to one of the following:
(0) Division Type: a Jordan division algebra;
(I) Quadratic Type: a quadratic factor Jord(Q, c) determined by a nondegenerate quadratic form Q with basepoint c over a field Φ, such that Q is not split of dimension 2 and has no infinite-dimensional totally isotropic subspaces;
(II) Hermitian Type: an algebra H(A, ∗) for a ∗-simple artinian associative algebra A;
⁸ My own interest in quadratic Jordan algebras began when I read this paper, which showed that an entire structure theory could be based on the U-operator; to paraphrase Archimedes, "Give me a Fundamental Formula and I will move the world."
⁹ This is sometimes called the McCrimmon radical in the Russian literature; although Jacobson introduced nondegeneracy as the "correct" notion of semisimplicity, he did not explicitly collect it into a radical. Zel'manov showed M could be characterized in terms of m-sequences (analogous to the semiprime radical in associative theory).
(III) Albert Type: an exceptional Albert algebra Jord(N, c) of dimension 27 over a field Φ, given by a Jordan cubic form N.
In more detail, there are 3 algebras of Hermitian Type (II), the standard Jordan matrix algebras coming from the 3 standard types of involutions on simple artinian algebras:
(Exchange Type) M_n(Δ)+ for an associative division ring Δ (when A is ∗-simple but not simple, A = Ex(B) = B ⊕ B^op under the exchange involution for a simple artinian algebra B = M_n(Δ));
(Orthogonal Type) H_n(Δ, Γ) for an associative division ring Δ with involution (A = M_n(Δ) simple artinian with involution of orthogonal type);
(Symplectic Type) H_n(Q, Γ) for a quaternion algebra Q over a field Φ with standard involution (A = M_n(Q) simple artinian with involution of symplectic type).
Notice the two caveats in Quadratic Type (I). First, we must rule out the split 2-dimensional case because it is not simple, merely semisimple: in dimension 2 the quadratic form is either anisotropic, and the Jordan algebra a division algebra, or the quadratic form is reduced and the Jordan algebra has two orthogonal idempotents, hence J = Φe_1 ⊕ Φe_2 ≅ Φ ⊕ Φ splits into a direct sum of two copies of the ground field, and is not simple. As soon as the dimension is ≥ 3 the e_1, e_2 are tied back together by other elements v into one simple algebra. The second, more annoying, caveat concerns the d.c.c., not central simplicity: past dimension 2 the nondegenerate Quadratic Types are always central-simple and have at most two orthogonal idempotents, but in certain infinite-dimensional situations they might still have an infinite descending chain of inner ideals: by the Basic Inner Examples any totally isotropic subspace B (where every vector is isotropic, Q(B) = 0) is an inner ideal, and any subspace of B is also a totally isotropic inner ideal, so if Q has an infinite-dimensional totally isotropic B with basis v_1, v_2, ... then it will have an infinite shrinking chain of inner ideals B(k) = Span({v_k, v_{k+1}, ...}). Loos has shown that such a J always has d.c.c. on principal inner ideals, so it just barely misses being artinian. These poor Jord(Q, c)'s are left outside while their siblings party inside with the artinian simple algebras. The final classical formulation in the next section will revise the entrance requirements, allowing these to join the party too.
The question of the structure of Jordan division algebras remained open: since they had no proper idempotents e ≠ 0, 1 and no proper inner ideals, the classical techniques were powerless to make a dent in their structure. The nature of the radical also remained open. From the
associative theory one expected M(J) to coincide with P rime(J) and be nilpotent in algebras with minimum condition, but a proof seemed exasperatingly elusive. At this point a hypertext version of this book would include a melody from the musical \Oklahoma": Everything's up to date in Jordan structure theory They've gone about as fer as they can go They went and proved a structure theorem for rings with dcc, About as fer as a theorem ought to go (Yes, sir !)
CHAPTER 5
The Final Classical Formulation
Algebras with Capacity
In 1983 Jacobson reformulated his structure theory in terms of algebras with capacity. This proved serendipitous, for it was precisely the algebras with capacity, not the (slightly more restrictive) algebras with minimum condition, that arose naturally in Zel'manov's study of arbitrary infinite-dimensional algebras.
1. Capacity
The key to this approach lies in division idempotents.
Division Idempotent Definition. An element e of a Jordan algebra J is called an idempotent if e^2 = e (then all its powers are equal to itself, hence the name "same-powered"). In this case the principal inner ideals (e) = (e] = [e] coincide and form a unital subalgebra U_e J. A division idempotent is one such that this subalgebra U_e J is a Jordan division algebra.
Orthogonal Idempotent Definition. Two idempotents e, f in J are orthogonal, written e ⊥ f, if e • f = 0, in which case their sum e + f is again idempotent; an orthogonal family {e_α} is a family of pairwise orthogonal idempotents (e_α ⊥ e_β for all α ≠ β). A finite orthogonal family is supplementary if the idempotents sum to the unit, e_1 + ... + e_n = 1.
Connection Definition. Two orthogonal idempotents e_i, e_j in a Jordan algebra are connected if there is a connecting element u_ij ∈ U_{e_i,e_j}J which is invertible in the subalgebra U_{e_i+e_j}J. If the element u_ij can be chosen so that u_ij^2 = e_i + e_j, then we say u_ij is a strongly connecting element and e_i, e_j are strongly connected.
Connection Examples. (1) (Full, Hermitian) If Δ is an associative division algebra with involution, then in the Jordan matrix algebra J = M_n(Δ)+ or J = H_n(Δ) the diagonal idempotents E_ii are supplementary orthogonal division idempotents (with U_{E_ii}(J) = ΔE_ii or
H(Δ)E_ii respectively) strongly connected by the elements u_ij = E_ij + E_ji. (2) (Twisted Hermitian) In the twisted hermitian matrix algebra J = H_n(Δ, Γ), the diagonal idempotents E_ii are again orthogonal division idempotents (with U_{E_ii}(J) = γ_i H(Δ)E_ii) connected by the elements u_ij = 1[ij] with u_ij^2 = γ_j[ii] + γ_i[jj]. But in general they cannot be strongly connected by any element v_ij = a[ij], since v_ij^2 = (aγ_j ā)[ii] + (āγ_i a)[jj] = (γ_i aγ_j ā)E_ii + (γ_j āγ_i a)E_jj; for example, if Δ = the reals, complexes, or quaternions with standard involution and γ_i = 1, γ_j = −1, then we never have γ_i aγ_j ā = 1 (i.e. never aā = −1). (3) (Upper Triangular) If J = A+ for A = T_n(Φ) the upper-triangular n×n matrices over Φ, then again the E_ii are orthogonal division idempotents, but they are not connected: U_{E_ii,E_jj}(J) = ΦE_ij for i < j, and so consists entirely of nilpotent elements u_ij^2 = 0.
Capacity Definition. A Jordan algebra has capacity n if it has a unit 1 which can be written as a finite sum of n orthogonal division idempotents: 1 = e_1 + ... + e_n; it has connected capacity if each pair e_i, e_j for i ≠ j is connected. It has finite capacity (or simply capacity) if it has capacity n for some n. It is not immediately clear that capacity is an invariant, that is, that all decompositions of 1 into orthogonal sums of division idempotents have the same length. This is true, but was proven much later by Holger Petersson.
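These definitions can be exercised in M_3(Φ)+, which has capacity 3 via 1 = E11 + E22 + E33; a sketch with our own helper names:

```python
def mmul(x, y):
    n = len(x)
    return [[sum(x[i][k] * y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def E(i, j, n=3):
    m = [[0] * n for _ in range(n)]
    m[i][j] = 1
    return m

def madd(*ms):
    n = len(ms[0])
    return [[sum(m[i][j] for m in ms) for j in range(n)] for i in range(n)]

e1, e2, e3 = E(0, 0), E(1, 1), E(2, 2)
one = madd(e1, e2, e3)

# 1 = e1 + e2 + e3 is a supplementary orthogonal family (e1e2 = e2e1 = 0
# associatively, so e1 • e2 = 0), giving M_3(Φ)+ capacity 3.
assert mmul(e1, e1) == e1 and mmul(e1, e2) == [[0] * 3 for _ in range(3)]

# e1 is a division idempotent: U_{e1}(m) = e1 m e1 = m11·E11, a copy of the field.
m = [[5, 1, 2], [3, 4, 0], [6, 7, 8]]
assert mmul(mmul(e1, m), e1) == [[5, 0, 0], [0, 0, 0], [0, 0, 0]]

# u12 = E12 + E21 strongly connects e1 and e2: u12² = e1 + e2.
u12 = madd(E(0, 1), E(1, 0))
assert mmul(u12, u12) == madd(e1, e2)
```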
2. Classification
To analyze an arbitrary algebra of finite capacity, Jacobson broke it into its simple building blocks, analyzed the simple blocks of capacity 1, 2, and n ≥ 3 in succession, then described the resulting coordinate algebras, finally reaching the pinnacle of the classical structure theory in his 1983 Arkansas Lecture Notes.
Jacobson Capacity Theorem. (1) A nondegenerate Jordan algebra J with minimum condition on inner ideals has finite capacity. (2) J is nondegenerate with finite capacity iff it is a finite direct sum of algebras with finite connected capacity. (3) J is nondegenerate with finite connected capacity iff it is isomorphic to one of the following (in which case it is simple):
(0) Division Type: a Jordan division algebra;
(I) Quadratic Type: a quadratic factor Jord(Q, c) determined by
a nondegenerate quadratic form Q with basepoint c over a field Φ (not split of dimension 2);
(II) Hermitian Type: an algebra H(A, ∗) for a ∗-simple artinian associative algebra A;
(III) Albert Type: an exceptional Albert algebra Jord(N, c) of dimension 27 over a field Φ, determined by a Jordan cubic form N.
In more detail, the algebras of Hermitian Type (II) are twisted Jordan matrix algebras:
(IIa) Exchange Type: M_n(Δ)+ for an associative division ring Δ (A is ∗-simple but not simple, A = Ex(B) with exchange involution for a simple artinian algebra B = M_n(Δ));
(IIb) Orthogonal Type: H_n(Δ, Γ) for an associative division ring Δ with involution (A = M_n(Δ) simple artinian with involution of twisted orthogonal type);
(IIc) Symplectic Type: H_n(Q, Γ) for a quaternion algebra Q over a field Φ with standard involution (A = M_n(Q) simple artinian with involution of twisted symplectic type).
Note that we have not striven to make the Types non-overlapping: Types I, II, III may include division algebras of Type 0 (our final goal will be to divide up Type 0 and distribute the pieces among Types I-III). Finite capacity essentially means "finite-dimensional over a division ring", which may well be infinite-dimensional over its center, though the quadratic factors Jord(Q, c) can have arbitrary dimension over Φ right from the start. Note that all these simple algebras actually have minimum condition on inner ideals, with the lone exception of Quadratic Type: by the 4 Basic Inner Examples Jord(Q, c) always has minimum condition on principal inner ideals [the principal inner ideals are (b] = J or (b] = Φb], but has minimum condition on all inner ideals iff it has no infinite-dimensional totally isotropic subspaces. Irving Kaplansky has argued that imposing the d.c.c. instead of finite-dimensionality is an unnatural act, that the minimum condition does not arise "naturally" in mathematics (in contrast to the maximum condition, which for example arises naturally in rings of functions attached to finite-dimensional varieties in algebraic geometry, or in Goldie's theory of orders in associative rings). In Jordan theory the d.c.c. is additionally unnatural in that it excludes the quadratic factors with infinite-dimensional totally isotropic subspaces. Zel'manov's structure theory shows that, in contrast, imposing finite capacity is a natural act: finite capacity grows automatically out of the finite degree of any non-vanishing s-identity in a prime Jordan algebra.
CHAPTER 6
The Classical Methods
Cherchez les division idempotents
1. Peirce Decompositions
We will now give an outline of how one establishes this classical structure theory. The method is to find as many idempotents as possible, preferably as tiny as possible, and use their Peirce¹ decompositions to refine the structure. Peirce decompositions were the key tool in the classical approach to the Artin-Wedderburn theory of associative rings. In associative rings a decomposition 1 = e_1 + ... + e_n of the unit into pairwise orthogonal idempotents produces a decomposition A = ⊕_{i,j=1}^n A_ij of the algebra into "chunks" (Peirce spaces) with multiplication rules like those for matrix units, A_ij A_kℓ ⊆ δ_jk A_iℓ. We can recover the structure of the entire algebra by analyzing the structure of the individual pieces and how they are reassembled (humpty-dumpty-wise) to form A. Jordan algebras have similar Peirce decompositions, the only wrinkle being that (as in the archetypal example of hermitian matrix algebras) the "off-diagonal" Peirce spaces J_ij (i ≠ j) behave like the sum A_ij + A_ji of two associative Peirce spaces, so that we have symmetry J_ij = J_ji. When n = 2 the decomposition is determined by a single idempotent e = e_1 with its complement e_0 := 1 − e, and can be described more simply.
Peirce Decomposition Theorem. An idempotent e in a Jordan algebra J determines a Peirce decomposition
J = J_2 ⊕ J_1 ⊕ J_0    (J_i := J_i(e) := {x | V_e x = ix})
of the algebra into the direct sum of Peirce subspaces, where the diagonal Peirce subalgebras J_2 = U_e J, J_0 = U_{1−e} J are principal inner ideals.
¹ Not pronounced "pierce" (as in ear, or the American president), but "purse"
(as in sow's ear, or the American mathematician).
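The theorem can be watched in action for e = E11 + E22 in M_3(Φ)+, where V_e x = {e, x} = ex + xe and the Peirce projections are the corner maps x → exe, exf + fxe, fxf (f = 1 − e); a sketch with our own helper names:

```python
# Peirce decomposition J = J2 ⊕ J1 ⊕ J0 for an idempotent e in M_3(Φ)+,
# with Ji = {x | V_e x = ix} and V_e x = {e, x} = ex + xe.
def mmul(x, y):
    n = len(x)
    return [[sum(x[i][k] * y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def madd(x, y):
    return [[x[i][j] + y[i][j] for j in range(len(x))] for i in range(len(x))]

def scal(c, x):
    return [[c * v for v in row] for row in x]

def brace(x, y):  # {x, y} = xy + yx
    return madd(mmul(x, y), mmul(y, x))

e = [[1, 0, 0], [0, 1, 0], [0, 0, 0]]
f = [[0, 0, 0], [0, 0, 0], [0, 0, 1]]   # f = 1 - e

x = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
x2 = mmul(mmul(e, x), e)                # U_e x: the J2 = exe corner
x0 = mmul(mmul(f, x), f)                # U_{1-e} x: the J0 corner
x1 = madd(mmul(mmul(e, x), f), mmul(mmul(f, x), e))   # U_{e,1-e} x

assert madd(madd(x2, x1), x0) == x      # x = x2 + x1 + x0
assert brace(e, x2) == scal(2, x2)      # V_e acts as 2 on J2
assert brace(e, x1) == scal(1, x1)      # ... as 1 on J1
assert brace(e, x0) == scal(0, x0)      # ... as 0 on J0
```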
More generally, a decomposition 1 = e_1 + ... + e_n of the unit of a Jordan algebra J into a sum of n supplementary orthogonal idempotents leads to a Peirce decomposition
J = ⊕_{i≤j} J_ij
of the algebra into the direct sum of Peirce subspaces, where the diagonal Peirce subalgebras are the inner ideals J_ii := U_{e_i}(J) = {x ∈ J | e_i • x = x}, and the off-diagonal Peirce spaces are J_ij = J_ji := U_{e_i,e_j}(J) = {x ∈ J | e_i • x = e_j • x = ½x} (i ≠ j). Because of the symmetry J_ij = J_ji the multiplication rules for these subspaces are a bit less uniform to describe, but basically the product of two Peirce spaces vanishes unless two of the indices can be linked, in which case it falls in the Peirce space of the remaining indices: {J_ii, J_ik} ⊆ J_ik; {J_ij, J_jk} ⊆ J_ik (i, j, k distinct); {J_ij, J_ij} ⊆ J_ii + J_jj (i ≠ j); {J_ij, J_kℓ} = 0 ({i, j} ∩ {k, ℓ} = ∅). For example, in a Jordan matrix algebra J = H_n(D) the matrix units E_11, ..., E_nn form a supplementary family of orthogonal idempotents, with Peirce decomposition J = ⊕_{i≤j} J_ij for J_ii = H(D)[ii], J_ij = D[ij] (i ≠ j), using the box notation δ[ii] = δE_ii for δ ∈ H(D), a[ij] = aE_ij + āE_ji = ā[ji] for a ∈ D.
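The linking rules can be spot-checked with matrix units in M_3(Φ)+, where (relative to the diagonal E_ii) the off-diagonal Peirce space J_ij is spanned by E_ij and E_ji; a sketch (helper names ours):

```python
# Peirce multiplication rules relative to 1 = E11 + E22 + E33 in M_3(Φ)+:
# {J12, J23} ⊆ J13, {J12, J12} ⊆ J11 + J22, {J12, J33} = 0.
def mmul(x, y):
    n = len(x)
    return [[sum(x[i][k] * y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def madd(x, y):
    return [[x[i][j] + y[i][j] for j in range(len(x))] for i in range(len(x))]

def brace(x, y):  # {x, y} = xy + yx
    return madd(mmul(x, y), mmul(y, x))

def E(i, j):
    m = [[0] * 3 for _ in range(3)]
    m[i][j] = 1
    return m

assert brace(E(0, 1), E(1, 2)) == E(0, 2)                 # {J12, J23} lands in J13
assert brace(E(0, 1), E(1, 0)) == madd(E(0, 0), E(1, 1))  # {J12, J12} in J11 + J22
assert brace(E(0, 1), E(2, 2)) == [[0] * 3 for _ in range(3)]  # disjoint indices: 0
```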
2. Coordinatization
The classical Wedderburn Coordinatization Theorem says that if an associative algebra A has a supplementary family of n×n associative matrix units e_ij (Σ_i e_ii = 1, e_ij e_kℓ = δ_jk e_iℓ), then it is itself a matrix algebra, A = M_n(D), coordinatized by the Peirce space D = A_11.
2.1. The coordinatization. Jacobson found an important analogue for Jordan algebras: a Jordan algebra with enough Jordan matrix units is a Jordan matrix algebra H_n(D) coordinatized by D = J_12. Here we cannot recover the coordinate algebra from the diagonal Peirce spaces J_ii = H(D)[ii], coordinatized only by hermitian elements; we must use instead the off-diagonal Peirce spaces J_ij = D[ij] (in the Albert algebra J_11 = ℝ[11] is 1-dimensional, while J_12 = K[12] is 8-dimensional). We also need n ≥ 3 in order to recover the product
in the coordinate algebra D via {a[12], b[23]} = ab[13] (note how the brace products {x, y} := xy + yx interact more naturally with the coordinates than the bullet products). Often in mathematics, in situations with low degrees of freedom we may have many "sporadic" objects, but once we get enough degrees of freedom to maneuver we reach a very stable situation. A good example would be projective geometry. Projective n-spaces for n = 1, 2 (lines and planes) are bewilderingly varied; one important projective plane, the Moufang plane discovered by Ruth Moufang, has points (b, c) coordinatized by the octonions. But as soon as you get past 2, projective n-spaces for n ≥ 3 automatically satisfy Desargues' Axiom, and any n-space for n ≥ 2 which satisfies Desargues' Axiom is coordinatized by an associative division ring. The situation is the same in Jordan algebras: degrees 1 and 2 are complicated (see Osborn's Degree 2 Theorem below), but once we reach degree n = 3 things are coordinatized by alternative algebras, and once n ≥ 4 by associative algebras (recall the Alternative and Associative Coordinates Theorems of 2.6). When the idempotents are strongly connected we get a Jordan matrix algebra, while if they are merely connected we obtain an isotope (a twisted matrix algebra H_n(D, Γ)). Note that no "niceness" conditions such as nondegeneracy are needed here: the result is a completely general structural result.
Jacobson Coordinatization Theorem. (1) If a Jordan algebra J has a supplementary family of n ≥ 3 Jordan matrix units (1 = e_1 + ... + e_n for orthogonal idempotents e_i strongly connected to e_1 via u_1i), then J is isomorphic to a Jordan matrix algebra H_n(D) under an isomorphism
J → H_n(D) via e_i → E_ii, u_1j → 1[1j] = E_1j + E_j1. The coordinate ∗-algebra D with unit and involution is given by
D := J_12 = U_{e_1,e_2}J; 1 := u_12; x̄ := U_{u_12}(x); x·y := {{x, {u_12, u_13}}, {u_13, y}} (x, y ∈ J_12).
Here D must be associative if n ≥ 4, and if n = 3 then D must be alternative with hermitian elements in the nucleus. (2) A Jordan algebra whose unit is a sum of n ≥ 3 orthogonal connected idempotents is isomorphic to a twisted Jordan matrix algebra: if the e_i are merely connected to e_1 via the u_1i (u_1i^2 = u_11^(i) + u_ii), then there is an isotope J^(u) relative to a diagonal element u = e_1 + u_22 + ... + u_nn which is strongly connected, J^(u) = H_n(D), and J = (J^(u))^(u^-2) = H_n(D)^(Γ) = H_n(D, Γ).
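The recovery formula for the coordinate product can be tested in H_3(ℝ) (so D = ℝ with trivial involution and a[ij] = aE_ij + aE_ji); a sketch with our own helper names:

```python
def mmul(x, y):
    n = len(x)
    return [[sum(x[i][k] * y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def madd(x, y):
    return [[x[i][j] + y[i][j] for j in range(len(x))] for i in range(len(x))]

def brace(x, y):  # {x, y} = xy + yx
    return madd(mmul(x, y), mmul(y, x))

def box(a, i, j):  # a[ij] = a E_ij + a E_ji in H_3(R)
    m = [[0] * 3 for _ in range(3)]
    m[i][j] += a
    m[j][i] += a
    return m

u12, u13 = box(1, 0, 1), box(1, 0, 2)

def coord_prod(x, y):  # x·y := {{x, {u12, u13}}, {u13, y}} for x, y in J12
    return brace(brace(x, brace(u12, u13)), brace(u13, y))

assert brace(u12, u13) == box(1, 1, 2)   # {1[12], 1[13]} = 1[23]
# the formula recovers the coordinate product: 2[12] · 3[12] = 6[12]
assert coord_prod(box(2, 0, 1), box(3, 0, 1)) == box(6, 0, 1)
```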
The key in the strongly connected case (1) is to construct the "matrix symmetries", automorphisms φ_π permuting the idempotents: φ_π(e_i) = e_{π(i)}. These are generated by the "transpositions" φ_ij = U_{c_ij} given by c_ij = 1 − e_i − e_j + u_ij. [The u_1j ∈ J_1j are given, and from these we construct the other u_ij = {u_1i, u_1j} ∈ J_ij with u_ij^2 = e_i + e_j, {u_ij, u_jk} = u_ik for distinct indices i, j, k.] Directly from the Fundamental Formula we see that any element c with c^2 = 1 determines an automorphism U_c of J of period 2, and here u_ij^2 = e_i + e_j implies c_ij^2 = 1.
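The transposition c_12 = 1 − e_1 − e_2 + u_12 can be watched in action in M_3(Φ)+; a sketch (names ours):

```python
def mmul(x, y):
    n = len(x)
    return [[sum(x[i][k] * y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def madd(x, y):
    return [[x[i][j] + y[i][j] for j in range(len(x))] for i in range(len(x))]

def brace(x, y):  # {x, y} = xy + yx
    return madd(mmul(x, y), mmul(y, x))

# c = 1 - e1 - e2 + u12 = E33 + E12 + E21, a symmetric matrix with c^2 = 1
c = [[0, 1, 0], [1, 0, 0], [0, 0, 1]]
one = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
e1 = [[1, 0, 0], [0, 0, 0], [0, 0, 0]]
e2 = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]

def Uc(x):  # U_c x = cxc
    return mmul(mmul(c, x), c)

x = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
assert mmul(c, c) == one                         # c^2 = 1
assert Uc(e1) == e2 and Uc(e2) == e1             # the symmetry swaps e1 and e2
assert Uc(Uc(x)) == x                            # period 2
assert Uc(brace(e1, x)) == brace(Uc(e1), Uc(x))  # U_c respects the brace product
```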
2.2. The coordinates. Now we must find what the possible coordinate algebras are for an algebra with capacity. Here U_{e_1}J = H(D)E_11 must be a division algebra, and D must be nondegenerate if J is. These algebras are completely described by the H-K-O Theorem due to I.N. Herstein, Erwin Kleinfeld, and J. Marshall Osborn. Note that there are no explicit simplicity or finiteness conditions imposed, only nondegeneracy; nevertheless the only possibilities are associative division rings and finite-dimensional composition algebras.

Herstein-Kleinfeld-Osborn Theorem. If D is a nondegenerate alternative ∗-algebra whose hermitian elements are invertible and nuclear, then D is isomorphic to one of:
(I) the exchange algebra Ex(Δ) of a noncommutative associative division algebra Δ;
(II) an associative division ∗-algebra Δ;
(III) a composition ∗-algebra of dimension 1, 2, 4, or 8 over a field Ω (with central standard involution): the ground field, a quadratic extension, a quaternion algebra, or an octonion algebra.
In particular, D is automatically ∗-simple, and is associative unless it is an octonion algebra. We can list the possibilities in another way: D is either
(I′) the direct sum Δ ⊕ Δ^op of an associative division algebra Δ and its opposite, under the exchange involution;
(II′) an associative division algebra Δ with involution;
(III′) a split quaternion algebra of dimension 4 over its center Ω with standard involution; equivalently, 2×2 matrices M_2(Ω) under the symplectic involution x^sp := s x^tr s^{-1} for symplectic s = ( 0 1 ; −1 0 );
(IV′) an octonion algebra O of dimension 8 over its center Ω with standard involution.
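Case (III′) can be made concrete: for M_2(Ω) under the symplectic involution the hermitian elements really are just the scalars, hence invertible (and nuclear) as the theorem requires. A sketch over the integers (helper names ours):

```python
def mmul(x, y):
    n = len(x)
    return [[sum(x[i][k] * y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def tr(x):  # transpose
    return [[x[j][i] for j in range(len(x))] for i in range(len(x))]

s = [[0, 1], [-1, 0]]
s_inv = [[0, -1], [1, 0]]

def sp(x):  # symplectic involution x^sp = s x^tr s^{-1}
    return mmul(mmul(s, tr(x)), s_inv)

# For x = [[a, b], [c, d]] this works out to x^sp = [[d, -b], [-c, a]], so the
# hermitian condition x = x^sp forces a = d, b = -b, c = -c: x is a scalar.
assert sp([[1, 0], [0, 0]]) == [[0, 0], [0, 1]]    # E11^sp = E22
assert sp([[0, 1], [0, 0]]) == [[0, -1], [0, 0]]   # E12^sp = -E12
assert sp([[7, 0], [0, 7]]) == [[7, 0], [0, 7]]    # scalars are fixed
```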
3. Minimum Condition
Our next job is to show that the structure theory for artinian algebras can be subsumed under that for algebras with capacity: a nondegenerate Jordan algebra with minimum condition on inner ideals automatically has a finite capacity.
3.1. Capacity flows from minimal inner ideals. To get a capacity you first need to catch a division idempotent. Luckily, nondegeneracy and the minimum condition on inner ideals guarantee lots of division idempotents, because minimal inner ideals come from trivial elements or division idempotents.
Minimal Inner Ideal Theorem. The minimal inner ideals B in a Jordan algebra J over Φ are precisely the following:
Trivial Type: B = Φz for an irreducible Φ-module Φz spanned by a trivial element z (U_z Ĵ = 0);
Idempotent Type: B = (e] = U_e J for a division idempotent e;
Nilpotent Type: B = (b] = U_b J for b^2 = 0, paired with a minimal inner ideal E = U_e J of Idempotent Type (U_b : E → B, U_e : B → E are inverse bijections). In this case B^2 = U_B B = 0, but there is also an isotope J^(u) in which B itself becomes of Idempotent Type, b^(2,u) = b.
Basic Minimal Examples. (1) (Full, Hermitian Idempotent) In the Jordan matrix algebra M_n(Δ)+ or H_n(Δ) for any associative division ring Δ with involution, the diagonal matrix unit e = E_ii is a division idempotent and the principal inner ideal (e] = ΔE_ii or H(Δ)E_ii is a minimal inner ideal of Idempotent Type. (2) (Full, Hermitian Nilpotent) In M_n(Δ)+ the off-diagonal matrix unit b = E_ij is nilpotent and its principal inner ideal (b] = ΔE_ij is a minimal inner ideal of Nilpotent Type, paired with (e] for the division idempotent e = ½(E_ii + E_ij + E_ji + E_jj). In general a hermitian algebra H_n(Δ) need not contain inner ideals of Nilpotent Type [for example, H_n(ℝ) is formally-real and has no nilpotent elements at all], but if Δ contains an element λ of norm λλ̄ = −1, then in H_n(Δ) the principal inner ideal (b] for b = E_ii + λE_ij + λ̄E_ji − E_jj is minimal of Nilpotent Type (paired with (e] for the division idempotent e = E_ii). (3) (Triangular Trivial) In A+ for A = T_n(Ω) the upper-triangular n×n matrices over Ω, the off-diagonal matrix units E_ij (i < j) are trivial elements, and the inner ideal ΦE_ij is a minimal inner ideal of Trivial Type when Φ is a subfield of Ω.
(4) (Division) If J is a division algebra, then J itself is a minimal inner ideal of Idempotent Type (e = 1).
The Classical Methods
(5) (Quadratic Factor) In the quadratic factor Jord(Q, c) for a nondegenerate isotropic quadratic form Q over a field Φ, the minimal inner ideals are the (b] = Φb determined by nonzero isotropic vectors b (Q(b) = 0); if T(b) ≠ 0 then b is a scalar multiple of a division idempotent, and (b] = Φe = (e] is of Idempotent Type, while if T(b) = 0 then (b] is of Nilpotent Type, connected to some (e] = Φe. [Imbed b in a hyperbolic pair Q(d) = Q(b) = 0, Q(d, b) = 1; if T(d) = β ≠ 0 then take e = β⁻¹d, otherwise if T(d) = 0 take e = d − ¼b + ½1.]
(6) (Cubic Factor) Similarly, in the cubic factor Jord(N, c) for an isotropic Jordan cubic form, the minimal inner ideals are the (b] = Φb for nonzero sharpless vectors b (b# = 0); again, if T(b) ≠ 0 then b is a scalar multiple of a division idempotent, and (b] is of Idempotent Type, while if T(b) = 0 then (b] is of Nilpotent Type (though it is more complicated to describe an inner ideal of Idempotent Type connected with it).
Once we have division idempotents, we can assemble them carefully into a capacity.

Capacity Existence Theorem. A nondegenerate Jordan algebra with minimum condition on inner ideals has a finite capacity.
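Basic Minimal Example (2) can be checked by direct matrix computation. The following sketch (plain Python over the rationals; all helper names are ours, not the book's) verifies in M2 that b = E12 is nilpotent, that the pairing element e = ½(E11 + E12 + E21 + E22) is an idempotent, and that the quadratic maps Ub, Ue (where Ua x = axa in a special algebra) carry e and b back into the lines Φb and Φe, as the Nilpotent-Type pairing requires.

```python
from fractions import Fraction as F

def mul(a, b):  # 2x2 matrix product
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def add(*ms):   # matrix sum
    return tuple(tuple(sum(m[i][j] for m in ms) for j in range(2))
                 for i in range(2))

def smul(c, m): # scalar multiple
    return tuple(tuple(c * m[i][j] for j in range(2)) for i in range(2))

def U(a, x):    # quadratic map in a special Jordan algebra: U_a x = a x a
    return mul(mul(a, x), a)

E11 = ((F(1), F(0)), (F(0), F(0)))
E12 = ((F(0), F(1)), (F(0), F(0)))
E21 = ((F(0), F(0)), (F(1), F(0)))
E22 = ((F(0), F(0)), (F(0), F(1)))
ZERO = ((F(0), F(0)), (F(0), F(0)))

b = E12                                     # off-diagonal matrix unit
e = smul(F(1, 2), add(E11, E12, E21, E22))  # the pairing division idempotent

assert mul(b, b) == ZERO            # b^2 = 0: b is nilpotent
assert mul(e, e) == e               # e is an idempotent
assert U(b, e) == smul(F(1, 2), b)  # U_b e lands back in the line through b
assert U(e, b) == smul(F(1, 2), e)  # U_e b lands back in the line through e
```

So Ub and Ue do exchange the two lines Φe and Φb, which is the inverse-bijection pairing of the Nilpotent Type in miniature.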
The idea is to find a division idempotent e1 in J, then consider the orthogonal Peirce subalgebra J0(e1) = U_{1−e1}J. This inherits nondegeneracy and the minimum condition from J (inner ideals or trivial elements in J0(e) are actually inner or trivial in all of J), so we can repeat to find a division idempotent e2 in U_{1−e1}J, then e3 in U_{1−(e1+e2)}J, etc. This chain of "pseudo-principal" inner ideals (remember we don't yet know there is a real element 1 in J) J > U_{1−e1}J > U_{1−(e1+e2)}J > … eventually terminates in U_{1−(e1+…+en)}J = 0 by the descending chain condition, from which one shows by nondegeneracy that e1 + … + en is a unit for J and J has capacity n.
3.2. Capacity classification. Once we have obtained capacity, we can forget about the minimum condition. First we break the algebra up into connected components.

Connected Capacity Theorem. An algebra with capacity is semiprimitive iff it is nondegenerate: Rad(J) = M(J). A nondegenerate Jordan algebra with capacity splits into the direct sum J = J1 ⊕ … ⊕ Jn of a finite number of nondegenerate ideals Jk having connected capacity.
The idea is that if ei, ej are not connected then by nondegeneracy we can show they are "totally disconnected" in the sense that the connecting Peirce space Jij = 0 vanishes entirely; connectivity is an equivalence relation among the ei's, so if we break them into connectivity classes and let the fk be the class sums, then again U_{fj,fk}J = 0 for j ≠ k, so that J = ⊕k U_{fk}J = ⊕k Jk is a direct sum of subalgebras Jk = U_{fk}J (which are then automatically ideals) having unit fk with connected capacity. An easy argument using Peirce decompositions shows:

Simple Capacity Theorem. Any nondegenerate algebra with connected capacity is simple.
Now we start to analyze the simple pieces according to their capacity. Straight from the definitions we have

Capacity 1 Theorem. A Jordan algebra has capacity 1 iff it is a division algebra.
At this stage the classical approach can't say anything more about capacity 1: the whole method is to use idempotents to break the algebra down, and Jordan division algebras have no proper idempotents and cannot be broken down further. Capacity 2 turns out, surprisingly, to be technically the most difficult part of the structure theory.

Osborn's Capacity 2 Theorem. A Jordan algebra is nondegenerate with connected capacity 2 iff it is isomorphic to one of
(I) H2(Ex(Δ)) ≅ M2(Δ)⁺ for a noncommutative associative division algebra Δ;
(II) H2(Δ, −) for an associative division algebra Δ with non-central involution;
(III) a quadratic factor Jord(Q, c) for a nondegenerate isotropic quadratic form Q with basepoint over a field Φ.
[Note the quadratic form Q has to be isotropic in (III), for if it were anisotropic then by Basic Division Examples the algebra Jord(Q; c) would be a division algebra of capacity 1.] Putting the coordinate algebras D of the H-K-O Theorem into the Coordinatization Theorem gives the algebras of Hermitian Type.
Capacity ≥ 3 Theorem. A Jordan algebra is nondegenerate with connected capacity n ≥ 3 iff it is isomorphic to one of the following:
(I) Hn(Ex(Δ)) ≅ Mn(Δ)⁺ for an associative division algebra Δ;
(II) Hn(Δ, Γ) for an associative division Φ-algebra Δ;
(III) Hn(Q, Γ) for a quaternion Φ-algebra Q over a field;
(IV) H3(O, Γ) for an octonion Φ-algebra O over a field.

Crudely put, the reason the nice Jordan algebras of degree n ≥ 3 are what they are is that Jordan algebras are naturally coordinatized by alternative algebras, and the only nice alternative algebras are associative or octonion algebras. With the attainment of the Classical Structure Theory, Jordan algebraists sang a (slightly) different tune:

Everything's up to date in Jordan structure theory
They've gone about as fer as they can go
They went and proved a structure theorem for rings with capacity,
About as fer as a theorem ought to go (Yes, sir!)

At the very time this tune was reverberating in nonassociative circles throughout the West, a whole new song without idempotents was being composed in far-off Novosibirsk.
CHAPTER 7
The Russian Revolution: 1977-1983

1. The Lull Before the Storm
At the end of the year 1977 the classical theory was essentially complete, but held little promise for a general structure theory. One indication that infinite-dimensional simple exceptional algebras are inherently "degree 3" was the result that a simple algebra with 4 orthogonal idempotents is necessarily special. A more startling indication was the Gelfand-Naimark Theorem for Jordan Algebras obtained by Erik M. Alfsen, Frederic W. Schultz, and Erling Størmer: a Jordan C*-algebra (a Banach space with Jordan product satisfying ‖x∘y‖ ≤ ‖x‖‖y‖, ‖x²‖ = ‖x‖², and a formal-reality condition ‖x²‖ ≤ ‖x² + y²‖) is built out of special subalgebras of associative C*-algebras and Albert algebras. Here the test of whether an algebra was special or Albert was whether it satisfied Glennie's Identity G8 or not. Few nonassociative algebraists appreciated the clue this gave: the argument was extremely long and ingenious, and depended on subtle results from functional analysis. The idea was to imbed the C*-algebra in its Arens double dual, which was a Jordan W*-algebra with lots of idempotents; speciality comes easily in the presence of idempotents. Tantalizing as these results were, they and the classical methods relied so unavoidably on finiteness or idempotents that there seemed no point of attack on the structure of infinite-dimensional algebras: they offered no hope for a Life After Idempotents. In particular, the following Frequently Asked Questions on the structure of general Jordan algebras seemed completely intractable:
(FAQ1) Is the nondegenerate radical nilpotent in the presence of the minimum condition on inner ideals (so M(J) = Prime(J))? Do the trivial elements generate a locally nilpotent ideal (equivalently, does every finite set of trivial elements in a Jordan algebra generate a nilpotent subalgebra)?
Is M(J) ⊆ Loc(J) for the locally nilpotent or Levitzki radical (the smallest ideal L such that J/L has no locally nilpotent ideals, an algebra being locally nilpotent if every finitely-generated subalgebra is nilpotent)?
(FAQ2) Do there exist simple exceptional algebras which are not Albert algebras of disappointing dimension 27 over their center?
(FAQ3) Do there exist special algebras which are simple or division algebras but are not of the classical types Jord(Q, c) or H(A, ∗)?
(FAQ4) Can one develop a theory of Jordan PI-algebras (those strictly satisfying a polynomial identity, a free Jordan polynomial which is monic in the sense that its image in the free special Jordan algebra has a monic term of highest degree)? Is the universal special envelope of a finitely generated Jordan PI-algebra an associative PI-algebra? Are the Jordan algebras Jord(Q, c) for infinite-dimensional Q the only simple PI-algebras whose envelopes are not PI and not finite-dimensional over their centers?
(FAQ5) Is the free Jordan algebra (cf. Appendix B) on 3 or more generators a domain which can be imbedded in a division algebra (as is the case for free associative or Lie algebras), necessarily exceptional and infinite-dimensional, or does it have zero divisors or trivial elements (as is the case for free alternative algebras)?
(FAQ6) What are the s-identities which separate special algebras and their homomorphic images from the truly exceptional algebras? Are there infinitely many essentially different s-identities, or are they all consequences of G8 or G9?
Yet within the next 6 years all of these FAQ's became settled FAQTs.
2. The First Tremors
The first warnings of the imminent eruption reached the West in 1978. Rumors from visitors to Novosibirsk(1), and a brief mention at an Oberwolfach conference attended only by associative ring theorists, claimed that Arkady M. Slin'ko and Efim I. Zel'manov had in 1977 settled the first part of FAQ1: the radical is nilpotent when J has minimum condition. Slin'ko had shown this for special Jordan algebras, and jointly with Zel'manov he extended this to arbitrary algebras. A crucial role was played by the concept of the annihilator of a set X

(1) It is hard for students to believe how difficult mathematical communication was before the opening up of the Soviet Union. It was difficult for many Soviet mathematicians, or even their preprints, to get out to the West. The stories of Zel'manov and his wonderful theorems seemed to belong to the Invisible City of Kitezh, though soon handwritten letters started appearing with tantalizing hints of the methods used. It was not until 1982 that Western Jordan algebraists caught a glimpse of Zel'manov in person, at a conference on (not of) radicals in Eger, Hungary.
in a linear Jordan algebra J, AnnJ(X) = {z ∈ J | {z, X, Ĵ} = 0}. This annihilator is always inner for any set X, and is an ideal if X is an ideal. In a special algebra J ⊆ A⁺ where J generates A as associative algebra, an element z belongs to AnnJ(X) iff for all x ∈ X the element zx = xz lies in the center of A with square zero, so for semiprime A this reduces to the usual two-sided associative annihilator {z | zX = Xz = 0}. Annihilation is an order-reversing operation (the bigger a set, the smaller its annihilator), so a decreasing chain of inner ideals has an increasing chain of annihilator inner ideals, and vice versa. Zel'manov used this to simultaneously handle both the minimum and the maximum condition. The nil radical Nil(J) is the largest nil ideal, equivalently the smallest ideal whose quotient is free of nil ideals, where an ideal is nil if all its elements are nilpotent.

Zel'manov's Nilpotence Theorem. If J has minimum or maximum condition on inner ideals inside a nil ideal N, then N is nilpotent. In particular, if J has minimum or maximum condition inside the nil radical Nil(J), then Nil(J) = Loc(J) = M(J) = Prime(J).

He then went on to settle the rest of (FAQ1) for completely general algebras.

Zel'manov's Local Nilpotence Theorem. In any Jordan algebra, the trivial elements generate a locally nilpotent ideal, M(J) ⊆ Loc(J); equivalently, any finite set of trivial elements generates a nilpotent subalgebra.

The methods used were extremely involved and "Lie-theoretic", based on the notion of "thin sandwiches" (the Lie analogue of trivial elements), developed originally by Kostrikin to attack the Burnside Problem and extended by Zel'manov in his Fields-medal conquest of that problem. At the same time, Zel'manov was able to characterize the radical for PI-algebras.

PI Radical Theorem. If a Jordan algebra over a field satisfies a polynomial identity, then M(J) = Loc(J) = Nil(J). In particular, if J is nil of bounded index then it is locally nilpotent: J = Loc(J).
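In a special algebra the triple product is {x, y, w} = xyw + wyx, so the annihilator can be computed by brute force. The sketch below (plain Python; helper names are ours) finds the matrix units lying in AnnJ(X) for J = M2(ℚ)⁺ and X = {E11}, and confirms the remark above: for the semiprime algebra M2 the Jordan annihilator agrees with the two-sided associative annihilator {z | zX = Xz = 0}. (Checking the middle slot over a basis of M2 suffices for this illustration.)

```python
from fractions import Fraction as F
from itertools import product

def mul(a, b):  # 2x2 matrix product
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def add(a, b):
    return tuple(tuple(a[i][j] + b[i][j] for j in range(2)) for i in range(2))

def triple(x, y, w):  # Jordan triple product in a special algebra
    return add(mul(mul(x, y), w), mul(mul(w, y), x))

units = {}
for i, j in product(range(2), repeat=2):
    units[(i, j)] = tuple(tuple(F(1) if (r, c) == (i, j) else F(0)
                                for c in range(2)) for r in range(2))
basis = list(units.values())
ZERO = tuple(tuple(F(0) for _ in range(2)) for _ in range(2))

x = units[(0, 0)]  # X = {E11}

# z lies in Ann_J(X) iff {z, x, w} = 0 for all w
jordan_ann = [z for z in basis
              if all(triple(z, x, w) == ZERO for w in basis)]
# the two-sided associative annihilator: zx = xz = 0
assoc_ann = [z for z in basis
             if mul(z, x) == ZERO and mul(x, z) == ZERO]

assert jordan_ann == assoc_ann == [units[(1, 1)]]  # both pick out E22
```

So in this semiprime example the only matrix unit annihilating E11 in the Jordan sense is E22, exactly the associative answer.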
3. The Main Quake
In 1979 Zel'manov flabbergasted the Jordan community by proving that there were no new exceptional algebras in infinite dimensions, settling once and for all FAQ2, which had motivated the original investigation of Jordan algebras by physicists in the 1930's, and showing that there is no way to avoid an invisible associative structure behind quantum mechanics.

Zel'manov's Exceptional Theorem. There are no simple exceptional Jordan algebras but the Albert algebras: any simple exceptional Jordan algebra is an Albert algebra of dimension 27 over its center. Indeed, any prime exceptional Jordan algebra is a form of an Albert algebra: its central closure is a simple Albert algebra.
Equally flabbergasting was his complete classification of Jordan division algebras: there was nothing new under the sun in this region either, answering the division algebra part of (FAQ3).

Zel'manov's Division Theorem. The Jordan division algebras are precisely those of classical type:
(I) Quadratic Type: Jord(Q, c) for an anisotropic quadratic form Q over a field;
(IIa) Full Associative Type: Δ⁺ for an associative division algebra Δ;
(IIb) Hermitian Type: H(Δ, ∗) for an associative division algebra Δ with involution ∗;
(III) Albert Type: Jord(N, c) for an anisotropic Jordan cubic form N in 27 dimensions.
As a coup de grâce, administered in 1983, he classified all possible simple Jordan algebras in arbitrary dimensions, settling (FAQ3).

Zel'manov's Simple Theorem. The simple Jordan algebras are precisely those of classical type:
(I) Quadratic Type: Jord(Q, c) for a nondegenerate quadratic form Q over a field;
(II) Hermitian Type: H(B, ∗) for a ∗-simple associative algebra B with involution ∗ (hence either A⁺ for a simple A, or H(A, ∗) for a simple A with involution ∗);
(III) Albert Type: Jord(N, c) for a Jordan cubic form N in 27 dimensions.
As if this weren't enough, he actually classified the prime algebras (recall the definition in 4.6.2).
Zel'manov's Prime Theorem. The prime nondegenerate Jordan algebras are precisely:
(I) Quadratic Forms: special algebras with central closure a simple quadratic factor Jord(Q, c) (Q a nondegenerate quadratic form with basepoint over a field Ω);
(II) Hermitian Forms: special algebras J of hermitian elements squeezed between two full hermitian algebras,

H(A, ∗) ◁ J ⊆ H(Q(A), ∗)

for a ∗-prime associative algebra A with involution ∗ and its Martindale ring of symmetric quotients Q(A);
(III) Albert Forms: exceptional algebras with central closure a simple Albert algebra Jord(N, c) (N a Jordan cubic form in 27 dimensions).
This can be considered the ne plus ultra, or Mother of All Classification Theorems. Notice the final tripartite division of the Jordan landscape into Quadratic, Hermitian, and Albert Type. The original Jordan-von Neumann-Wigner classification considered the Hermitian and Albert types together because they were represented by hermitian matrices, but we now know this is a misleading feature (due to the "reduced" nature of the Euclidean algebras). The final 3 types represent genetically different strains of Jordan algebras: they are distinguished among themselves by the sorts of identities they do or do not satisfy: Albert fails to satisfy the s-identities (Glennie's or Thedy's), Hermitian satisfies the s-identities but not Zel'manov's eater identity, and Quadratic satisfies s-identities as well as Zel'manov's identity. The restriction to nondegenerate algebras is important; while all simple algebras are automatically nondegenerate, the same is not true of prime algebras. Sergei Pchelintsev was the first to construct prime special Jordan algebras which have trivial elements (and therefore cannot be quadratic or Hermitian forms); in his honor, such algebras are now called Pchelintsev monsters.
4. Aftershocks
Once one had such an immensely powerful tool as this theorem, it could be used to bludgeon most FAQs into submission. We first start with (FAQ4).

Zel'manov's PI Theorem. Each nonzero ideal of a nondegenerate Jordan PI-algebra has nonzero intersection with the center (so if the center is a field, the algebra is simple). The central closure of a
prime nondegenerate PI-algebra is central-simple. Any primitive PI-algebra is simple. Each simple PI-algebra is either finite-dimensional, or a quadratic factor Jord(Q, c) over its center.

Next, Ivan Shestakov settled a question raised by Jacobson in analogy with the associative PI theory.

Shestakov's PI Theorem. If J is a special Jordan PI-algebra, then its universal special envelope is locally finite (so if J is finitely generated its envelope is an associative PI-algebra).

The example of an infinite-dimensional simple Jord(Q, c), which as a degree 2 algebra satisfies lots of polynomial identities yet has the infinite-dimensional simple Clifford algebra Cliff(Q, c) as its universal special envelope, shows that we cannot expect the envelope to be globally finite. The structure theory has surprising consequences for the free algebra, settling FAQ5:

Free Consequences Theorem. The free Jordan algebra on 3 or more generators has trivial elements.

Indeed, note that the free Jordan algebra FJ[X] on |X| ≥ 3 generators (cf. Appendix B) is i-exceptional: if it were i-special, all its homomorphic images would be too, but instead it has the i-exceptional Albert algebra as a homomorphic image, since the Albert algebra can be generated by three elements 1[12], 1[23], i[21] + j[13] + ℓ[32]. Now FJ[X] is not itself an Albert form: the polynomial p(x, y, z) := S4(V_{x³,y}, V_{x²,y}, V_{x,y}, V_{1,y})(z) for the alternating "standard identity"
S4(x1, x2, x3, x4) := Σσ (−1)^σ x_{σ(1)} x_{σ(2)} x_{σ(3)} x_{σ(4)},
summed over all permutations σ of 4 letters, vanishes identically on any Albert form J ⊆ H3(O(Ω)) [because x³ is an Ω-linear combination of x², x, 1], but p(x, y, z) does not vanish on FJ[x, y, z] since it doesn't vanish on the homomorphic image FSJ[x, y, z] ⊆ FA[x, y, z]⁺ (for FA[x, y, z] the free associative algebra on 3 generators) [because if we order the generators x > y > z then p(x, y, z) has monic lexicographically leading term x³yx²yxyyz]. Thus by Zel'manov's Prime Theorem FJ[X] is not prime nondegenerate: either it is degenerate (has trivial elements), or is not prime (has orthogonal ideals). In fact, Yuri Medvedev explicitly exhibited trivial elements.
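The standard identity S4 itself is easy to experiment with numerically. The sketch below (plain Python; all helper names are ours) checks two of its basic features on matrices: it is alternating (vanishes whenever two arguments coincide), and, by the Amitsur-Levitzki theorem, it vanishes identically on 2×2 matrices while remaining nonzero on 3×3 matrices.

```python
from itertools import permutations

def mul(a, b, n):  # n x n matrix product
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(n))
                       for j in range(n)) for i in range(n))

def sign(p):  # parity of a permutation, by counting inversions
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p))
              if p[i] > p[j])
    return -1 if inv % 2 else 1

def S4(x1, x2, x3, x4, n):  # standard identity of degree 4
    xs = (x1, x2, x3, x4)
    total = tuple(tuple(0 for _ in range(n)) for _ in range(n))
    for p in permutations(range(4)):
        prod = xs[p[0]]
        for k in p[1:]:
            prod = mul(prod, xs[k], n)
        total = tuple(tuple(total[i][j] + sign(p) * prod[i][j]
                            for j in range(n)) for i in range(n))
    return total

def E(i, j, n):  # matrix unit E_{ij} in M_n (0-based indices)
    return tuple(tuple(1 if (r, c) == (i, j) else 0 for c in range(n))
                 for r in range(n))

zero2 = tuple(tuple(0 for _ in range(2)) for _ in range(2))
a, b, c = E(0, 1, 2), E(1, 0, 2), E(0, 0, 2)

# Alternating: repeating an argument kills S4.
assert S4(a, a, b, c, 2) == zero2
# Amitsur-Levitzki: S4 vanishes on all of M2 ...
assert S4(a, b, c, E(1, 1, 2), 2) == zero2
# ... but not on M3: only the identity permutation chains up here.
assert S4(E(0, 0, 3), E(0, 1, 3), E(1, 1, 3), E(1, 2, 3), 3) == E(0, 2, 3)
```

The M3 quadruple (E11, E12, E22, E23) is chosen so that only one ordering of the factors multiplies to something nonzero, making the nonvanishing visible by hand as well as by machine.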
These radical identities are the most degenerate kind of s-identities possible: they are nonzero Jordan expressions which produce only trivial elements in any Jordan algebra, and vanish identically in special algebras. These results also essentially answer FAQ6: just as Alfsen-Schultz-Størmer indicated, the Glennie Identity G8 is the "only" s-identity; or, put another way, all s-identities which survive the Albert algebra are "equivalent".

i-Speciality Theorem. All semiprimitive i-special algebras are special, so for primitive algebras i-speciality is equivalent to speciality. If f is any particular s-identity which does not vanish on the Albert algebra (such as G8, G9, or T11), then a semiprimitive algebra will be special as soon as it satisfies the one particular identity f: all such f do equally well at separating special algebras from exceptional algebras. As a consequence, the ideal of all s-identities in the free algebra on an infinite number of variables is quasi-invertible modulo the T-ideal generated by the single identity f.

The reason for the equivalence is that a semiprimitive algebra is a subdirect product of primitive (hence prime nondegenerate) algebras, which by Zel'manov's Prime Theorem are either special or Albert forms; if the semiprimitive algebra is i-special it must satisfy f, and as soon as it satisfies f all its primitive factors do likewise, so (by hypothesis on f) none of these factors can be an Albert algebra, and the original algebra is actually special as a subdirect product of special algebras.
The reason for the consequence is that if K̂ denotes the quasi-invertible closure of the T-ideal K generated by f in the free Jordan algebra FJ[X] (so K̂/K is the Jacobson radical of FJ[X]/K), then by construction the quotient FJ[X]/K̂ is semiprimitive and satisfies f (here it is crucial that K is the T-ideal, not just the ordinary ideal, generated by f(x1, …, xn): this guarantees not only that f(x1, …, xn) is 0 in the quotient, since it lies in K, but that any substitution f(y1, …, yn) also falls in K and hence vanishes in the quotient). By the first part of the theorem the quotient is special. But then all s-identities vanish on the special algebra FJ[X]/K̂, so all their values on FJ[X] fall into K̂: K̂ contains all s-identities, so the ideal generated by all s-identities is contained in K̂ and hence it too is quasi-invertible mod K.
CHAPTER 8
Zel'manov's Exceptional Methods
Here we will sketch the methods used by Zel'manov in his proof of his Exceptional Theorem. This first third of his trilogy was completed in 1979. Zel'manov in fact classified the primitive i-exceptional algebras, then waved a few magic wands to obtain the classification of prime and simple i-exceptional algebras. His methods involved ordinary associative algebra concepts and results which could be transferred, although seldom in an obvious way, to Jordan algebras. (The second and third parts of his trilogy involved the notion of tetrad eater, whose origins go back to the theory of alternative algebras.) We warn readers to tighten their seatbelts before entering this chapter, because the going gets rougher: the arguments are intrinsically more difficult, and we get closer to these turbulent arguments by giving sketches of how the main results are proved.
1. I-Finiteness
First we characterize abstractly those algebras that are rich, but not too rich, in idempotents. On the way to showing that simple i-exceptional algebras are 27-dimensional, generation of idempotents comes from algebraicness, while finiteness of families of orthogonal idempotents comes from the finite degree of a non-vanishing s-identity.

I-Genic Definition. An algebra J is I-genic (idempotent-generating) if every non-nilpotent element b generates a nonzero idempotent e ∈ (b] = Ub Ĵ.

Every non-nilpotent algebraic element in a [not necessarily unital] power-associative algebra over a field generates an idempotent, providing the most important source of I-genic algebras. Recall that an element b is algebraic over Φ if it satisfies a monic polynomial p(λ) = λⁿ + αn−1λⁿ⁻¹ + … + α1λ with zero constant term, i.e. some power bⁿ can be expressed as a Φ-linear combination of lower powers, so the [non-unital] subalgebra Φ0[b] generated by b is a finite-dimensional commutative associative algebra. Algebraicness is thus a condition of "local" finiteness at each individual element.
Algebraic I-gene Theorem. Every algebraic algebra over a field is I-genic.
Indeed, if b is not nilpotent the subalgebra Φ0[b] is not nil, and hence by associative theory contains an idempotent e ∈ Φ0[b], so e = Ue(e) ∈ Ub(Ĵ) = (b].

I-Finite Proposition. J is said to be I-finite (idempotent-finite) if it has no infinite family e1, e2, … of nonzero orthogonal idempotents. I-finiteness is equivalent to the a.c.c. on idempotents: (1) there is no infinite strictly increasing chain of idempotents f1 < f2 < …, where e > f means that f ∈ J2(e) lives in the Peirce subalgebra governed by e, in which case g := e − f is an idempotent and e = f + g for orthogonal idempotents f, g ∈ J2(e). The a.c.c. always implies (and for unital algebras is equivalent to) the d.c.c. on idempotents: (2) there is no infinite strictly decreasing chain of idempotents g1 > g2 > …. In general, the d.c.c. is equivalent to the condition that J have no infinite family {ei} of nonzero orthogonal idempotents which is bounded above in the sense that there is an idempotent g with all ei < g. Note that if J is unital then all families of idempotents are bounded by 1, so the two chain conditions are equivalent: fi ↑ ⟺ (1 − fi) ↓. Since an element x = α1e1 + … + αnen has α1, …, αn as its eigenvalues, a global bound on the number of eigenvalues an individual element can have forces a global bound on the number of orthogonal idempotents the algebra can have. (3) Eigenvalues Bound Idempotents: if all elements x ∈ J have at most N eigenvalues in an infinite field Φ, then J has at most N nonzero orthogonal idempotents, hence is I-finite.

Proof Sketch: The reason for the a.c.c. is that a strictly ascending chain would give rise to a family of nonzero orthogonal idempotents ei+1 = fi+1 − fi ∈ J0(fi) ∩ J2(fi+1), and conversely, any orthogonal family of nonzero idempotents gives rise to a strictly increasing chain f1 < f2 < … for f1 = e1, fi+1 = fi + ei+1 = e1 + … + ei+1.
For the d.c.c., a decreasing chain g1 > g2 > … would give rise to an orthogonal family ei+1 = gi − gi+1 ∈ J2(gi) ∩ J0(gi+1) bounded by g = g1; conversely, an orthogonal family of ei bounded by g would give rise to a descending chain g > g − e1 > g − (e1 + e2) > … > g − (e1 + … + en) > …. For eigenvalues bounding idempotents, if e1, …, en are nonzero orthogonal idempotents then for distinct αi ∈ Φ (which exist if Φ is infinite) the element x = α1e1 + … + αnen has eigenvalues α1, …, αn, forcing n ≤ N.
As soon as we have the right number of idempotents we have a capacity.

I-Finite Capacity Theorem. A semiprimitive algebra which is I-genic and I-finite necessarily has a capacity.

Proof Sketch: It is not hard to guess how the proof goes. From the d.c.c. on idempotents we get minimal idempotents, which (after a struggle) turn out to be division idempotents. Once we get division idempotents, we build an orthogonal family of them reaching up to 1: from the a.c.c. we get an idempotent maximal among all finite sums of division idempotents, and it turns out (after a longer struggle) this must be 1, so we have reached our capacity. Indeed, this maximal e has Peirce subalgebra J0(e) containing by maximality no further division idempotents, hence by d.c.c. no idempotents at all, so by I-generation it must be entirely nil. We then show that radicality of J0(e) would infect J [J0(e) ⊆ Rad(J)], so semiprimitivity of J forces J0(e) to be zero, and once J0(e) = 0 mere nondegeneracy forces J1(e) = 0 too (after another struggle), and e is the unit for J = J2(e).
2. Absorbers
A key new concept having associative roots is that of outer absorber of an inner ideal, analogous to the right absorber r(L) = {z ∈ L | zA ⊆ L} of a left ideal L in an associative algebra A. In the associative case the absorber is an ideal; in the Jordan case the square of the absorber (the quadratic absorber) is close to an ideal.

Absorbers Theorem. The linear absorber ℓ(B) of an inner ideal B in a Jordan algebra J absorbs linear multiplication by J into B: ℓ(B) := {z ∈ B | VJ z ⊆ B} = {z ∈ B | Vz J ⊆ B}. The quadratic absorber q(B) absorbs quadratic multiplications by J into B, and coincides with the second linear absorber: q(B) := {z ∈ B | V_{J,Ĵ} z + UJ z ⊆ B} = ℓ(ℓ(B)). The linear and quadratic absorbers of an inner ideal B in a Jordan algebra J are again inner ideals in J, and ideals in B.

We denote the set of s-identities in a countable set X = {x1, x2, …} of variables [all the Jordan polynomials that vanish on special algebras] by s-id(X), and by s-id(J) the ideal of all their values attained on an algebra J. It might seem natural to denote these by special(J), but this is not the "special part" of J; on the contrary, it is the "anti-special part", the obstacle whose removal creates i-speciality. To make it clear that we are creating i-speciality, we call this obstacle the i-specializer of J; it is the smallest ideal of J whose quotient is i-special.
Specializer Absorption Theorem. The linear absorber of an inner ideal B absorbs the values of s-identities, and the quadratic absorber absorbs cubes of s-identities: s-id(B) ⊆ ℓ(B), s-id(B)³ ⊆ q(B). When J is nondegenerate, an absorberless B is i-special (strictly satisfies all s-identities): q(B) = 0 ⟹ s-id(B) = 0 (J nondegenerate).
The beautiful idea behind Specializer Absorption is that for an inner ideal B the Jordan algebra B/ℓ(B) is manifestly special, since the map b → Vb = 2Lb is a Jordan homomorphism of B into the special Jordan algebra EndΦ(J/B)⁺ (note J/B is merely a Φ-module, without any particular algebraic structure) with kernel precisely ℓ(B), and therefore induces an imbedding of B/ℓ(B) in EndΦ(J/B)⁺, thanks to the Specialization Formulas V_{b²} = Vb² − 2Ub, V_{Ub c} = Vb Vc Vb − U_{b,c} Vb − Ub Vc together with Ub J + U_{b,c} J ⊆ B for b, c ∈ B by definition of innerness. For example, if B = Ue J = J2(e) is the Peirce inner ideal determined by an idempotent in a Jordan algebra, the linear and quadratic absorbers are ℓ(B) = q(B) = {z ∈ B | {z, J1(e)} = 0}, which is just the kernel of the Peirce specialization x → Vx of B in End(J1(e))⁺, so B/ℓ(B) ≅ V(J2(e))|J1(e) is a Jordan algebra of linear transformations on J1(e). The associative absorber is an ideal, and the Jordan quadratic absorber is only a few steps removed from being an ideal; the key (and difficult) result is

Zel'manov's Absorber Nilness Theorem. The ideal in J generated by the quadratic absorber q(B) of an inner ideal B is nil mod q(B) (hence nil mod B).
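The two Specialization Formulas are operator identities valid in any special algebra, where Vb(x) = bx + xb, Ub(x) = bxb, and U_{b,c}(x) = bxc + cxb; expanding both sides shows all cross terms cancel. The sketch below (plain Python on 2×2 integer matrices; helper names ours) verifies both formulas on sample elements.

```python
N = 2

def mul(a, b):  # N x N matrix product
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(N))
                       for j in range(N)) for i in range(N))

def add(*ms):
    return tuple(tuple(sum(m[i][j] for m in ms) for j in range(N))
                 for i in range(N))

def neg(m):
    return tuple(tuple(-m[i][j] for j in range(N)) for i in range(N))

def V(b, x):      # V_b x = bx + xb
    return add(mul(b, x), mul(x, b))

def U(b, x):      # U_b x = bxb
    return mul(mul(b, x), b)

def U2(b, c, x):  # U_{b,c} x = bxc + cxb
    return add(mul(mul(b, x), c), mul(mul(c, x), b))

b = ((1, 2), (3, 4))
c = ((0, 1), (5, -2))
x = ((7, -1), (2, 3))

b2 = mul(b, b)
# First Specialization Formula: V_{b^2} = V_b^2 - 2 U_b
assert V(b2, x) == add(V(b, V(b, x)), neg(U(b, x)), neg(U(b, x)))
# Second: V_{U_b c} = V_b V_c V_b - U_{b,c} V_b - U_b V_c
lhs = V(U(b, c), x)
rhs = add(V(b, V(c, V(b, x))),
          neg(U2(b, c, V(b, x))),
          neg(U(b, V(c, x))))
assert lhs == rhs
```

Since both sides are polynomial identities in the matrix entries, the same cancellation happens for any b, c, x in any associative algebra, which is exactly what the imbedding of B/ℓ(B) needs.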
3. The Primitive Heart
In his work with the radical Zel'manov had already introduced the notion of modular inner ideal and primitive Jordan algebra. In the associative case a left ideal L is modular if it has a modulus c, an element which acts like a right unit ("modulus" in the older literature) for A modulo L: ac ≡ a modulo L for all a ∈ A, i.e. A(1 − c) ⊆ L. If A is unital then all left ideals are modular with modulus c = 1. The concept of modularity was invented for the Jacobson radical in non-unital algebras: in the unital case Rad(A) is the intersection of all maximal left ideals, in the non-unital case it is the intersection of all maximal modular left ideals. Any translate c + b (b ∈ L) is another
modulus, and as soon as L contains one of its moduli then it must be all of A. It turns out that the obvious Jordan condition U_{1−c}J ⊆ B for an inner ideal B isn't quite enough to get an analogous theory.

Modular Definition. An inner ideal B in a Jordan algebra J is modular with modulus c if
(1) U_{1−c}J ⊆ B; (2) c − c² ∈ B; (3) {1 − c, Ĵ, B} ⊆ B.
Condition (3) can be expanded to {1 − c, J, B} + {c, B} ⊆ B; it guarantees that any translate c + b for b ∈ B of a modulus c is again a modulus; condition (2) then guarantees that any power cⁿ of a modulus remains a modulus, since it is merely a translate (c − cⁿ ∈ B for all n ≥ 1).

Modulus Exclusion. A proper inner ideal cannot contain its modulus: we have the Modulus Exclusion Principle: if B has modulus c ∈ B, then B = J. Even more, by Absorber Nilness we have a Strong Modulus Exclusion Principle: if B has modulus c ∈ I_J(q(B)), then B = J.
Proof Sketch: The reason for Modulus Exclusion is that then 1J = U_{(1−c)+c} = U_{1−c} + U_{1−c,c} + Uc maps J into B by (1), (3), and innerness. Strong Modulus Exclusion holds because c ∈ I_J(q(B)) implies some power cⁿ ∈ B by Absorber Nilness, which remains a modulus for B by the above, so B = J by ordinary Modulus Exclusion.
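The expansion 1J = U_{(1−c)+c} above is an instance of the general identity U_{a+b} = U_a + U_{a,b} + U_b, which in a special algebra is just the distributive expansion of (a+b)x(a+b). A quick check (plain Python on 2×2 integer matrices; helper names ours):

```python
N = 2

def mul(a, b):  # N x N matrix product
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(N))
                       for j in range(N)) for i in range(N))

def add(*ms):
    return tuple(tuple(sum(m[i][j] for m in ms) for j in range(N))
                 for i in range(N))

def U(a, x):      # U_a x = axa
    return mul(mul(a, x), a)

def U2(a, b, x):  # U_{a,b} x = axb + bxa
    return add(mul(mul(a, x), b), mul(mul(b, x), a))

one = ((1, 0), (0, 1))
c = ((2, 1), (0, 3))
one_minus_c = tuple(tuple(one[i][j] - c[i][j] for j in range(N))
                    for i in range(N))

for x in (((1, 0), (0, 0)), ((0, 1), (1, 0)), ((2, -1), (4, 5))):
    # U_{(1-c)+c} x = U_1 x = x ...
    assert U(add(one_minus_c, c), x) == x
    # ... and it splits as U_{1-c} x + U_{1-c,c} x + U_c x
    assert add(U(one_minus_c, x), U2(one_minus_c, c, x), U(c, x)) == x
```

Each of the three summands lands in B (by (1), (3), and innerness respectively), which is why a proper inner ideal can never swallow its own modulus.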
An associative algebra A is primitive if it has a faithful irreducible representation, or in more concrete terms if there exists a left ideal L such that A/L is a faithful irreducible left A-module. Irreducibility means L is maximal modular, while faithfulness means the core of L (the maximal ideal of A contained in L, which is just its right absorber {z ∈ L | zÂ ⊆ L}) vanishes; this core condition that no nonzero ideal I is contained in L means I + L > L, hence in the presence of maximality means I + L = A, and L complements all nonzero ideals. Once a modular L0 has this property, it can always be enlarged to a maximal modular left ideal L which is even more complementary. In the Jordan case A/L is not going to provide a representation anyway, so there is no need to work hard to get the maximal L (the "irreducible" representation); any complementary L0 will do.
Zel'manov's Exceptional Methods
Primitive Definition. A Jordan algebra is primitive if it has a primitizer P, a proper modular inner ideal P ≠ J which complements all nonzero ideals: it has the Complementation Property

(1) I + P = J for all nonzero ideals I of J.

Another way to express this is the Ideal Modulus Property

(2) every nonzero ideal I contains a modulus for P.

Absorberless Primitizer Proposition. Although quadratic absorber and core do not in general coincide for Jordan algebras, besides a zero core the primitizer has zero absorber, and primitizers are always i-special: we have the Absorberless Primitizer Principle

qa(P) = 0, hence s-id(P) = 0.
Proof Sketch: (1) implies (2) since if i + p = c then I contains the translated modulus c′ = c − p, and (2) implies (1) since if I contains a modulus for P then the inner ideal I + P contains its own modulus and must equal J by Modulus Exclusion. For Absorberless Primitizer, note that P proper implies by Strong Modulus Exclusion that I = I_J(qa(P)) contains no modulus for P, so by Ideal Modulus (2) it must be 0, and qa(P) = 0.
Even for associative algebras there is a subtle difference between the concepts of simplicity and primitivity. A primitive algebra is one which has a faithful representation as linear transformations acting irreducibly on a module. The archetypal example would be the algebra End_Δ(V) of all linear transformations on an infinite-dimensional vector space V over a division ring Δ. This algebra is far from simple; an important proper ideal is the socle, consisting of all transformations of finite rank. On the other hand, most of the familiar simple rings are primitive: a simple ring will be primitive as soon as it is semiprimitive (a subdirect product of primitive quotients P = A/I, since in the simple case these nonzero quotients can only be P = A itself), and semiprimitivity is equivalent to vanishing of the Jacobson radical, so a simple algebra is primitive unless it coincides with its Jacobson radical. But Sasiada's complicated example of a simple radical ring shows that not all simple algebras are primitive. The next concept is a straightforward translation of a very simple associative notion: the heart of a Jordan algebra is the minimal nonzero ideal, just as in the associative case.

Heart Principles. The heart of a Jordan algebra is defined to be its smallest nonzero ideal, if such exists: 0 ≠ ♡(J) = ∩{I | 0 ≠ I ⊴ J}.
We have a Unital Heart Principle: if ♡(J) has a unit element then J = ♡(J) is simple and all heart, and a Capacious Heart Principle: if ♡(J) has a capacity then J = ♡(J) is also simple with capacity.

Proof Sketch: For Unital Heart, by a Peirce argument an ideal I which has a unit e is automatically a direct summand J = I ⊞ J₀(e), since the glue J₁(e) disappears: J₁ = {e, J₁} ⊆ I (ideal) ⊆ J₂ (unit for I) forces J₁ = 0, I = J₂, J = J₂ ⊞ J₀. But if J₂ = I = ♡(J) is the heart it must contain the ideal J₀, forcing J₀ = 0 and J = I = ♡(J). Capacious Heart follows since algebras with capacity are by definition unital.
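The Peirce decomposition used in this argument can be seen concretely in a special Jordan algebra. The following Python sketch (our own illustration; the algebra M₂(ℚ)⁺ and all helper names are ad hoc, not notation from the text) decomposes an element against the idempotent e = E₁₁ and checks J = J₂(e) ⊕ J₁(e) ⊕ J₀(e), with the "glue" J₁(e) the ½-eigenspace of multiplication by e.

```python
from fractions import Fraction

# Peirce decomposition J = J2(e) + J1(e) + J0(e) for an idempotent e,
# illustrated in the special Jordan algebra M_2(Q)^+ with x∘y = (xy+yx)/2.
# The Peirce projections are E2 = U_e, E0 = U_{1−e}, E1 = id − E2 − E0.

def mul(a, b):
    return [[sum(a[i][k]*b[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def add(a, b): return [[a[i][j]+b[i][j] for j in range(2)] for i in range(2)]
def sub(a, b): return [[a[i][j]-b[i][j] for j in range(2)] for i in range(2)]

def U(x, y):               # U_x y = xyx in a special Jordan algebra
    return mul(mul(x, y), x)

I = [[Fraction(1), Fraction(0)], [Fraction(0), Fraction(1)]]
e = [[Fraction(1), Fraction(0)], [Fraction(0), Fraction(0)]]   # idempotent

x = [[Fraction(2), Fraction(3)], [Fraction(5), Fraction(7)]]

x2 = U(e, x)               # Peirce 2-component: top-left corner
x0 = U(sub(I, e), x)       # Peirce 0-component: bottom-right corner
x1 = sub(sub(x, x2), x0)   # Peirce 1-component: the off-diagonal glue

assert add(add(x2, x1), x0) == x          # direct sum decomposition
assert x2 == [[Fraction(2), Fraction(0)], [Fraction(0), Fraction(0)]]
assert x0 == [[Fraction(0), Fraction(0)], [Fraction(0), Fraction(7)]]

# J1(e) is the (1/2)-eigenspace of multiplication by e: e∘x1 = x1/2.
circ = lambda a, b: [[(mul(a, b)[i][j] + mul(b, a)[i][j]) / 2 for j in range(2)] for i in range(2)]
assert circ(e, x1) == [[x1[i][j] / 2 for j in range(2)] for i in range(2)]
```

Here the unital-heart argument is visible in miniature: if I = J₂(e) were an ideal with unit e, the glue x₁ would be forced into both J₁ and J₂, hence to zero.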
Simple algebras are all heart, and Unital Heart shows unital heartiness leads to simplicity. Examples of heartless algebras are associative matrix algebras like M_n(Z), M_n(Φ[X]) coordinatized by UFDs which are not fields. Zel'manov opened up primitive i-exceptional algebras and made the amazing anatomical discovery that they all had hearts.

Primitive Exceptional Heart Theorem. A primitive i-exceptional Jordan algebra has heart ♡(J) = s-id(J) consisting of all values on J of all s-identities.

Proof Sketch: S := s-id(J) is an ideal, and since J is i-exceptional, it does not satisfy all s-identities, so S ≠ 0 is a nonzero ideal. We need to show that each nonzero ideal I contains S; but s-id(J) ⊆ I iff s-id(J/I) = 0 in J/I = (I + P)/I ≅ P/(P ∩ I) by the Complementation Property of the primitizer and the Third Homomorphism Theorem; but s-id(P/(P ∩ I)), the image of s-id(P), vanishes by the Absorberless Primitizer Principle s-id(P) = 0.
4. The Big Primitive Classification

The key to creating finiteness is (as suggested by Eigenvalues Bounding Idempotents) to bound spectra. The spectrum of any element z is the set of scalars λ for which λ1 − z is singular in some sense. This depends on the set Λ of allowable scalars and the algebra J in which we are considering z, so we use the notation Spec_{Λ,J}(z), though in everyday speech we omit reference to Λ, J when they are understood by the context.

4.1. Spectra. There are three different spectra, depending on three slightly different notions of singularity, but for hearty elements all three are almost the same; one of these spectra is always naturally bounded
by the degree N of a non-vanishing polynomial, and this is where all the finiteness comes in.

Λ-Spectrum Definition. Let z be an element of a Jordan algebra J over a field Φ. We say a scalar λ ∈ Λ is an eigenvalue for z if λ1 − z is singular in the most flagrant sense that it kills somebody: it has an eigenvector x ∈ J (U_{λ1−z}x = 0 for x ≠ 0), equivalently the operator U_{λ1−z} is not injective on J; Eig_{Λ,J}(z) denotes the set of eigenvalues in Λ of z considered as an element of J:

Eig_{Λ,J}(z) = {λ ∈ Λ | U_{λ1−z} not injective on J}.

The ordinary, everyday Λ-spectrum Spec_{Λ,J}(z) of z is the set of λ ∈ Λ such that the element λ1 − z is singular in the usual sense that it (equivalently its operator U_{λ1−z}) is not invertible in the unital hull Ĵ, equivalently U_{λ1−z} is not surjective on J, thus the inner ideal U_{λ1−z}J can be distinguished as a set from J.

Spec_{Λ,J}(z) = {λ ∈ Λ | λ1 − z not invertible}
  = {λ ∈ Λ | U_{λ1−z} not invertible}
  = {λ ∈ Λ | U_{λ1−z} not surjective, U_{λ1−z}J < J}.

f-Spectrum Definition. If J is i-exceptional and f ∈ s-id(X) is an s-identity which does not vanish strictly on J, the f-spectrum is the set f-Spec_{Λ,J}(z) of scalars λ such that λ1 − z is singular in the sense that it gives rise to an inner ideal where f does vanish strictly, thus U_{λ1−z}J can be distinguished from J using f:

f-Spec_{Λ,J}(z) = {λ ∈ Λ | f(U_{λ1−z}J) ≡ 0} (while f(J) ≢ 0).

Absorber Spectrum Definition. The absorber spectrum of an element z is the set AbsSpec_{Λ,J}(z) of scalars λ such that λ1 − z is singular in the sense that it gives rise to an absorberless inner ideal, thus U_{λ1−z}J can be distinguished from J using the absorber.

AbsSpec_{Λ,J}(z) = {λ ∈ Λ | qa(U_{λ1−z}J) = 0} (while qa(J) = J).
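In the special Jordan algebra M₂(Φ)⁺, where U_x y = xyx, singularity of the operator U_{λ1−z} can be tested by brute force. The following Python sketch (our own illustration with ad hoc helper names, not part of the text) confirms that the spectrum of a matrix z in this Jordan sense is exactly its set of ordinary matrix eigenvalues.

```python
from fractions import Fraction

# For the special Jordan algebra M_2(Q)^+ (U_x y = xyx), the spectrum of z
# — the set of λ with U_{λ1−z} not invertible — is just the set of matrix
# eigenvalues of z, since det of the linear map y ↦ aya is det(a)^4.

def mul(a, b):
    return [[sum(a[i][k]*b[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def U_operator_matrix(a):
    """4x4 matrix of the linear map y -> a y a on 2x2 matrices."""
    cols = []
    for p in range(2):
        for q in range(2):
            basis = [[Fraction(0)] * 2 for _ in range(2)]
            basis[p][q] = Fraction(1)
            img = mul(mul(a, basis), a)
            cols.append([img[0][0], img[0][1], img[1][0], img[1][1]])
    # transpose: columns are the images of the basis matrices
    return [[cols[j][i] for j in range(4)] for i in range(4)]

def det(m):
    """Exact determinant by Gaussian elimination over Fractions."""
    m = [row[:] for row in m]; n = len(m); d = Fraction(1)
    for i in range(n):
        piv = next((r for r in range(i, n) if m[r][i] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != i:
            m[i], m[piv] = m[piv], m[i]; d = -d
        d *= m[i][i]
        for r in range(i + 1, n):
            f = m[r][i] / m[i][i]
            m[r] = [m[r][c] - f * m[i][c] for c in range(n)]
    return d

z = [[Fraction(2), Fraction(1)], [Fraction(0), Fraction(3)]]     # eigenvalues 2, 3
for lam in range(6):
    a = [[lam - z[0][0], -z[0][1]], [-z[1][0], lam - z[1][1]]]   # λ1 − z
    singular = det(U_operator_matrix(a)) == 0
    assert singular == (lam in (2, 3))
```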
These are closely related by the

Spectral Relations Proposition. If J is i-exceptional and f ∈ s-id(X) does not vanish strictly on J, then we have the Spectral
Relations

AbsSpec_{Λ,J}(z) ⊆ f-Spec_{Λ,J}(z) ⊆ Spec_{Λ,J}(z) ⊆ Eig_{Λ,J}(z)

for all f ∈ s-id(X) (for all f ∈ s-id(X)³ if J is degenerate) which don't vanish strictly on J. If z is an element of the heart ♡(J), then its absorber spectrum almost coincides with its spectrum: we have the Hearty Spectral Relations

Spec_{Λ,J}(z) ⊆ AbsSpec_{Λ,J}(z) ∪ {0} ⊆ Spec_{Λ,J}(z) ∪ {0} (for z ∈ ♡(J)).

Proof Sketch: If 0 ≠ λ ∈ Spec(z) for z ∈ ♡(J) then w := λ⁻¹z ∈ ♡ and B := U_{1−w}J = U_{λ1−z}J < J is proper with modulus c := 2w − w² ∈ ♡; but then qa(B) must vanish, otherwise I_J(qa(B)) would contain ♡ and hence c, contrary to Strong Modulus Exclusion.
Zel'manov gave a beautiful combinatorial argument, mixing polynomial identities and inner ideals, to show that a non-vanishing polynomial f puts a bound on at least that part of the spectrum where f vanishes strictly, the f-spectrum. This turns out to be the crucial finiteness condition which dooms exceptional algebras to a 27-dimensional life: the finite degree of the polynomial puts a finite bound on this spectrum.

f-Spectral Bound Proposition. If a polynomial f of degree N does not vanish strictly on a Jordan algebra J, then J can contain at most 2N inner ideals B_k where f does vanish strictly and which are relatively prime in the sense that

J = Σ_{i,j} C_{i,j} for C_{i,j} = ∩_{k≠i,j} B_k.

In particular, if Φ is a field there is always a uniform f-spectral bound 2N on the size of f-spectra for such J: |f-Spec_{Λ,J}(z)| ≤ 2N. If f ∈ s-id(X) (or ∈ s-id(X)³ if J is degenerate) and z is an element of the heart ♡(J), then we have the Hearty Spectral Size Relations

|Spec_{Λ,J}(z)| ≤ |AbsSpec_{Λ,J}(z) ∪ {0}| ≤ |f-Spec_{Λ,J}(z) ∪ {0}| ≤ 2N + 1.

Proof Sketch: Since f does not vanish strictly on J, some linearization f′ of f is non-vanishing. We can assume f is a homogeneous function of N variables of total degree N, so its linearizations f′(x₁, …, x_N) still have total degree N. By relative primeness we have

0 ≠ f′(J, …, J) = f′(Σ C_{ij}, …, Σ C_{ij}) = Σ f″(C_{i₁j₁}, …, C_{i_N j_N})
summed over further linearizations f″ of f′, so again some f″(C_{i₁j₁}, …, C_{i_N j_N}) ≠ 0. Consider such a collection {C_{i₁j₁}, …, C_{i_N j_N}}; each C_{ij} avoids at most 2 indices i, j (hence lies in B_k for all others), so N of the C_{ij}'s avoid at most 2N indices, so when there are more than 2N of the B_k's then at least one index k is unavoided: C_{i₁j₁}, …, C_{i_N j_N} all lie in this B_k, and f″(C_{i₁j₁}, …, C_{i_N j_N}) ⊆ f″(B_k, …, B_k) = 0 by the hypothesis that f vanishes strictly on each individual B_k, which contradicts our choice of the collection {C_{i₁j₁}, …, C_{i_N j_N}}. Thus there cannot be more than 2N of the B_k's. In particular, there are at most 2N distinct λ's for which the inner ideals B_λ := U_{λ1−z}(J) satisfy f strictly, since such a family {B_{λ_k}} is automatically relatively prime: the scalar polynomials k_i(t) = Π_{j≠i} (λ_j − t)/(λ_j − λ_i) have Σ_i k_i(t) = 1, hence substituting z for t yields J = U_1(J) = U_{Σ k_i(z)}(J) = Σ_{i,j} J_{ij}, where J_{ii} := U_{k_i(z)}(J) ⊆ ∩_{k≠i} U_{λ_k 1−z}(J) = C_{ii} and J_{ij} := U_{k_i(z),k_j(z)}(J) ⊆ ∩_{k≠i,j} U_{λ_k 1−z}(J) = C_{ij}, therefore J = Σ_{i,j} J_{ij} ⊆ Σ_{i,j} C_{ij}. The size inequalities come from the Hearty Spectral Relations and the f-Spectral Bound.
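The partition-of-unity claim Σᵢ kᵢ(t) = 1 for the scalar polynomials kᵢ(t) = Π_{j≠i}(λ_j − t)/(λ_j − λᵢ) is just Lagrange interpolation of the constant function 1; a quick exact check in Python (our own illustration, with arbitrary sample scalars):

```python
from fractions import Fraction
from functools import reduce

# Verify sum_i k_i(t) = 1 for k_i(t) = prod_{j≠i} (λ_j − t)/(λ_j − λ_i),
# working with exact rational polynomial coefficients in increasing degree.

def pmul(p, q):   # polynomial product
    r = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def padd(p, q):   # polynomial sum
    n = max(len(p), len(q))
    p = p + [Fraction(0)] * (n - len(p))
    q = q + [Fraction(0)] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

lams = [Fraction(v) for v in (0, 1, 2, 5, 7)]    # distinct scalars λ_1,...,λ_5

total = [Fraction(0)]
for i, li in enumerate(lams):
    # each factor (λ_j − t)/(λ_j − λ_i) is the linear polynomial below
    factors = [[lj / (lj - li), Fraction(-1) / (lj - li)]
               for j, lj in enumerate(lams) if j != i]
    total = padd(total, reduce(pmul, factors))

# sum_i k_i(t) = 1: constant coefficient 1, all higher coefficients 0
assert total[0] == 1 and all(c == 0 for c in total[1:])
```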
4.2. Resolvent. The complement of the spectrum is the Λ-resolvent Resolvent_{Λ,J}(z) of z, the set of λ so that λ1 − z is invertible in the unital hull. In a division algebra the spectrum of z contains exactly one element if z = λ1 ∈ Φ1, otherwise it is empty, so the resolvent contains all but at most one element of Λ. Good things happen when the resolvent is big enough. The saying "many scalars make light work" indicates that some of life's problems are due to a scalar deficiency, and taking a scalar supplement may make difficulties go away of their own accord.

Big Definition. If J is a Jordan algebra over a field Φ, a set of scalars Λ₀ ⊆ Φ will be called big (relative to J) if Λ₀ is infinite and |Λ₀| > dim J. We will be particularly interested in the case where J is a Jordan algebra over a big field (in the sense that Φ itself is big relative to J). Note that every algebra over a field can be imbedded in an algebra J_Ω over a big field (take any infinite Ω with |Ω| > dim_Φ J, since dim_Ω J_Ω = dim_Φ J). Now bigness isn't affected by finite modifications, due to the infinitude of Λ₀, so if the resolvent is big in the above sense then we also have |Resolvent_{Λ,J}(z)| − 1 > dim Ĵ, which guarantees that there are too many elements (λ1 − z)⁻¹ in the unital hull Ĵ, for nonzero λ in the resolvent, for them all to be linearly independent. But clearing denominators from a linear dependence relation Σᵢ αᵢ(λᵢ1 − z)⁻¹ = 0 yields
an algebraic dependence relation p(z) = 0 for z, so z is algebraic over Φ.

Amitsur's Big Resolvent Trick.¹ (1) If J is a Jordan algebra over a field Φ, an element z ∈ J will be algebraic over Φ if it has a big resolvent (e.g. if Φ is big and z has a small spectrum |Spec_J(z)| < |Φ|). (2) If J is a Jordan division algebra over a big algebraically closed field Φ, then J = Φ1 is split. (3) If J is a Jordan algebra over a big field then the Jacobson radical is properly nil (in the sense that it remains nil in all homotopes): Rad(J) = PNil(J).

This leads to the main structural result.

Big Primitive Exceptional Classification. A primitive i-exceptional Jordan algebra over a big algebraically closed field Φ is a simple split Albert algebra Alb(Φ).

Proof Sketch: The heart ♡(J) = s-id(J) ≠ 0 exists by Primitive Exceptional Heart, using primitivity and i-exceptionality, so there exists an f ∈ s-id(X) of some finite degree N which does not vanish strictly on J, hence by the Hearty Spectral Size Relations there is a uniform bound 2N + 1 on spectra of hearty elements. Once the heart has a global bound on spectra over a big field Φ, by Amitsur's Big Resolvent Trick (1) it is algebraic. Then it is I-genic, and by Eigenvalues Bound Idempotents spectral boundedness yields I-finiteness, so by I-finite Capacity the heart ♡(J) has capacity. But then by the Capacious Heart Principle J = ♡(J) is simple with capacity, and by the Classical Structure Theorem the only i-exceptional simple algebra it can be is an Albert algebra over its center Ω. Over a big algebraically closed field Φ we must have Ω = Φ by Amitsur's Big Resolvent Trick (2), and J must be split.
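The dependence-plus-clearing-denominators step behind Amitsur's Trick can be watched in miniature. The following Python sketch (our own illustration over M₂(ℚ); all helper names are ad hoc) finds a linear dependence among five resolvents (λᵢ1 − z)⁻¹ living in a 4-dimensional algebra, and verifies that clearing denominators yields a nonzero polynomial p with p(z) = 0.

```python
from fractions import Fraction

# Five resolvents (λ_i·1 − z)^{-1} in the 4-dimensional algebra of 2x2
# matrices must be linearly dependent; clearing denominators from
# sum_i α_i (λ_i·1 − z)^{-1} = 0 gives p(t) = sum_i α_i prod_{j≠i} (λ_j − t)
# with p(z) = 0, exhibiting z as algebraic.

def mul(a, b):
    return [[sum(a[i][k]*b[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def inv2(a):   # exact inverse of a 2x2 matrix
    d = a[0][0]*a[1][1] - a[0][1]*a[1][0]
    return [[a[1][1]/d, -a[0][1]/d], [-a[1][0]/d, a[0][0]/d]]

def null_vector(rows):
    """A nonzero rational solution alpha of the homogeneous system rows·alpha = 0."""
    m = [row[:] for row in rows]; ncols = len(m[0]); pivots = []; r = 0
    for c in range(ncols):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        m[r] = [x / m[r][c] for x in m[r]]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                m[i] = [x - m[i][c]*y for x, y in zip(m[i], m[r])]
        pivots.append(c); r += 1
    free = next(c for c in range(ncols) if c not in pivots)
    alpha = [Fraction(0)] * ncols; alpha[free] = Fraction(1)
    for row, c in zip(m, pivots):
        alpha[c] = -row[free]
    return alpha

z = [[Fraction(0), Fraction(1)], [Fraction(2), Fraction(1)]]   # eigenvalues 2, −1
lams = [Fraction(v) for v in (3, 4, 5, 6, 7)]                  # all in the resolvent
res = [inv2([[l - z[0][0], -z[0][1]], [-z[1][0], l - z[1][1]]]) for l in lams]

# stack the 5 resolvents as columns of a 4x5 system and find a dependence
rows = [[res[i][p][q] for i in range(5)] for p in range(2) for q in range(2)]
alpha = null_vector(rows)

# p(z) = sum_i alpha_i * prod_{j≠i} (λ_j·1 − z) must be the zero matrix
p_of_z = [[Fraction(0)] * 2 for _ in range(2)]
for i, ai in enumerate(alpha):
    term = [[Fraction(1), Fraction(0)], [Fraction(0), Fraction(1)]]
    for j, lj in enumerate(lams):
        if j != i:
            term = mul(term, [[lj - z[0][0], -z[0][1]], [-z[1][0], lj - z[1][1]]])
    p_of_z = [[p_of_z[r][c] + ai * term[r][c] for c in range(2)] for r in range(2)]

assert any(a != 0 for a in alpha)
assert p_of_z == [[0, 0], [0, 0]]
```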
5. Semiprimitive Imbedding

The final step is to analyze the structure of prime algebras. Note that we will go directly from primitive to prime without passing simple: the ultrafilter argument works in the setting of prime nondegenerate algebras, going from nondegenerate to semiprimitive to primitive over big fields. Even if we started with a simple algebra, the simplicity would be destroyed in the following passage from nondegenerate to semiprimitive (even in the associative theory there are simple radical rings).

¹Though not quite like the apple dropping on Newton's head, Amitsur attributed the discovery of his Trick to having to teach a course on differential equations, where he realized that spectra and resolvents made sense in algebraic settings.
Imbedding Theorem. Every prime nondegenerate Jordan algebra J can be imbedded in a semiprimitive Jordan algebra J̃ ⊆ ∏_{x∈X} J_x for primitive algebras J_x over a big algebraically closed field Ω, in such a way that J and J̃ satisfy exactly the same strict identities (in particular, J is i-exceptional iff J̃ is).
Proof Sketch: The centroid Γ of a prime algebra J is always an integral domain acting faithfully on J [cf. III.1.12], and J is imbedded in the algebra of fractions J₁ = Γ⁻¹J over the field of fractions Ω₀ = Γ⁻¹Γ. Nondegeneracy of J guarantees [after some hard work] that there are no elements z ≠ 0 of J which are either (1) strictly properly nilpotent of bounded index in the Ω₀-algebra J₂ = J₁[T] of polynomials in a countable set of indeterminates T (in the sense that there exists n₂ = n₂(z) with z^{(n₂,y₂)} = 0 for all y₂ ∈ J₂); (2) properly nilpotent in the Ω₀-algebra J₃ = Seq(J₂) of all sequences from J₂, with J₂ imbedded as constant sequences (in the sense that for each y₃ ∈ J₃ there is n₃ = n₃(z, y₃) with z^{(n₃,y₃)} = 0); (3) properly quasi-invertible in the Ω-algebra J₄ = Ω ⊗_{Ω₀} J₃ for a big algebraically closed field Ω with |Ω| > dim_{Ω₀} J₃ = dim_Ω J₄ (in the sense that z is in Rad(J₄) = PNil(J₄) by Amitsur's Big Resolvent (3)). This guarantees J ∩ Rad(J₄) = 0, so J is imbedded in the semiprimitive Ω-algebra J̃ = J₄/Rad(J₄). Moreover, the scalar extension J₁ inherits all strict identities from J, the scalar extension J₂ inherits all strict identities from J₁, the direct product J₃ inherits all identities from J₂, the scalar extension J₄ inherits all strict identities from J₃, and the quotient J̃ inherits all identities from J₄. Thus J̃ inherits all strict identities from J, and conversely the subalgebra J inherits all identities from J̃, so they have exactly the same strict identities. Semiprimitive algebras are subdirect sums J̃ ⊆ ∏_{x∈X} J_x for primitive algebras J_x = J̃/K_x, for Ω-ideals K_x in J̃ with ∩_{x∈X} K_x = 0. Note the algebraically closed field Ω remains big for each J_x: |Ω| > dim J₄ ≥ dim J̃ ≥ dim J_x.
6. The Ultraproduct Argument

We have gone a long way from J up to the direct product ∏_x J_x, and we have to form an ultraproduct to get back down to something resembling J. This will require a basic understanding of filters and ultrafilters.

Filter Definition. A nonempty collection F ⊆ P(X) of subsets of a given set X is called a filter on X if it is
(F1) closed under finite intersections: Yᵢ ∈ F ⟹ Y₁ ∩ … ∩ Y_n ∈ F;
(F2) closed under enlargement: Y ∈ F, Y ⊆ Z ⟹ Z ∈ F;
(F3) proper: ∅ ∉ F.
If F is a filter on a set X, we obtain the restriction filter F|Y of F to any set Y in the filter:
(RF) F|Y = {Z ∈ F | Z ⊆ Y} (Y ∈ F).
An ultrafilter is a maximal filter; maximality is equivalent to the extremely strong condition (UF): for all Y, either Y or its complement X∖Y belongs to F. By Zorn's Lemma, any filter can be enlarged to an ultrafilter. If F is an ultrafilter on X, the restriction filter F|Y is an ultrafilter on Y for any Y ∈ F. Filters should be thought of as "neighborhoods of a (perhaps ideal) point".

Zero Set Definition. For an element a in the direct product A = ∏_{x∈X} A_x of linear algebraic systems (abelian groups with additional structure), we define the zero and support sets of a to be

Zer(a) := {x ∈ X | a(x) = 0}, Supp(a) := {x ∈ X | a(x) ≠ 0}.

We are interested in filters only to build ultrafilters, and we need ultrafilters only to build ultraproducts.

Ultraproduct Definition. If A = ∏_{x∈X} A_x is the direct product of linear algebraic systems and F is an ultrafilter on X, the ultraproduct A/F is the quotient A/≡_F of A by the congruence

a ≡_F b iff a agrees with b on some Y ∈ F (a(x) = b(x) for all x ∈ Y);

or, equivalently, the quotient A/I(F) of A by the ideal I(F) = {a ∈ A | a ≡_F 0} = {a ∈ A | Zer(a) ∈ F}. Intuitively, the ultraproduct consists of "germs" of functions (as in the theory of varieties or manifolds), where we identify two functions if they agree "locally" on some "neighborhood" Y belonging to the ultrafilter. While direct products inherit all identities satisfied by their factors, they fail miserably with other properties (such as being integral domains, or simple). Unlike direct products, ultraproducts preserve "most" algebraic properties.

Basic Fact of Ultraproducts. Any elementary property true of almost all factors A_x is inherited by any ultraproduct (∏ A_x)/F. Thus: (1) Any ultraproduct of division algebras is a division algebra; any ultraproduct of fields (or algebraically closed fields) is a field (or algebraically closed field). (2) Any ultraproduct of special (or i-special) algebras is special (or i-special).
(3) Any ultraproduct of split Albert algebras over (resp. algebraically
closed) fields is a split Albert algebra over a (resp. algebraically closed) field. (4) For any particular Y ∈ F we can disregard all factors A_x coming from outside Y: (∏_{x∈X} A_x)/F ≅ (∏_{y∈Y} A_y)/(F|Y).

Elementary is here a technical term from mathematical logic. Roughly, it means a property which can be described (using universal quantifiers) in terms of a finite number of elements of the system. For example, algebraic closure of a field requires that each nonconstant polynomial have a root in the field, and this is elementary [for each fixed n > 1 and fixed α₀, …, α_n in Φ there exists a λ ∈ Φ with Σ_{i=0}^n αᵢλⁱ = 0]. However, simplicity of an algebra makes a requirement on sets of elements (ideals), or on existence of a finite number n of elements without any bound on n [for each fixed a ≠ 0 and b in A there exists an n and a set c₁, …, c_n, d₁, …, d_n of 2n elements with b = Σ_{i=1}^n cᵢadᵢ]. The trouble with such a condition is that as x ranges over X the numbers n(x) may tend to infinity, so that there is no finite set of elements cᵢ(x), dᵢ(x) in the direct product with b(x) = Σ_{i=1}^n cᵢ(x)a(x)dᵢ(x) for all x.

Finite Dichotomy Principle. If each factor A_x is one of a finite number of types {T₁, …, T_n}, then any ultraproduct (∏_{x∈X} A_x)/F is isomorphic to a homogeneous ultraproduct (∏_{x∈X_i} A_x)/(F|X_i) of factors of type T_i for some i = 1, 2, …, n (where X_i := {x ∈ X | A_x has type T_i}). In particular, if each A_x is i-special or a split Albert algebra over a field, then any ultraproduct (∏_{x∈X} A_x)/F is either i-special or it is a split Albert algebra over a field.

Proof Sketch: By hypothesis X = X₁ ∪ … ∪ X_n is a finite union (for any x the factor A_x has some type T_i, so x ∈ X_i) with complement (X∖X₁) ∩ … ∩ (X∖X_n) = ∅ ∉ F by (F3), so by the finite intersection property (F1) some X∖X_i ∉ F, hence by the ultrafilter property (UF) X_i ∈ F, and then by Basic Fact (4) (∏_{x∈X} A_x)/F ≅ (∏_{x∈X_i} A_x)/(F|X_i).
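On a finite index set every ultrafilter is principal, so the axioms and the Finite Dichotomy step can be checked exhaustively. A small Python sketch (our own illustration; the set X, the point p, and the partition are arbitrary choices):

```python
from itertools import chain, combinations

# A toy check of the filter axioms and Finite Dichotomy: on a finite set,
# the principal ultrafilter F_p = {Y ⊆ X : p ∈ Y} satisfies (F1)-(F3) and
# (UF), and for any finite partition exactly one block lies in F_p.

X = frozenset(range(6))

def powerset(s):
    s = list(s)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

p = 4
F = [Y for Y in powerset(X) if p in Y]       # the principal ultrafilter at p

# (F1) closed under intersection, (F2) closed under enlargement, (F3) proper
assert all(Y & Z in F for Y in F for Z in F)
assert all(Z in F for Y in F for Z in powerset(X) if Y <= Z)
assert frozenset() not in F

# (UF): for every Y, exactly one of Y and its complement X∖Y belongs to F
assert all((Y in F) != (X - Y in F) for Y in powerset(X))

# Finite Dichotomy: partition X into three "types"; exactly one block is in F
partition = [frozenset({0, 1}), frozenset({2, 3, 4}), frozenset({5})]
assert sum(1 for block in partition if block in F) == 1
assert frozenset({2, 3, 4}) in F             # the block containing p
```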
Prime Example. If a linear algebraic system is prime in the sense that the product of two nonzero ideals is again nonzero, then every two nonzero ideals have nonzero intersection. If A₀ ≠ 0 is a prime nonzero subsystem of A = ∏_{x∈X} A_x, the support filter of A₀ is that generated by the supports of all the nonzero elements of A₀,

F(A₀) = {Z ⊆ X | Z ⊇ Supp(a₀) for some 0 ≠ a₀ ∈ A₀}.

A₀ remains imbedded in the ultraproduct (∏ A_x)/F for any ultrafilter F containing F(A₀).

Proof Sketch: Here F(A₀) is a nonempty collection [since A₀ ≠ 0] of nonempty subsets [since a₀ ≠ 0 implies Supp(a₀) ≠ ∅] as in (F3), which is clearly
closed under enlargement as in (F2), and is closed under intersections as in (F1), since if a₁, a₂ ≠ 0 in A₀ then by primeness of A₀ the intersection I_{A₀}(a₁) ∩ I_{A₀}(a₂) ≠ 0 contains some a₃ ≠ 0, and Supp(a₃) ⊆ Supp(a₁) ∩ Supp(a₂). A₀ remains imbedded since if a₀ ≠ 0 disappears in A/F then Zer(a₀) ∈ F as well as Supp(a₀) ∈ F(A₀) ⊆ F, so by (F1) Zer(a₀) ∩ Supp(a₀) = ∅ ∈ F, contrary to (F3).
Now we can put all the pieces together, and wave the ultra wand.

Prime Dichotomy Theorem. Every prime nondegenerate Jordan algebra of characteristic ≠ 2 is either i-special or a form of a split Albert algebra. Every simple Jordan algebra of characteristic ≠ 2 is either i-special or an Albert algebra Jord(N, c).
Proof Sketch: By Imbedding a prime nondegenerate J imbeds in a direct product J̃ ⊆ ∏_x J_x for primitive algebras J_x over big algebraically closed fields Ω_x, which by the Big Primitive Exceptional Classification are either i-special or split Albert algebras Alb(Ω_x); then by the Prime Example J imbeds in an ultraproduct (∏_{x∈X} J_x)/F, which by Finite Dichotomy is itself either i-special or a split Albert Alb(Ω) over a field Ω. In the second case we claim that J₁ = ΩJ is all of Alb(Ω), so J is indeed a form of a split Albert algebra. Otherwise J₁ would have dimension < 27 over Ω, as would the semisimple algebra J₂ = J₁/Rad(J₁) and all of its simple summands; then by the finite-dimensional theory these summands must be special, so J₂ is too, yet J remains imbedded in J₂ since J ∩ Rad(J₁) = 0 [J is nondegenerate and the finite-dimensional Rad(J₁) is nilpotent], contrary to the assumed i-exceptionality of J. If J is simple i-exceptional we will show it is already 27-dimensional over its centroid Γ, which is a field, and therefore again by the finite-dimensional theory J ≅ Jord(N, c). Now J is also prime and nondegenerate, so applying the prime case (taking as scalars Φ = Γ) gives ΩJ = Alb(Ω) with a natural epimorphism J_Ω = Ω ⊗_Γ J → ΩJ = Alb(Ω). In characteristic ≠ 2 the scalar extension J_Ω remains simple over Ω [the Jacobson Density Theorem can be used to show that central simple linear algebras are strictly simple in the sense that all scalar extensions by a field remain simple], so this epimorphism must be an isomorphism, and dim_Γ(J) = dim_Ω(J_Ω) = dim_Ω(Alb(Ω)) = 27.
Thus the magical wand has banished forever the possibility of a prime i-exceptional algebraic setting for quantum mechanics.
Part III The Classical Theory
In this Part we give a self-contained treatment of the Classical Structure Theory for Jordan algebras with capacity, as surveyed in Chapters 5 and 6 of Part II. Throughout this part we will work with linear Jordan algebras over a fixed (unital, commutative, associative) ring of scalars Φ containing ½. All spaces and algebras will be assumed to be Φ-modules, all ideals and subalgebras will be assumed to be Φ-submodules, and all maps will be assumed to be Φ-linear. This ring of scalars will remain pretty dormant throughout our discussion: except for the scalar ½, we almost never care what the scalars are, and we have few occasions to change scalars (except to aid the discussion of "strictness" of identities and radicals). Our approach to the structure theory is "ring-theoretic" and "intrinsic", rather than "linear" and "formal": we analyze the structure of the algebras directly over the given ring of scalars, rather than first analyzing finite-dimensional algebras over an algebraically closed field and then analyzing their possible forms over general fields. We assume that anyone who voluntarily enters this Part, with its detailed treatment of results, does so with the intention of learning at least some of the classical methods and techniques of nonassociative algebra, perhaps with an eye to conducting their own research in the area. To this end, we include a series of exercises, problems, and questions. The exercises are included in the body of the text, and are meant to provide the reader with routine practice in the concepts and techniques of a particular result (often involving alternate proofs). The problems are listed at the end of each chapter, and are of broader scope (often involving results and concepts which are extensions of those in the text, or which go in completely different directions), and are meant to provide general practice in creating proofs.
The questions, at the very end of each chapter, provide even more useful practice for a budding research mathematician: they involve studying (sometimes even formulating) a concept or problem without any hint of what sort of an answer to expect. For some of these exercises, problems, and questions there are hints given at the end of the book, but these should be consulted as a last resort: the point of the problems is the experience of creating a proof, not the proof itself.
First Phase:
Back to Basics
In this phase we review the foundations of our subject, the various categories of algebraic structures that we will encounter, especially Jordan and alternative algebras. (We assume the reader already knows the fundamental facts about associative rings.) We give the basic examples, the three most important special Jordan algebras: the full plus-algebras, the hermitian algebras, and the quadratic factors. Two basic labor-saving devices, the Macdonald and Shirshov-Cohn Principles, allow us to prove that certain formulas will be valid in all Jordan algebras as soon as we verify that they are valid in all associative algebras. From these we quickly get the fundamental identities, mostly involving the U-operator, which are the basic tools in dealing with Jordan algebras. With these tools it is a breeze to construct two related edifices, the theories of invertibility and isotopy in Jordan algebras.
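A first instance of such a formula, which we will meet officially later: in a full plus-algebra A⁺ with x∘y = ½(xy + yx), the Jordan-theoretic operator U_x = 2L_x² − L_{x²} collapses to the associative sandwich U_x y = xyx. A quick random check in Python (our own illustration over M₂(ℚ); the helper names are ad hoc):

```python
from fractions import Fraction
import random

# In the full plus-algebra A^+ (x∘y = (xy+yx)/2) the Jordan-theoretic
# U-operator U_x = 2 L_x^2 − L_{x^2} collapses to U_x y = xyx — verified
# here on random 2x2 rational matrices.

def mul(a, b):
    return [[sum(a[i][k]*b[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def circ(a, b):   # the Jordan product x∘y = (xy + yx)/2
    ab, ba = mul(a, b), mul(b, a)
    return [[(ab[i][j] + ba[i][j]) / 2 for j in range(2)] for i in range(2)]

rng = random.Random(0)
def rand_mat():
    return [[Fraction(rng.randint(-5, 5)) for _ in range(2)] for _ in range(2)]

for _ in range(20):
    x, y = rand_mat(), rand_mat()
    x2 = mul(x, x)
    jordan_U = [[2*circ(x, circ(x, y))[i][j] - circ(x2, y)[i][j]
                 for j in range(2)] for i in range(2)]
    assert jordan_U == mul(mul(x, y), x)     # U_x y = xyx
```

The associative computation behind the assertion is exactly the kind of verification the Macdonald and Shirshov-Cohn Principles let us transfer to all Jordan algebras.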
CHAPTER 1
The Category of Jordan Algebras In this chapter we introduce the hero of our story, the Jordan algebra, and in the next chapter his trusty sidekick, the alternative algebra. As we have mentioned several times heuristically, and will see in detail in Chapter 15, alternative algebras arise naturally as coordinates of Jordan algebras with three or more connected orthogonal idempotents.
1. Categories

We will often state results informally in the language of categories and functors. This is a useful language that you should be fluent in at the colloquial level. Recall that a category C consists of a collection of objects and a set Mor(X, Y) of morphisms for each pair of objects X, Y, such that these morphisms can be patched together in a monoid-like way: we have a "composition" Mor(Y, Z) × Mor(X, Y) → Mor(X, Z) which is "associative" and has a "unit" 1_X for each object X:

(h ∘ g) ∘ f = h ∘ (g ∘ f) and f ∘ 1_X = 1_Y ∘ f = f

for all f ∈ Mor(X, Y), g ∈ Mor(Y, Z), h ∈ Mor(Z, W). In distinction to the mappings encountered in the set-theoretic aspects of calculus and elsewhere, the choice of the codomain or, more intuitively, target Y for f ∈ Mor(X, Y) is just as important as its domain X [and must be carefully distinguished from the range or image im(f), the set of values actually taken on if f happens to be a concrete set-theoretic mapping], so that a composition f ∘ g is not defined unless the domain of f and the codomain of g match up. An analogous situation occurs in multiplication of an m × n matrix A and a p × q matrix B, which is not defined unless the middle indices n = p coincide, whereas at a vector-space level the corresponding linear maps α : W → Z, β : V → U can be composed as long as W ⊇ im(β). While we seldom need to make this distinction, it is crucial in algebraic topology and in abstract categorical settings. You should always think intuitively of the objects as sets with some sort of structure, and morphisms as the set-theoretic mappings which preserve that structure (giving rise to the usual categories of sets, monoids, groups, rings, Φ-modules, linear Φ-algebras, etc.). In
algebraic situations, the morphisms are called homomorphisms. However, it is important that category is really an abstract notion. A close parallel is the case of projective geometry, where we think of a line as a set of points, but we adopt an abstract definition where points and lines are undefined concepts, which makes available the powerful principle of duality between points and lines. A functor F : C → C′ from one category to another takes objects to objects and morphisms to morphisms in an arrow-preserving manner: each map f : A → B of C-objects induces a map F(f) : F(A) → F(B) of C′-objects in a "homomorphic" manner, F(1_X) = 1_{F(X)}, F(g ∘ f) = F(g) ∘ F(f). We will always think of a functor as a construction of C′-objects out of C-objects, whose recipe involves only ingredients from C, so that a morphism of C-objects preserves all the ingredients and therefore induces a morphism of the constructed C′-objects.¹ Standard examples of functors are "forgetful functors" which forget part of the structure (e.g. from algebraic objects to the underlying sets), or constructions of group algebras, polynomial rings, tensor algebras, homology groups, etc. A more complicated example is the construction of a "free gadget" in a category, a functor from sets to the given category satisfying a universal property (we will come back to this in Appendix A). We always have a trivial identity functor 1_C : C → C which does nothing to objects or morphisms, and if we have functors F : C → C′, G : C′ → C″ then we can form the obvious composite functor G ∘ F : C → C″ sending objects X → G(F(X)) and morphisms f → G(F(f)). Intuitively, the composite functor constructs new objects in a two-stage process. We will frequently encounter functors which "commute", F ∘ G = G ∘ F, which means it doesn't matter in what order we perform the two constructions; the end result is the same.
2. The Category of Linear Algebras

There are certain concepts which apply quite generally to all linear algebras, associative or not, though for algebraic systems with quadratic operations (such as quadratic Jordan algebras) they must be modified. Recall the terminology introduced in II Chapter 2 Section 1.

¹A reader with a basic functorial literacy will observe that we deal strictly with covariant functors, never the dual notion of a contravariant functor F which acts in an arrow-reversing manner on maps f : A → B to produce F(f) : F(B) → F(A) in an "anti-homomorphic" manner, F(g ∘ f) = F(f) ∘ F(g). Important examples of contravariant functors are the duality functor V → V* from vector spaces to vector spaces, and cohomology from topological spaces to groups (which Hilton and Wylie cogently, but hopelessly, argued should be called "contra-homology").
Definition 1.1 (Category of Linear Algebras). (1) The category of linear Φ-algebras has as objects the linear Φ-algebras and as morphisms the Φ-algebra homomorphisms. Here a linear Φ-algebra is a Φ-module A equipped with a bilinear multiplication or product A × A → A,² which we will write as (x, y) ↦ x·y. Bilinearity is equivalent to the left distributive law (x + y)·z = x·z + y·z, the right distributive law z·(x + y) = z·x + z·y, plus the scalar condition λ(x·y) = (λx)·y = x·(λy). A particularly important product derived from the given one is the square x² := x·x.
(2) A homomorphism φ : A → A′ between two such linear algebras is a Φ-linear map of modules which preserves multiplication, φ(x·y) = φ(x)·′φ(y). When dealing with commutative algebras in the presence of ½, it suffices if φ preserves squares:

φ homomorphism ⟺ φ(x²) = φ(x)² (A commutative, ½ ∈ Φ),

since then linearization gives 2φ(x·y) = φ((x + y)² − x² − y²) = φ(x + y)² − φ(x)² − φ(y)² = 2φ(x)·′φ(y). We have the usual notions of monomorphism and epimorphism. An isomorphism is a homomorphism that has an inverse, equivalently, that is bijective as a map of sets; an automorphism φ : A → A is an isomorphism of an algebra with itself. The set of automorphisms φ of A under composition forms the automorphism group Aut(A).
(3) The infinitesimal analogue of an automorphism is a derivation: a derivation δ of A is a linear transformation δ : A → A satisfying the product rule δ(x·y) = δ(x)·y + x·δ(y). A Lie algebra (cf. II.2.4) is a linear algebra with a product (always denoted [x, y]) which is anticommutative, [x, y] = −[y, x], and satisfies the Jacobi identity [x, [y, z]] + [y, [z, x]] + [z, [x, y]] = 0. The set of derivations of any linear algebra A forms a Lie algebra of operators Der(A), the derivation algebra, under the Lie bracket [δ, δ′] = δ ∘ δ′ − δ′ ∘ δ.
²The product in linear algebras is usually denoted simply by juxtaposition, xy, and we will use this notation for products in associative algebras, or in alternative algebras we are using as "coordinates" for a Jordan algebra. The symbol x • y is a generic one-symbol-fits-all-algebras notation, but has the advantage over juxtaposition that it at least provides a place to hang a tag labelling the product, such as x •_A y or x •′ y as below, reminding us that the second product is different from the first.
The Category of Jordan Algebras
Exercise 1.1 (1) Verify that Aut(A) is indeed a group of linear transformations, and Der(A) a Lie algebra of transformations. (2) Show that δ is a derivation of A iff φ := 1_A + εδ is an automorphism of the algebra of dual numbers A[ε] := A_{Φ[ε]} (with defining relation ε² = 0). Thinking of ε as an "infinitesimal", this means δ is the "infinitesimal part" of an "infinitesimal" automorphism (one infinitesimally close to the identity automorphism). (3) Show that δ is a derivation of A iff exp(tδ) := Σ_{n=0}^∞ tⁿδⁿ/n! is an automorphism of the formal power series algebra A[[t]] whenever this makes sense (e.g. for algebras over the rationals, or over scalars of characteristic p when δ is nilpotent of index p, δ^p = 0).
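Exercise 1.1(2) can be rehearsed concretely. In the sketch below (our illustration; the pair encoding and the names dmul, phi are assumptions), an element a + εb of A[ε] is stored as a pair (a, b), A is the 2×2 integer matrices, δ is an inner derivation, and we check that φ = 1_A + εδ is multiplicative:

```python
# Dual numbers A[eps]: (a, b) stands for a + eps*b, with eps^2 = 0, so
# (a + eps b)(c + eps d) = ac + eps(ad + bc).

def mul(a, b):
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def add(a, b):
    return tuple(tuple(a[i][j] + b[i][j] for j in range(2)) for i in range(2))

def sub(a, b):
    return tuple(tuple(a[i][j] - b[i][j] for j in range(2)) for i in range(2))

C = ((1, 2), (0, 3))              # any fixed matrix gives a derivation

def delta(x):                     # inner derivation x -> [C, x]
    return sub(mul(C, x), mul(x, C))

def dmul(u, v):                   # product in A[eps]
    (a, b), (c, d) = u, v
    return (mul(a, c), add(mul(a, d), mul(b, c)))

def phi(u):                       # phi = 1 + eps*delta
    a, b = u
    return (a, add(b, delta(a)))

pairs = [(((1, 2), (3, 4)), ((0, 1), (1, 0))),
         (((2, 0), (5, 1)), ((1, 1), (1, 1)))]
for u in pairs:
    for v in pairs:
        assert phi(dmul(u, v)) == dmul(phi(u), phi(v))
```

Unwinding the assertion by hand reproduces exactly the product rule δ(ac) = δ(a)c + aδ(c), which is the point of the exercise.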
Definition 1.2 (New Objects). A subalgebra B ⊆ A of a linear algebra A is a Φ-submodule³ closed under multiplication: B • B ⊆ B. An ideal K ◁ A is a Φ-submodule closed under multiplication by A: A • K + K • A ⊆ K. A proper ideal is one different from the improper ideals A, 0. A linear algebra is simple if it has no proper ideals and is nontrivial (i.e. A • A ≠ 0). Any ideal determines a quotient algebra Ā = A/I and a canonical projection π: A → Ā in the usual manner. As in II.2.3 the direct product Π_I A_i of a family of algebras A_i is the cartesian product under the componentwise operations, and the direct sum ⊕_I A_i is the subalgebra of all tuples with only a finite number of nonzero entries; π_i denotes the canonical projection onto the ith component A_i. An algebra is a subdirect product A ≤_s Π_I A_i of algebras if it is imbedded in the direct product in such a way that for each i ∈ I the canonical projection maps onto all of A_i, π_i(A) = A_i, equivalently A_i ≅ A/K_i for ideals K_i ◁ A with ∩_I K_i = 0.

There is a standard procedure for extending the set of allowable scalars, which is frequently used to pass to an algebraically closed field where the existence of roots of equations simplifies the structure theory.

Definition 1.3 (Scalar Extension Functor). A ring of scalars is a unital, commutative, associative ring. An extension Ω of Φ is a ring of scalars which is a Φ-algebra; note that it need not contain Φ [the image Φ1_Ω need not be a faithful copy of Φ]. For any linear Φ-algebra A and extension Ω of Φ, the scalar extension A_Ω is the tensor product A_Ω := Ω ⊗_Φ A with its natural Ω-linear structure and with the natural induced product (ω ⊗ x) • (ω′ ⊗ y) := ωω′ ⊗ x • y. We obtain a scalar extension functor from the category of Φ-algebras to the category of Ω-algebras, sending objects A to A_Ω and Φ-morphisms
³We have restrained our natural impulse to call these (linear) subspaces, since some authors reserve the term space for vector spaces over a field.
φ: A → A′ to Ω-morphisms φ_Ω = 1_Ω ⊗ φ: A_Ω → A′_Ω by
φ_Ω(ω ⊗ x) = ω ⊗ φ(x).

Although we have defined the product and the map φ_Ω only for the basic tensors ω ⊗ x, the fact that the recipes are Φ-bilinear guarantees that they extend to a well-defined product and map on all sums, by the universal property of tensor products⁴. Intuitively, A_Ω just consists of all formal Ω-linear combinations of elements of A, multiplied in the natural way.

Exercise 1.3 If Φ = ℤ, Ω = ℤ_n (integers mod n), A = ℚ (rationals), show that A_Ω = 0, so Φ ⊄ Ω and A ⊄ A_Ω.
If we work in a "variety" of algebras, defined as all algebras satisfying a collection of identical relations f(x, y, …) = g(x, y, …) (such as the associative law, commutative law, Jacobi identity, Jordan identity, alternative laws, etc.), then a scalar extension will stay within the variety only if the original algebra satisfies all "linearizations" of the defining identities. Satisfaction is automatic for homogeneous identities which are linear or quadratic in all variables, but for higher identities (such as the Jordan identity of degree 3 in x) it is automatic only if the original scalars are "big enough". (We will see that existence of ½ is enough to guarantee that cubic identities extend.) For non-linear identities, such as the Boolean law x² = x, scalar extensions almost never inherit the identity; indeed, if we take
Ω to be the polynomial ring Φ[t] in one indeterminate, then replacing x by tx in an identical relation f(x, y, …) = g(x, y, …) leads to Σᵢ tⁱ f⁽ⁱ⁾(x, y, …) = Σᵢ tⁱ g⁽ⁱ⁾(x, y, …), and this relation holds in A_{Φ[t]} for all x, y, … ∈ A iff (identifying coefficients of tⁱ on both sides) f⁽ⁱ⁾(x, y, …) = g⁽ⁱ⁾(x, y, …) holds in A for each homogeneous component f⁽ⁱ⁾, g⁽ⁱ⁾ of f, g of degree i in x.

⁴Somewhat rashly, we assume the reader is familiar with the most basic properties of tensor products over general rings of scalars, in particular that the elements of a tensor product X ⊗ Y are finite sums of basic tensors x ⊗ y (the latter are called "indecomposable tensors" because they aren't broken down into a sum of terms), and that it does not always suffice to check a property of the tensor product only on these basic generators. The purpose in life of the tensor product X ⊗ Y of Φ-modules is to convert bilinear maps f: X × Y → Z into linear maps f̃: X ⊗ Y → Z factoring through the canonical bilinear map X × Y → X ⊗ Y, and its Universal Property is that it always succeeds in life: to define a linear map on the tensor product it DOES always suffice to define it on the basic generators x ⊗ y, as long as this definition is bilinear as a function of x and y.
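The failure of the Boolean law under scalar extension can be seen by machine in the smallest case, a sketch of our own: take A = ℤ/2 (where every element satisfies x² = x) and Ω = (ℤ/2)[t], so the extension is just the polynomial ring over ℤ/2; the helper pmul is hypothetical.

```python
# Polynomials over Z/2, coefficients stored low-to-high degree.
def pmul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] ^= a & b          # arithmetic mod 2
    while len(out) > 1 and out[-1] == 0:
        out.pop()
    return out

one, t = [1], [0, 1]
assert pmul(one, one) == one     # the Boolean law still holds on 1
assert pmul(t, t) == [0, 0, 1]   # but (t)^2 = t^2 ...
assert pmul(t, t) != t           # ... is not t: the identity is lost
```

This is exactly the t-substitution of the paragraph above: the homogeneous components of x² and of x sit in different degrees, so the extended algebra cannot satisfy x² = x.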
3. The Category of Unital Algebras
The modern fashion is to demand that algebras have unit elements (this requirement is of course waived for Lie algebras!). Once one adopts such a convention, one is compelled to demand that homomorphisms preserve units as well.

Definition 1.4 (Category of Unital Algebras). A unital algebra is one which has a unit element 1, an element which is neutral with respect to multiplication: 1 • x = x = x • 1 for all elements x in A. As always, the unit element is unique once it exists. A unital homomorphism is a homomorphism φ: A → A′ of linear algebras which preserves the unit element, φ(1) = 1′. The category of unital linear Φ-algebras has as its objects the unital linear Φ-algebras, and as its morphisms the unital homomorphisms.⁵ In this category, the unit element is considered part of the structure, so all subalgebras B ⊆ A are required to contain the unit: 1 ∈ B. In particular, ideals do not count as subalgebras in this category.

In dealing with unital algebras one must be careful which category one is working in. If A and A′ are unital linear algebras and φ: A → A′ is a homomorphism in the category of linear algebras, then in the category of unital algebras it need not be a homomorphism, and its image need not be a subalgebra. For example, we cannot think of the 2×2 matrices as a unital subring of the 3×3 matrices, nor is the natural imbedding of 2×2 matrices as the northwest corner of 3×3 matrices a homomorphism from the unital point of view.

Exercise 1.4* (1) Verify that the northwest corner imbedding M₂(Φ) ↪ M₃(Φ) is always a homomorphism of associative algebras. (2) Use traces to show that
⁵Note that again the notation 1 for the unit is a generic one-term-fits-all-algebras notation; if we wish to be pedantic, or clear ("Will the real unit please stand up?"), we write 1_A to make clear whose unit it is. Note in the homomorphism condition how 1′ reminds us this is not the same as the previous 1. The unit element is often called the identity element. We will try to avoid this term, since we often talk about an algebra "with an identity" in the sense of "identical relation" (the Jacobi identity, the Jordan identity, etc.). Of course, in commutative associative rings the term "unit" also is ambiguous, usually meaning "invertible element" (the group of units, etc.), but already in noncommutative ring theory the term is not used this way, and we prefer this lesser of two ambiguities. It is a good idea to think of the unit as a neutral element for multiplication, just as 0 is the neutral element for addition.
if Φ is a field there is no homomorphism M₂(Φ) → M₃(Φ) of unital associative algebras.
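Exercise 1.4* can be probed numerically; the following sketch (our own, with hypothetical helpers mul and northwest) confirms on samples that the northwest imbedding preserves products but not units:

```python
# Northwest-corner imbedding M2 -> M3: multiplicative, but sends the
# 2x2 identity to a non-identity idempotent of M3.

def mul(a, b):
    n = len(a)
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(n))
                       for j in range(n)) for i in range(n))

def northwest(a):   # pad a 2x2 matrix with a zero row and column
    return tuple(tuple(a[i][j] if i < 2 and j < 2 else 0 for j in range(3))
                 for i in range(3))

I2 = ((1, 0), (0, 1))
I3 = ((1, 0, 0), (0, 1, 0), (0, 0, 1))
x, y = ((1, 2), (3, 4)), ((0, 1), (5, -2))

assert northwest(mul(x, y)) == mul(northwest(x), northwest(y))  # multiplicative
assert northwest(I2) != I3                                      # not unital
assert mul(northwest(I2), northwest(I2)) == northwest(I2)       # old unit is a mere idempotent
```

The trace argument of part (2) explains why no repair is possible: a unital homomorphism would force trace relations that a field cannot satisfy.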
Notice that in this category an algebra A is simple iff it has no proper ideals and is nonzero, because nonzero guarantees nontrivial: A • A ⊇ A • 1 = A ≠ 0.
4. Unitalization A unit element is a nice thing to have around, and by federal law each algebra is entitled to a unit: as in associative algebras, there is a standard way of formally adjoining a unit element if you don't have one already. Definition 1.5 (Unital Hull). In general a unital hull of a linear -algebra A is any extension A1 = 1 + A A obtained by adjoining a unit element. Such a hull always contains A as an ideal (hence a non-unital subalgebra). Any linear algebra is imbedded in its formal unital hull, consisting of the module A with product extending that of A and having (1; 0) act as unit, b : A (; x) ( ; y ) = ( ; y + x + x y ): b the unital hull and denote the elements (1; 0); (0; a) simply We call A by 1; a, so 1 is a faithful copy of and we write the product as (1 x) ( 1 y ) = 1 (y + x + x y ): Notice that this unitalization process depends on what we are countb to explicitly ining as the ring of scalars, so we could denote it by A clude the scalars in the notation { it would remind us exactly what we are tacking on to the original A. If A is also an algebra over a larger , b so the unital version would also be then it would be natural to form A an -algebra. But we will usually stick to the one-size- ts-all hat for the unital hull, trusting the context to make clear which unitalization we are performing. We obtain a unitalization functor from the category of linear b , and morphisms algebras to unital -algebras, sending objects A to A 0 0 c b ! A de ned as the natural ' : A ! A to the unital morphism 'b : A unital extension 'b(1 x) = 1 '(x). It is easyto verify that this
6 c = A b functor commutes with scalar extensions: A :
⁶The alert reader (so annoying to an author) will notice that we don't really have two functors U, Ω which commute, but rather a whole family of unitalization functors U (one for each category of Φ-algebras) and two scalar extension functors Ω and Ω⁽¹⁾ (one for linear algebras and one for unital algebras), which intertwine:
Notice that you can have too much of a good thing: if A already has a unit e, tacking on a formal unit will demote the old unit to a mere idempotent, and the new algebra will have a copy of Φ attached as an excrescence rather than an organic outgrowth:

Â = Φe′ ⊕ A  (e′ := 1 − e, e unit for A).

Indeed, orthogonality e′ • A = 0 = A • e′ is built-in, since 1 acts as unit by decree, and e already acted as unit on A, so the two competing claimants for unit cancel each other out; we have (e′)² = 1 − 2e + e² = 1 − 2e + e = e′, so Φe′ is another faithful copy of Φ. See Problem 3 for a tighter unitalization.
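The formal unital hull is easy to implement; here is a sketch of our own (the pair encoding and the name hmul are assumptions), taking A = M₂(ℤ), which already has a unit, so we can also watch the old unit get demoted exactly as described above:

```python
# Formal unital hull: (alpha, x) stands for alpha*1 + x, with
# (alpha, x)(beta, y) = (alpha*beta, alpha*y + beta*x + x*y).

def mul(a, b):
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def add(a, b):
    return tuple(tuple(a[i][j] + b[i][j] for j in range(2)) for i in range(2))

def smul(s, a):
    return tuple(tuple(s * a[i][j] for j in range(2)) for i in range(2))

def hmul(u, v):
    (a, x), (b, y) = u, v
    return (a * b, add(add(smul(a, y), smul(b, x)), mul(x, y)))

ZERO = ((0, 0), (0, 0))
I2 = ((1, 0), (0, 1))
one_hat = (1, ZERO)              # the adjoined unit (1, 0)
x = (0, ((1, 2), (3, 4)))        # an element of A inside the hull

assert hmul(one_hat, x) == x == hmul(x, one_hat)   # (1, 0) really is a unit

# A = M2 already had a unit e = I2; in the hull e' = 1 - e is an
# idempotent orthogonal to all of A, as claimed.
e = (0, I2)
e_prime = (1, smul(-1, I2))
assert hmul(e_prime, e_prime) == e_prime
assert hmul(e_prime, e) == (0, ZERO) == hmul(e, e_prime)
assert hmul(e_prime, x) == (0, ZERO) == hmul(x, e_prime)
```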
5. The Category of Algebras with Involution
The story of Jordan algebras is to some extent the saga of self-adjoint elements in algebras with involutions, just as the story of Lie algebras is the story of skew elements.

Definition 1.6 (Category of ∗-Algebras). A linear ∗-algebra (star-algebra, or algebra with involution) (A, ∗) consists of a linear algebra A together with an involution ∗, an anti-automorphism of period 2: a linear mapping of A to itself satisfying

(x • y)∗ = y∗ • x∗,  (x∗)∗ = x.

A ∗-homomorphism (A, ∗) → (A′, ∗′) of ∗-algebras is an algebra homomorphism φ which preserves involutions, φ(x∗) = φ(x)∗′. In any ∗-algebra we denote by H(A, ∗) the space of hermitian elements x∗ = x, and by Sk(A, ∗) the space of skew elements x∗ = −x. Any ∗-homomorphism will preserve hermitian or skew elements. The category of ∗-Φ-algebras has as objects the ∗-algebras and as morphisms the ∗-homomorphisms. Their kernels are precisely the ∗-ideals (ideals I ◁ A invariant under the involution, hence satisfying I∗ = I).

Exercise 1.6A Show that if x∗ = εx, y∗ = ε′y (ε, ε′ = ±1) for an involution ∗ in a nonassociative algebra A, then (x • y + y • x)∗ = εε′(x • y + y • x) and [x, y]∗ = −εε′[x, y], so in particular H(A, ∗) is always closed under x • y + y • x and Sk(A, ∗) under [x, y] (though for general nonassociative A these won't be Jordan or Lie algebras).
Exercise 1.6B (1) Show that a trivial linear algebra (A with A • A = 0) has no proper ideals iff it is a 1-dimensional vector space with trivial multiplication over a field Φ̄ = Φ/M for a maximal ideal M of Φ. (2) Show that a trivial linear algebra
Ω⁽¹⁾ ∘ U = U ∘ Ω. By abuse of language we will continue to talk of these functors or constructions as commuting.
with involution has no proper ∗-ideals iff it is a 1-dimensional vector space with trivial multiplication over a field Φ̄ = Φ/M for a maximal ideal M of Φ and the involution is trivial (∗ = 1_A).
Though not written into the constitution, there is a little-known federal program to endow needy algebras with a ∗, using the useful concept of opposite algebra.

Definition 1.7 (Opposite). The opposite algebra A^op of any linear algebra A is just the module A with the multiplication turned around:

A^op := A under x •^op y := y • x.

In these terms an anti-homomorphism A → A′ is nothing but a homomorphism into (or out of) the opposite algebra, A → A′^op (or A^op → A′).

Exercise 1.7 Verify the assertion that an anti-homomorphism is nothing but a homomorphism into or out of the opposite algebra.
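The slogan is easy to check by machine: transposition of matrices is the classic anti-automorphism, i.e. a homomorphism into the opposite algebra. A sketch of our own (the helpers mul, op_mul, transpose are assumptions):

```python
def mul(a, b):
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def op_mul(x, y):          # the product of A^op: x *op y := y x
    return mul(y, x)

def transpose(a):
    return tuple(tuple(a[j][i] for j in range(2)) for i in range(2))

x, y = ((1, 2), (3, 4)), ((0, 1), (5, -2))

# transpose reverses products: an anti-homomorphism of M2 ...
assert transpose(mul(x, y)) == mul(transpose(y), transpose(x))
# ... which is the same thing as a homomorphism into the opposite algebra
assert transpose(mul(x, y)) == op_mul(transpose(x), transpose(y))
```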
Proposition 1.8 (Exchange Involution). Every linear algebra A can be imbedded as a subalgebra of a ∗-algebra, its exchange algebra with exchange involution

Ex(A) := (A ⊕ A^op, ex) where (a, b)^ex := (b, a).

The symmetric and skew elements under the exchange involution are isomorphic to A as modules, but not as linear algebras:

H(Ex(A)) = {(a, a) | a ∈ A},  Sk(Ex(A)) = {(a, −a) | a ∈ A}.

In this category we again have unitalization and scalar extension functors, which commute with the exchange functor from algebras to ∗-algebras sending objects A → Ex(A) and morphisms φ → Ex(φ), where the ∗-homomorphism is defined by Ex(φ)(a, b) := (φ(a), φ(b)).
Exercise 1.8 Show that for any linear algebra A, the algebra A⁺ is isomorphic to H(Ex(A)) via x → (x, x), and A⁻ is isomorphic to Sk(Ex(A)) via x → (x, −x) (cf. II.1.3).
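Proposition 1.8 can be rehearsed concretely; in the following sketch (our own encoding of Ex(A) as pairs, with A = M₂(ℤ)) we check that ex reverses products, has period 2, and fixes or negates the indicated elements:

```python
def mul(a, b):
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def neg(a):
    return tuple(tuple(-a[i][j] for j in range(2)) for i in range(2))

def exmul(u, v):           # product in A (+) A^op: second slot multiplies backwards
    (a, b), (c, d) = u, v
    return (mul(a, c), mul(d, b))

def ex(u):                 # the exchange involution (a, b) -> (b, a)
    a, b = u
    return (b, a)

sample = [(((1, 2), (3, 4)), ((0, 1), (1, 0))),
          (((2, 0), (0, 1)), ((1, 1), (2, 5)))]
for u in sample:
    assert ex(ex(u)) == u                              # period 2
    for v in sample:
        assert ex(exmul(u, v)) == exmul(ex(v), ex(u))  # anti-automorphism

a = ((1, 2), (3, 4))
assert ex((a, a)) == (a, a)                # (a, a) is hermitian
assert ex((a, neg(a))) == (neg(a), a)      # (a, -a) is skew: ex negates it
```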
6. The Category of Jordan Algebras
As mentioned in II Sections 1.3–4, for Jordan and alternative algebras it is convenient to formulate the axioms in terms of commutators and associators, defined in any linear algebra by

[x, y] := x • y − y • x,  [x, y, z] := (x • y) • z − x • (y • z).
The commutator is a bilinear mapping from A × A to A, measuring how far the two elements are from commuting with each other; the associator is a trilinear mapping measuring how far the three elements are from associating (in the given order) with each other.

Definition 1.9 (Category of Jordan Algebras). (1) A Jordan algebra over a ring of scalars Φ containing ½ is a linear Φ-algebra J equipped with a bilinear product, denoted by x • y, which satisfies the commutative law and the Jordan identity

(JAx1) [x, y] = 0  (commutative law),
(JAx2) [x², y, x] = 0  (Jordan identity).

A Jordan algebra is unital if it has a unit element 1 in the usual sense. A homomorphism of Jordan algebras is just an ordinary homomorphism of linear algebras as in 1.1(2), φ(x • y) = φ(x) •′ φ(y).

(2) From the basic bullet product we can construct several important auxiliary products (squares, brace products, U-products, triple products, and V-products):⁷

x² = x • x,  {x, z} = 2x • z,
U_x y := 2x • (x • y) − x² • y,
{x, y, z} := U_{x,z} y := (U_{x+z} − U_x − U_z) y = 2[x • (z • y) + z • (x • y) − (x • z) • y],
V_x(z) = {x, z},  V_{x,y}(z) = {x, y, z}.

In unital Jordan algebras, setting x or y or z equal to 1 in the above shows the unit interacts with the auxiliary products by

U_x 1 = x²,  U_1 y = y,  {x, 1} = 2x,  {x, y, 1} = {x, 1, y} = {x, y}.

Homomorphisms automatically preserve all auxiliary products built out of the basic product. The category of Jordan Φ-algebras consists of all Jordan Φ-algebras with homomorphisms, while the category of unital Jordan Φ-algebras consists of all unital Jordan Φ-algebras with unital homomorphisms.
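The auxiliary products of (2) are easy to exercise by machine. The sketch below (our own, in the Jordan algebra M₂(ℚ)⁺ with x • y = (xy + yx)/2; the helper names are assumptions) checks the displayed formula for {x, y, z} and the unit relations on sample matrices:

```python
from fractions import Fraction

def mul(a, b):
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def add(a, b):
    return tuple(tuple(a[i][j] + b[i][j] for j in range(2)) for i in range(2))

def sub(a, b):
    return tuple(tuple(a[i][j] - b[i][j] for j in range(2)) for i in range(2))

def smul(s, a):
    return tuple(tuple(s * a[i][j] for j in range(2)) for i in range(2))

def m(rows):
    return tuple(tuple(Fraction(v) for v in row) for row in rows)

def bullet(x, y):                        # x . y = (xy + yx)/2
    return smul(Fraction(1, 2), add(mul(x, y), mul(y, x)))

def U(x, y):                             # U_x y = 2 x.(x.y) - x^2 . y
    return sub(smul(2, bullet(x, bullet(x, y))), bullet(bullet(x, x), y))

def brace3(x, y, z):                     # {x,y,z} = (U_{x+z} - U_x - U_z) y
    return sub(sub(U(add(x, z), y), U(x, y)), U(z, y))

x, y, z = m([[1, 2], [3, 4]]), m([[0, 1], [1, 5]]), m([[2, 0], [1, 1]])
one = m([[1, 0], [0, 1]])

rhs = smul(2, sub(add(bullet(x, bullet(z, y)), bullet(z, bullet(x, y))),
                  bullet(bullet(x, z), y)))
assert brace3(x, y, z) == rhs                       # the displayed formula
assert U(x, one) == bullet(x, x)                    # U_x 1 = x^2
assert U(one, y) == y                               # U_1 y = y
assert brace3(x, y, one) == smul(2, bullet(x, y))   # {x, y, 1} = {x, y}
assert brace3(x, one, y) == smul(2, bullet(x, y))   # {x, 1, y} = {x, y}
```

Exact rational arithmetic (Fraction) is used so the presence of ½ causes no rounding issues.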
⁷In the Jordan literature the brace product {x, y}, obtained by linearizing the square, is often denoted by a circle: x ∘ y = (x + y)² − x² − y². Jimmie McShane once remarked that it is a strange theory where the linearization of the square is a circle! We use the brace notation in conformity with our general usage for n-tads {x₁, …, xₙ} := x₁ ⋯ xₙ + xₙ ⋯ x₁, and also since it resembles the brackets of the Lie product.
(3) We have the usual notions of subalgebra B and ideal I as for any linear algebra (cf. New Objects 1.2), B • B ⊆ B, J • I ⊆ I. Subalgebras and ideals remain closed under all auxiliary Jordan products as well:

B² + U_B(B) + {B, B} + {B, B, B} ⊆ B,
I² + U_J(I) + U_I(J) + {J, I} + {J, J, I} + {J, I, J} ⊆ I.

Any ideal determines a quotient algebra J̄ = J/I, which is again a Jordan algebra.

(4) In addition, we have a new concept of "one-sided" ideal: an inner ideal of a Jordan algebra J is a Φ-submodule B closed under inner multiplication by the unital hull Ĵ: U_B(Ĵ) = B² + U_B(J) ⊆ B. In particular, inner ideals are always subalgebras, B • B ⊆ B. In the unital case, a Φ-submodule B is an inner ideal iff it is closed under inner multiplication U_B(J) ⊆ B.⁸

Exercise 1.9A (1) Show that the variety of Jordan algebras is closed under homomorphisms: if φ: J → A is a homomorphism of a Jordan algebra J into a linear algebra A, then the image φ(J) is a Jordan subalgebra of A. (2) Show that the variety of Jordan algebras is closed under quotients: if J is Jordan, so is any of its quotient algebras J/I. (3) Show that the variety of Jordan algebras is closed under direct products: if J_i are Jordan algebras so is their direct product Π_i J_i.
Exercise 1.9B Just as everyone should (as mentioned in the Survey) verify directly, once and only once in their life, anti-commutativity [x, y] = −[y, x] and the Jacobi identity [x, [y, z]] + [y, [z, x]] + [z, [x, y]] = 0 for [x, y] = xy − yx in associative algebras, showing that every associative algebra A gives rise to a Lie algebra A⁻ (cf. 1.1(3)) under the commutator product [x, y] = xy − yx, so should everyone show that A⁺ under the anti-commutator product x • y = ½(xy + yx) gives rise to a Jordan algebra, by verifying directly commutativity x • y = y • x and the Jordan identity (x² • y) • x = x² • (y • x). (No peeking at Example 3.2!!)
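A machine can at least spot-check on samples what the exercise asks you to prove in general; here is a sketch of our own on a few 2×2 rational matrices under x • y = ½(xy + yx) (evidence, not a substitute for the once-in-a-lifetime hand computation):

```python
from fractions import Fraction

def mul(a, b):
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def add(a, b):
    return tuple(tuple(a[i][j] + b[i][j] for j in range(2)) for i in range(2))

def smul(s, a):
    return tuple(tuple(s * a[i][j] for j in range(2)) for i in range(2))

def m(rows):
    return tuple(tuple(Fraction(v) for v in row) for row in rows)

def bullet(x, y):                  # the anti-commutator x . y = (xy + yx)/2
    return smul(Fraction(1, 2), add(mul(x, y), mul(y, x)))

sample = [m([[1, 2], [3, 4]]), m([[0, 1], [1, 0]]), m([[2, 5], [1, 3]])]
for x in sample:
    x2 = bullet(x, x)
    for y in sample:
        assert bullet(x, y) == bullet(y, x)                          # (JAx1)
        assert bullet(bullet(x2, y), x) == bullet(x2, bullet(y, x))  # (JAx2)
```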
Exercise 1.9C (1) Show that [x, y, z] = 0 holds in any linear algebra whenever x = e is a left unit (e • a = a for all elements a ∈ A). (2) The identity [x, y, x] = 0, i.e. (xy)x = x(yx), is called the flexible law; show that every commutative linear algebra is automatically flexible, i.e. that [x, y, z] = 0 holds whenever z = x.
⁸In the literature inner ideals are often defined as what we would call weak inner ideals, with only U_B(J) ⊆ B. We prefer to demand that inner ideals be subalgebras as well. Of course, the two definitions agree for unital algebras.
We obtain a scalar extension functor from the category of Jordan Φ-algebras to Jordan Ω-algebras, and a unitalization functor J → Ĵ from the category of linear Jordan Φ-algebras to unital Jordan Φ-algebras, and as usual these constructions commute.

Proposition 1.10 (Linearization). (1) Because we are assuming the scalar ½ lies in Φ, a Jordan algebra automatically satisfies the linearizations of the Jordan identity:

(JAx2′) [x², y, z] + 2[x • z, y, x] = 0,
(JAx2″) [x • z, y, w] + [z • w, y, x] + [w • x, y, z] = 0
for all elements x, y, z, w in J.

(2) Any Jordan algebra strictly satisfies the Jordan identities in the sense that all scalar extensions continue to satisfy them, and consequently every scalar extension of a Jordan algebra remains Jordan.

(3) Every Jordan algebra can be imbedded in a unital one: the unital hull Ĵ of a Jordan algebra J is again a Jordan algebra having J as an ideal.

Proof. (1) To linearize the Jordan identity (JAx2) of degree 3 in the variable x, we replace x by x + λz for an arbitrary scalar λ and subtract off the constant and λ³ terms: since (JAx2) holds for x = x, z, x + λz individually we must have

0 = [(x + λz)², y, x + λz] − [x², y, x] − λ³[z², y, z]
  = λ([x², y, z] + 2[x • z, y, x]) + λ²([z², y, x] + 2[z • x, y, z])
  = λ f(x; y; z) + λ² f(z; y; x)

for f(x; y; z) the left side of (JAx2′). This relation must hold for all values of the scalar λ. Setting λ = ±1 gives 0 = f(x; y; z) + f(z; y; x) and 0 = −f(x; y; z) + f(z; y; x), so that subtracting gives 0 = 2f(x; y; z), and hence the existence of ½ (or just the absence of 2-torsion) guarantees f(x; y; z) = 0 as in (JAx2′). The identity (JAx2′) is quadratic in x (linear in z, y), therefore automatically linearizes: replacing x by x = x, w, x + w and subtracting the pure x and pure w terms gives 0 = f(x + w; y; z) − f(x; y; z) − f(w; y; z) = 2([x • w, y, z] + [x • z, y, w] + [w • z, y, x]), so multiplying by ½ we get (JAx2″).

(2) It is easy to check that any scalar extension J_Ω of a Jordan algebra J remains Jordan; certainly it remains commutative as in (JAx1), and for general elements x̃ = Σᵢ ωᵢ ⊗ xᵢ, ỹ = Σⱼ ρⱼ ⊗ yⱼ of J̃ = J_Ω we have

[x̃², ỹ, x̃] = Σ_{j,i} ωᵢ³ρⱼ ⊗ [xᵢ², yⱼ, xᵢ]
  + Σ_{j, i≠k} ωᵢ²ωₖρⱼ ⊗ ([xᵢ², yⱼ, xₖ] + 2[xᵢ • xₖ, yⱼ, xᵢ])
  + Σ_{j, i,k,ℓ distinct} ωᵢωₖω_ℓρⱼ ⊗ ([xᵢ • xₖ, yⱼ, x_ℓ] + [xₖ • x_ℓ, yⱼ, xᵢ] + [x_ℓ • xᵢ, yⱼ, xₖ]),

which vanishes since each individual term vanishes by (JAx2), (JAx2′), (JAx2″) applied in J.

(3) Clearly J is an ideal in Ĵ as in Unital Hull Definition 1.5, and the product on Ĵ is commutative with unit; it satisfies the Jordan identity (JAx2) since for any elements x̂ = α1 ⊕ x, ŷ = β1 ⊕ y of the unital hull we have [x̂², ŷ, x̂] = [(α1 ⊕ x)², β1 ⊕ y, α1 ⊕ x] = [2αx + x², y, x] [the unit associates with everything] = 2α[x, y, x] + [x², y, x] = 0 + 0 = 0, because the original J satisfies (JAx2) and (JAx1), and as in Exercise 1.9C(2) any commutative algebra is flexible: [x, y, x] = (x • y) • x − x • (y • x) = x • (x • y) − x • (x • y) = 0.

Exercise 1.10* (1) Let u, v be two elements of a Φ-module M. Show that if λu + λ²v = 0 for all λ ∈ Φ, then u = v = 0 if there is at least one scalar λ₀ in Φ such that μ := λ₀ − λ₀² is cancellable from u (in the sense μu = 0 ⟹ u = 0). In particular, conclude this happens if Φ is a field with at least three elements, or any ring containing ½. (2) Letting a = f(x; y; z), b = f(z; y; x) as in the proof of Proposition 1.10, conclude that (JAx2) automatically implies (JAx2′). (3) Trivially, if a map g from one abelian group M to another N vanishes on M, so does its polarization g(u, v) := g(u + v) − g(u) − g(v), since it is a sum of values of g. Use this to verify that (JAx2′) always implies (JAx2″) over any Φ.
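The linearized identities (JAx2′) and (JAx2″) can likewise be spot-checked in the Jordan algebra M₂(ℚ)⁺; the sketch below is our own (evidence on samples, not the proof above):

```python
from fractions import Fraction

def mul(a, b):
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def add(a, b):
    return tuple(tuple(a[i][j] + b[i][j] for j in range(2)) for i in range(2))

def sub(a, b):
    return tuple(tuple(a[i][j] - b[i][j] for j in range(2)) for i in range(2))

def smul(s, a):
    return tuple(tuple(s * a[i][j] for j in range(2)) for i in range(2))

def m(rows):
    return tuple(tuple(Fraction(v) for v in row) for row in rows)

def bullet(x, y):
    return smul(Fraction(1, 2), add(mul(x, y), mul(y, x)))

def assoc(a, b, c):                # [a, b, c] = (a.b).c - a.(b.c)
    return sub(bullet(bullet(a, b), c), bullet(a, bullet(b, c)))

ZERO = m([[0, 0], [0, 0]])
x, y = m([[1, 2], [3, 4]]), m([[0, 1], [1, 5]])
z, w = m([[2, 0], [1, 1]]), m([[1, 1], [0, 2]])

f = add(assoc(bullet(x, x), y, z), smul(2, assoc(bullet(x, z), y, x)))
assert f == ZERO                                                   # (JAx2')
g = add(add(assoc(bullet(x, z), y, w), assoc(bullet(z, w), y, x)),
        assoc(bullet(w, x), y, z))
assert g == ZERO                                                   # (JAx2'')
```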
7. The Centroid
An important concept for arbitrary linear algebras is that of the centroid, the natural "scalar multiplications" for an algebra.

Definition 1.11 (Centroid). The centroid Γ(A) of a linear algebra A is the set of all linear transformations T ∈ End_Φ(A) which act as scalars with respect to multiplication,

(CT) T(xy) = T(x)y = xT(y) for all x, y ∈ A,

equivalently, which commute with all (left and right) multiplications,

(CT′) T L_x = L_x T, T R_y = R_y T for all x, y ∈ A,

or yet again which centralize the multiplication algebra Mult(A) = ⟨L_A ∪ R_A⟩ generated by all left and right multiplications,
(CT″) Γ(A) = Cent_{End(A)}(Mult(A)),

where for any subset S of End_Φ(A) the centralizer or commuting ring is

Cent_{End(A)}(S) := {T ∈ End_Φ(A) | TS = ST for all S ∈ S}.

We say a Φ-algebra is centroidal if Γ(A) = Φ, and is centroid-simple if it is simple and centroidal. Notice that if A is trivial (all products vanish) then L_x = R_y = 0 and Γ(A) = End_Φ(A) is highly noncommutative. In order for Γ to form a commutative ring of scalars, we need some mild nondegeneracy conditions on A.

Theorem 1.12 (Centroid). (1) The centroid is always a unital associative Φ-subalgebra of End_Φ(A). If A² = A, or if A⊥ = 0 (e.g. if A is simple or semiprime or unital), then Γ(A) is a unital commutative ring of scalars Ω, and A is in a natural way an Ω-algebra via ω · x = ω(x).

(2) If A is unital, the centroid is essentially the same as the center

Cent(A) := {c ∈ A | [c, A] = [c, A, A] = [A, c, A] = [A, A, c] = 0};

the center is just the centroidal action on 1, and the centroid is just the central multiplications:

Cent(A) = Γ(A)(1),  Γ(A) = L_{Cent(A)} = R_{Cent(A)};

the maps T → T(1) and c → L_c = R_c are inverse isomorphisms between the unital commutative rings Γ(A) and Cent(A).

(3) If A is simple, then Γ(A) is a field Ω, and A becomes a centroid-simple algebra over Ω. If A is merely prime (has no nonzero ideals I, J with I • J = 0), then Γ(A) is an integral domain acting faithfully on A.

Proof. (1) The centralizer Cent_{End(A)}(S) of any set S is a unital subalgebra of End_Φ(A) (closed under 1_A, λT, T₁ + T₂, T₁T₂), and Γ(A) is just the particular case where S is L_A ∪ R_A or Mult(A). The Hiding Trick

T₁T₂(xy) = T₁(T₂(x)y) = T₂(x)T₁(y) = T₂(xT₁(y)) = T₂T₁(xy)

shows that the kernel of [T₁, T₂] contains A² and its range is contained in A⊥,

[T₁, T₂](A • A) = [T₁, T₂](A) • A = A • [T₁, T₂](A) = 0,

so [T₁, T₂] = 0 on A if A² = A or A⊥ = 0.
(2) If A is unital then any centroidal T is L_c = R_c for central c = T(1): T(x) = T(1x) = T(1)x = L_c x, and similarly T(x) = T(x1) = xT(1) = R_c x, where c = T(1) is central since

[c, x] = [T(1), x] = T([1, x]) = 0,  [c, x, y] = [T(1), x, y] = T([1, x, y]) = 0,
similarly [x, c, y] = [x, y, c] = 0. Conversely, if c ∈ Cent(A) is central then T = L_c = R_c is centroidal by T(xy) = c(xy) = (cx)y = T(x)y and dually T(xy) = (xy)c = x(yc) = xT(y). The correspondences are inverses since T → T(1) → L_{T(1)} = T (note T(1)x = T(1x) = T(x)) and c → L_c → L_c(1) = c. These maps are actually homomorphisms of unital commutative associative rings since T₁T₂(1) = T₁T₂(1 · 1) = T₁(1)T₂(1) and L_{c₁c₂} = L_{c₁}L_{c₂}.

(3) If A is simple or prime then A⊥ = 0 and Γ is commutative by (1). The kernel Ker(T) and image Im(T) of a centroidal T are always ideals [invariant under any multiplication M ∈ Mult(A), since T(x) = 0 implies T(M(x)) = M(T(x)) = 0, and M(T(A)) = T(M(A)) ⊆ T(A)]. If T ≠ 0 then Ker(T) ≠ A and Im(T) ≠ 0, so when A is simple Ker(T) = 0 [T is injective] and Im(T) = A [T is surjective], and any nonzero T is bijective. Whenever a centroidal T has an inverse T⁻¹ ∈ End(A), this inverse is also centroidal, T⁻¹(xy) = T⁻¹(T(x′)y) = T⁻¹T(x′y) = x′y = T⁻¹(x)y and dually, so in a simple algebra the centroid is a field. If A is merely prime, Γ(A) is a domain, since if nonzero centroidal Tᵢ kill each other then their nonzero image ideals Im(Tᵢ) are orthogonal, T₁(A)T₂(A) = T₁T₂(A • A) = 0, a contradiction. Each nonzero T is faithful (injective) on A, since its kernel is orthogonal to its nonzero image: T(A) • Ker(T) = A • T(Ker(T)) = 0.

In the unital case centroid and center are pretty indistinguishable, and especially in the theory of finite-dimensional algebras one talks of central-simple algebras, but in general the centroid and centroid-simplicity are the natural concepts. Unital algebras always have a nonzero center Cent(A) ⊇ Φ1, but non-unital algebras (even simple ones) often have no center at all. For example, the algebra M_∞(Δ) of all ∞ × ∞ matrices having only a finite number of nonzero entries from an associative division algebra Δ is simple, with centroid Γ = Cent(Δ)1_A, but has no central elements: if the matrix C = Σ γᵢⱼEᵢⱼ were central then it would be diagonal with all diagonal entries equal, contrary to finiteness [γᵢⱼEᵢⱼ = EᵢᵢCEⱼⱼ = EᵢᵢEⱼⱼC = 0 if i ≠ j, and γᵢᵢEᵢᵢ = EᵢⱼEⱼᵢCEᵢᵢ = EᵢⱼCEⱼᵢEᵢᵢ = Eᵢⱼ(γⱼⱼEⱼⱼ)Eⱼᵢ = γⱼⱼEᵢⱼEⱼⱼEⱼᵢ = γⱼⱼEᵢᵢ].

Exercise 1.12*A Let M be an arbitrary left R-module for an associative algebra R (not necessarily unital or commutative). (1) Show that if S ⊆ End_Φ(M) is any set of Φ-linear transformations on M, then C_M(S) = {T ∈ End_Φ(M) | TS = ST for all S ∈ S} is a unital subalgebra of End_Φ(M) which is inverse-closed and quasi-inverse-closed: if T ∈ C_M(S) is invertible or quasi-invertible, then its inverse or quasi-inverse again belongs to C_M(S). (2) Prove Schur's Lemma: if M
has no proper S-invariant subspaces, then C_M(S) is a division algebra. Deduce the usual version of Schur's Lemma: if M is an irreducible R-module (RM ≠ 0 and M has no proper R-submodules) then C_M(R) is a division ring. (3) If A is a linear algebra and R = Mult(A), show C_M(R) = Γ(A), and A is simple iff M = A is an irreducible R-module.

Exercise 1.12*B Show that in the definition of center in 1.12 the condition [x, c, y] = 0 can be omitted: it follows from the other associator conditions in the presence of commutativity.
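The center computation of Theorem 1.12(2) can be probed in the smallest interesting case: in M₂ the center is exactly the scalar matrices. The following sketch of ours brute-forces all 2×2 integer matrices with entries in {−1, 0, 1} (a finiteness assumption made only to keep the search small):

```python
from itertools import product as grid

def mul(a, b):
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def commutes(c, x):
    return mul(c, x) == mul(x, c)

E12, E21 = ((0, 1), (0, 0)), ((0, 0), (1, 0))

# Commuting with E12 and E21 already forces a matrix to be scalar.
centrals = [((p, q), (r, s)) for p, q, r, s in grid((-1, 0, 1), repeat=4)
            if commutes(((p, q), (r, s)), E12)
            and commutes(((p, q), (r, s)), E21)]
assert all(c[0][1] == 0 and c[1][0] == 0 and c[0][0] == c[1][1]
           for c in centrals)
assert len(centrals) == 3     # exactly -I, 0, I from the search grid
```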
A Φ-algebra A is strictly simple if all its scalar extensions A_Ω remain simple (Ω a field).

Theorem 1.13 (Strict Simplicity). A simple linear Φ-algebra A is centroid-simple over a field Φ iff it is strictly simple.

Proof. If a simple A is NOT centroidal then Γ(A) = Ω > Φ, and the scalar extension Ã = A_Ω = Ω ⊗_Φ A is not simple because it has a non-injective homomorphism φ: Ã → A of Ω-algebras via φ(ω ⊗ a) := ω(a). Indeed, this does define a map on the tensor product by the universal property, since the expression is Φ-bilinear [φ(ωλ ⊗ a) = (ωλ)(a) = ω(λa) = φ(ω ⊗ λa)]. The map is Ω-linear since φ(ω′ · ω ⊗ a) = φ(ω′ω ⊗ a) = (ω′ω)(a) = ω′(ω(a)) = ω′ · φ(ω ⊗ a). It is an algebra homomorphism since φ((ω₁ ⊗ a₁)(ω₂ ⊗ a₂)) = φ(ω₁ω₂ ⊗ a₁a₂) = (ω₁ω₂)(a₁a₂) = (ω₁(a₁))(ω₂(a₂)) = φ(ω₁ ⊗ a₁)φ(ω₂ ⊗ a₂). Choosing a basis {ωᵢ} for Ω as a vector space of dimension > 1 over Φ, we know Ã = ⊕ᵢ ωᵢ ⊗ A, so by independence ω₁ ⊗ ω₂(a) − ω₂ ⊗ ω₁(a) is nonzero if a ≠ 0, yet it lies in the kernel of φ since ω₁(ω₂(a)) − ω₂(ω₁(a)) = 0.
AA 6= 0; we claim A e has no proper also simple. Certainly A e e ideals. It suÆces to show any nonzero I C A contains an \irreducible tensor" (1)
0 6= ! x 2 eI
(0 6= ! 2 ; 0 6= x 2 A)
since we can get from such an element via the multiplication algebra to the entire algebra. Indeed, simplicity of A guarantees (2)
x 6= 0 in A =) Mult(A)(x) = A:
Thus for any !e 2 ; a 2 A there is a multiplication M with M (x) = a; hence !e a = (!e ! 1 M )(! x0 ) 2 Mult(e(A))(eI) eI, so adding up e =
A e e: yields A I and thus e I = A
If we are willing to bring in the big gun, the Jacobson Density Theorem (cf. Appendix D.2), we can smash the obstacle (1) to smithereens. By simplicity Mult(A) acts irreducibly on A, with centralizer Γ(A) = Φ by centroid-simplicity, so if {x_σ}_{σ∈S} is a basis for A over Φ then by tensor product facts Ã = ⊕_{σ∈S} Ω ⊗ x_σ. If 0 ≠ Σᵢ₌₁ⁿ ωᵢ ⊗ xᵢ ∈ Ĩ is an element of minimal length n, by the Density Theorem there is M ∈ Mult(A) with M(x₁) = x₁, M(xᵢ) = 0 for i ≠ 1, so instantly (1 ⊗ M)(Σ ωᵢ ⊗ xᵢ) = Σ ωᵢ ⊗ M(xᵢ) = ω₁ ⊗ x₁ ≠ 0 lies in Ĩ as desired in (1).

We can also chip away at the obstacle (1) one term at a time, using elements of the centroid. The most efficient method is another minimal length argument. Let {ω_σ}_{σ∈S} be a basis for Ω over Φ, so by tensor product facts Ã = ⊕_σ ω_σ ⊗ A. Assume 0 ≠ Σ ωᵢ ⊗ xᵢ ∈ Ĩ is an expression of minimal length. This minimality guarantees a domino effect: if N ∈ Mult(A) kills x₁ it kills every xᵢ, since (1 ⊗ N)(Σ ωᵢ ⊗ xᵢ) = Σ ωᵢ ⊗ N(xᵢ) ∈ Ĩ is of shorter length if N(x₁) = 0, therefore by minimality it must be zero, which by independence of the ωᵢ implies each N(xᵢ) = 0. By (2) we can write any a ∈ A as a = M(x₁) for some M ∈ Mult(A); we use this M to define linear transformations Tᵢ by Tᵢ(a) = M(xᵢ). These are well-defined by dominos: a = M(x₁) = M′(x₁) ⟹ N = M − M′ has N(x₁) = 0, hence N(xᵢ) = 0 and M(xᵢ) = M′(xᵢ) is independent of the choice of the M which produces a. Then clearly the Tᵢ are linear [if M, M′ produce a, a′ then λM + λ′M′ produces λa + λ′a′ and Tᵢ(λa + λ′a′) = (λM + λ′M′)(xᵢ) = λTᵢ(a) + λ′Tᵢ(a′)] and commute with all multiplications P ∈ Mult(A) [PM produces P(a), so Tᵢ(P(a)) = (PM)(xᵢ) = P(M(xᵢ)) = P(Tᵢ(a))], so Tᵢ = λᵢ ∈ Γ(A) = Φ by centroid-simplicity. Here M = 1_A produces a = x₁, so Tᵢ(x₁) = Tᵢ(a) = M(xᵢ) = 1_A(xᵢ) = xᵢ, and λᵢx₁ = xᵢ is a scalar multiple of x₁, and the relation takes the form 0 ≠ Σ ωᵢ ⊗ λᵢx₁ = Σ ωᵢλᵢ ⊗ x₁ [all λᵢ lie in Φ and we are tensoring over Φ] = (Σ ωᵢλᵢ) ⊗ x₁ = ω ⊗ x₁ ∈ Ĩ as desired in (1).

Thus either way we have our irreducible tensor (1), which suffices to show Ĩ = Ã, and thus the simplicity of the scalar extension Ã = A_Ω.
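The non-centroidal direction of the theorem is visible in the classic example: ℂ is simple as an ℝ-algebra but its centroid is ℂ, not ℝ, and accordingly ℂ ⊗_ℝ ℂ is not simple: it has zero divisors. A sketch of our own, encoding elements in the basis 1⊗1, 1⊗i, i⊗1, i⊗i:

```python
# Basis of C (x)_R C: index 2*j + k encodes i^j (x) i^k, with j, k in {0, 1}.
def tmul(u, v):
    out = [0, 0, 0, 0]
    for j1 in range(2):
        for k1 in range(2):
            for j2 in range(2):
                for k2 in range(2):
                    c = u[2 * j1 + k1] * v[2 * j2 + k2]
                    sign, j, k = 1, j1 + j2, k1 + k2
                    if j == 2:            # i^2 = -1 in the left factor
                        j, sign = 0, -sign
                    if k == 2:            # i^2 = -1 in the right factor
                        k, sign = 0, -sign
                    out[2 * j + k] += sign * c
    return out

u = [1, 0, 0, 1]     # 1(x)1 + i(x)i
v = [1, 0, 0, -1]    # 1(x)1 - i(x)i
assert tmul(u, v) == [0, 0, 0, 0]   # zero divisors: C (x)_R C is not simple
```

The ideal generated by u is proper, exactly the kind of kernel produced by the map φ(ω ⊗ a) = ωa in the proof above.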
Exercise 1.13*. In unital algebras the minimum length argument is usually run through the center. Let $A$ be central simple over $\mathrm{Cent}(A)=\Phi$, and let $0\ne\sum\omega_i\otimes x_i\in\widetilde I$ be an expression of minimal length ($\{\omega_\sigma\}$ a basis for $\Omega$ over $\Phi$). Use (2) of Strict Simplicity Theorem 1.13 to get a multiplication $M\in\mathrm{Mult}(A)$ such that $M(x_1)=1$ (but no assumption about its effect on the other $x_i$). Apply $1\otimes M$ to the relation to normalize it so that $x_1=1$. Then apply $L_a-R_a,\ L_{ab}-L_aL_b,\ [L_a,R_b],\ R_{ab}-R_bR_a$ to the relation to conclude $[a,M(x_i)]=[a,b,M(x_i)]=[a,M(x_i),b]=[M(x_i),a,b]=0$ for all $a,b\in A$, and conclude each $M(x_i)=\gamma_i1\in\Phi1=\mathrm{Cent}(A)$. Conclude the relation was $0\ne\omega\otimes1\in\widetilde I$ as in (1) of 1.13, establishing strict simplicity.
8. Problems for Chapter 1

Problem 1*. I. L. Kantor could have said (but didn't) "There are no objects, there are only morphisms"; a category theorist could say (and has) "There are no objects, there are only arrows." Show that we can define a category to be a class $M$ of "abstract morphisms" or "arrows," together with a partial binary operation from a subclass of $M\times M$ to $M$ satisfying the 4 axioms: (1) (Partial associativity) if either of $(fg)h$ or $f(gh)$ is defined, then so is the other and they are equal; (2) (Identity elements) for each arrow $f$ there are identities $e_L,e_R$ so that $e_Lf=f=fe_R$ [where $e\in M$ is an identity if whenever $ef$ is defined it equals $f$, and whenever $ge$ is defined it equals $g$]; (3) (Composition) if $e$ is an identity and both $fe,eg$ are defined then so is $fg$; (4) if $e,e'$ are identities then the class $eMe'$ (the class of all $f=(ef)e'=e(fe')$ having $e$ as left unit and $e'$ as right unit) is actually a set. To make composition more aesthetic, consider the morphisms to be arrows $f$ so that the left (or head) identity represents $1_X$ for the codomain $X$, and the right (or tail) identity represents $1_Y$ for the domain $Y$, and the arrow belongs to $\mathrm{Mor}(X,Y)=\mathrm{Mor}(X\leftarrow Y)$. Show that categories in this arrow-theoretic sense are equivalent to categories in the object-morphism sense.
Problem 2. (1) Show that any variety of linear algebras, defined by a set of identities, is closed under homomorphisms, subalgebras, and direct products. (2) A celebrated theorem of Birkhoff says, conversely, that every class $\mathcal C$ of linear algebras (actually, algebraic systems in general) which is closed under homomorphisms, subalgebras, and direct products is a variety. Try your hand at proving this; the key is the existence of free algebras $F_{\mathcal C}[X]$ in the class for any set $X$, that the class consists precisely of the homomorphic images of free algebras, and hence that the class is defined by identities (the kernel of the canonical map $F_{\mathcal A}[X]\to F_{\mathcal C}[X]$ from the free linear algebra (or general algebraic system under consideration) to the free $\mathcal C$-algebra for any countably infinite set $X$).
Problem 3*. We noticed that if the algebra is already unital, unitalization introduces a supernumerary unit. In most applications any unital hull $\Phi1+A$ would do equally well; we only need the convenience of a unit element to express things in the original algebra more concisely. But in structural studies it is important to be more careful in the choice of hull; philosophically the tight hulls are the "correct" ones to choose. An extension $\widetilde A\supseteq A$ is a tight extension if all the nonzero ideals of the extension have nonzero intersection with the original algebra, i.e. there are no disjoint ideals $0\ne\widetilde I\lhd\widetilde A,\ \widetilde I\cap A=0$. (Of course, two subspaces can never be disjoint, they always share the zero element 0, but nevertheless by abuse of language we call them disjoint if they are nonzero and have nothing but this one lowly element in common.) (1) Show that the formal unital hull has the universal property that every homomorphism of $A$ into a unital algebra $B$ extends uniquely to a unital homomorphism $\widehat A\to B$. Show that the unital hulls $A^1=\Phi1+A$ of $A$ are, up to isomorphism, precisely all quotients $\widehat A/I'$ for disjoint ideals $I'$. (2) Show that there is a handy tightening process to remedy any looseness in a given extension, namely dividing out a maximal disjoint ideal: show that $\widehat A/I'$ is a tight extension iff $I'$ is a maximal disjoint ideal. (3) A linear algebra is robust if its annihilator $\mathrm{Ann}(A):=\{z\in A\mid zA=Az=0\}$ vanishes. Show any unital algebra, simple algebra, prime algebra, or semiprime algebra is robust. [An algebra is prime if it has no orthogonal ideals ($0\ne I,K\lhd A$ with $IK=0$), and semiprime if it has no self-orthogonal ideals ($0\ne I\lhd A$ with $II=0$).] (4) Show that in a robust algebra there is a unique maximal disjoint ideal of the formal unital hull, namely $M_0:=\mathrm{Ann}_{\widehat A}(A):=\{\hat z\in\widehat A\mid\hat zA=A\hat z=0\}$, and hence there is a unique tightening $\widetilde A$. Thus in the robust case we may speak without ambiguity of the tight unital hull $\widetilde A$.
Problem 4*. (1) If $A$ already has unit $e$ it is robust; show the maximal disjoint ideal is $M_0=\widehat\Phi(\hat1-e)$, and find its tight unital hull $\widetilde A$. (2) The algebra $A=2\mathbb Z$ of even integers is robust (even a domain). Find its tight unital hull.
Problem 5*. The tight unital hull is algebraically "closer" to the original algebra: it inherits many important properties. (1) Show that the formal unital hull $\widehat A$ of a simple algebra is never simple, and the tight unital hull $\widetilde A$ is simple only if the original algebra is already simple and unital. (2) Show that the tight unital hull of a prime algebra is prime, but the formal unital hull is prime iff $M_0=0$ and $\widehat A=\widetilde A$. Give a prime example of infinite matrices where this fails: $A=M_\infty(\Omega)+\Omega_01_\infty$ for $\Omega_0$ an ideal in an integral domain $\Omega$, with $M_0=\Phi(\hat1-1_\infty)$. (3) Show that the tight unital hull of a semiprime algebra is always semiprime. Show that if $\Phi$ acts faithfully on $A$ ($A^\perp:=\{\alpha\in\Phi\mid\alpha A=0\}$ vanishes, so nonzero scalars act nontrivially) and $A$ is semiprime, then its formal hull remains semiprime. Show this needn't hold if the action of $\Phi$ on $A$ is unfaithful. (4) Show that if $A$ is robust and $\Phi$ a field, then $M_0=0$ and the tight unitalization coincides with the formal one. (5) Show that tightening a robust algebra always induces faithful scalar action: if $\alpha\in A^\perp$ then $\alpha\widehat\Phi\hat1$ is an ideal of $\widehat A$ contained in all maximal $M_0$'s, so $\alpha\widetilde A=0$, so both $A$ and its tight hull $\widetilde A$ are algebras over the faithful ring of scalars $\Phi_0:=\Phi/A^\perp$.
Problem 6*. (1) Show that $M_{np}(\Phi)\cong M_n(\Phi)\otimes M_p(\Phi)$ as associative algebras, so there is always a homomorphism $M_n(\Phi)\to M_{nm}(\Phi)$ of unital associative algebras for any $\Phi$. (2) Show that if $\Phi$ is a field there is a homomorphism $M_n(\Phi)\to M_m(\Phi)$ of associative algebras iff $n\le m$, and there is a homomorphism $M_n(\Phi)\to M_m(\Phi)$ of unital associative algebras iff $n$ divides $m$. (3) Show that the construction of the tight unital hull is not functorial: if $\varphi:A\to A'$ is a non-unital homomorphism, there need not be any unital homomorphism $\widetilde A\to\widetilde{A'}$ (much less one extending $\varphi$).
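Parts (1)-(2) of the problem above can be explored by machine. The following Python/numpy sketch (the helper names `phi` and `psi` are mine, not the book's) checks that $A\mapsto A\otimes I_p$ (a Kronecker product) is a unital algebra homomorphism $M_n(\Phi)\to M_{np}(\Phi)$, and that zero-padding gives a non-unital embedding for $n\le m$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 2, 3

def phi(A, p):
    """Unital homomorphism M_n -> M_{np}: A |-> A (x) I_p."""
    return np.kron(A, np.eye(p, dtype=A.dtype))

def psi(A, m):
    """Non-unital embedding M_n -> M_m (n <= m): pad with a zero block."""
    n = A.shape[0]
    out = np.zeros((m, m), dtype=A.dtype)
    out[:n, :n] = A
    return out

A = rng.integers(-5, 5, (n, n))
B = rng.integers(-5, 5, (n, n))

# multiplicative and unital (mixed-product property of the Kronecker product):
assert np.array_equal(phi(A, p) @ phi(B, p), phi(A @ B, p))
assert np.array_equal(phi(np.eye(n, dtype=int), p), np.eye(n * p, dtype=int))

# multiplicative but NOT unital: psi(I_n) is a proper idempotent, not I_m.
m = 5
assert np.array_equal(psi(A, m) @ psi(B, m), psi(A @ B, m))
assert not np.array_equal(psi(np.eye(n, dtype=int), m), np.eye(m, dtype=int))
```

The unital map exists for target size $nm$ precisely because $M_{nm}(\Phi)\cong M_n(\Phi)\otimes M_m(\Phi)$, which is what the Kronecker product encodes.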
Question 1*. Define the unital hull to be $A$ itself if $A$ is already unital, and the formal unital hull $\widehat A$ otherwise. Does this yield a functor from algebras to unital algebras? Can you see any advantages or disadvantages to adopting this intermediate definition (between the formal and the tight extension)?

Question 2*. Under what conditions is the scalar extension $B_\Omega$ of a Boolean algebra $B$ (one with $x^2=x$ for all $x$) again Boolean?

Question 3*. We defined in 1.9(4) the concept of an inner ideal in a Jordan algebra. What would an outer ideal be? Why didn't we at least mention this concept? Give interesting examples of outer ideals in the quadratic Jordan algebras $H_n(\mathbb Z)$ and $H_n(\Omega)$ for an imperfect field $\Omega^2<\Omega$ of characteristic 2. These are precisely the new "wrinkles" which create some modifications of the usual simple algebras for quadratic Jordan algebras. Since we are sticking to the linear theory, we won't mention outer ideals again.
CHAPTER 2
The Category of Alternative Algebras

The natural coordinates for Jordan algebras are alternative algebras with a nuclear involution. We will develop the basic facts about the variety of alternative algebras in general, and will construct in detail the most important example: the 8-dimensional octonion algebras, composition algebras obtained by gluing together two copies of a quaternion algebra in a twisted way by the Cayley-Dickson Construction. Octonion algebras with their standard involution coordinatize the simple exceptional Albert algebras. We will prove the celebrated Hurwitz Theorem, that composition algebras exist only in dimensions 1, 2, 4, or 8.
1. The Category of Alternative Algebras

Since alternative algebras are second cousins to associative algebras, we write the product by mere juxtaposition. Recall the definitions described in II Chapter 2.6.2.

Definition 2.1 (Category of Alternative Algebras). A linear algebra $D$ is alternative if it satisfies the left and right alternative laws
$$x^2y=x(xy)\ \text{(left alternative law)},\qquad yx^2=(yx)x\ \text{(right alternative law)}$$
for all $x,y$ in $D$; in terms of associators this means $[x,x,y]=0,\ [y,x,x]=0$, while in terms of operators the left and right alternative laws become $L_{x^2}=L_x^2,\ R_{x^2}=R_x^2$. The category of alternative algebras consists of all alternative algebras and ordinary homomorphisms.

The defining identities are quadratic in $x$, so they automatically linearize and all scalar extensions of alternative algebras are again alternative: the scalar extension functor stays within the category of alternative algebras. We have a corresponding category of unital alternative algebras, and a unitalization functor: any extension $\widehat D=\Phi\hat1+D$ of an alternative algebra $D$ will again be alternative, since for any two of its elements $\tilde x,\tilde y$
we have $[\tilde x,\tilde x,\tilde y]=[\alpha\hat1+x,\,\alpha\hat1+x,\,\beta\hat1+y]=[x,x,y]$ ($\hat1$ is nuclear) $=0$ ($D$ is alternative), and dually $[\tilde y,\tilde x,\tilde x]=[y,x,x]=0$, so $\widehat D$ is alternative.

We will often write $D$ for alternative algebras to distinguish them from mere linear algebras $A$. Associative algebras are obviously alternative, and rather surprisingly it turns out that nice alternative algebras come in only two basic flavors: associative or octonion.¹
Exercise 2.1. Linearize the alternative laws in $x$ to see that $[x_1,x_2,x_3]$ is skew in its first and its last two variables; since the transpositions (12) and (23) generate the symmetric group $S_3$, conclude $[x_{\pi(1)},x_{\pi(2)},x_{\pi(3)}]=(-1)^\pi[x_1,x_2,x_3]$ for any permutation $\pi$ of $\{1,2,3\}$, so the associator is a skew-symmetric trilinear function. Show it is alternating (vanishes if any two of its arguments are equal), in particular is flexible.
2. Nuclear Involutions

We recall the concepts of nucleus and center in general linear algebras of II Chapter 2.2. Properties of the nucleus and center in simple alternative algebras are the main key to their classification.

Definition 2.2 (Nucleus). The nucleus of a linear algebra is the gregarious part of the algebra, the part that associates with everyone, consisting of the elements associating in all possible ways with all other elements:
$$\mathrm{Nuc}(A):=\{n\in A\mid[n,A,A]=[A,n,A]=[A,A,n]=0\}.$$
For alternative algebras this reduces to $\mathrm{Nuc}(D)=\{n\in D\mid[n,D,D]=0\}$ ($D$ alternative).

Definition 2.3 (Center). The center of any linear algebra is the "scalar" part of the algebra which both commutes and associates with everyone, i.e. those nuclear elements commuting with all other elements:
$$\mathrm{Cent}(A):=\{c\in\mathrm{Nuc}(A)\mid[c,A]=0\}.$$
For alternative algebras this reduces to $\mathrm{Cent}(D)=\{c\in D\mid[c,D]=[c,D,D]=0\}$ ($D$ alternative).

¹The celebrated Bruck-Kleinfeld Theorem of 1950 proved that the only alternative division algebras which were not associative were octonion algebras, and in 1953 Erwin Kleinfeld proved that all simple alternative algebras were associative or octonion. This was one of the first algebraic structure theories which required no finiteness conditions; much later Efim Zel'manov would provide Jordan algebras with such a theory, but even today there is no general classification of all simple associative algebras.
A unital linear algebra can always be considered as an algebra over its center $\Gamma:=\mathrm{Cent}(A)\supseteq\Phi1$. A unital $\Phi$-algebra is central if its center is precisely the scalar multiples $\Phi1$. Central-simple algebras (those that are both central and simple) are the central building-blocks of finite-dimensional structure theory. In the category of $*$-algebras, the natural notion of center is the $*$-center, defined to be the set of central elements fixed by $*$: $\mathrm{Cent}(A,*):=\{c\in\mathrm{Cent}(A)\mid c^*=c\}$.

Exercise 2.3. (1) Show that $\mathrm{Nuc}(A)$ is always an associative subalgebra of $A$, and $\mathrm{Cent}(A)$ a commutative associative subalgebra of $A$ (containing the unit if $A$ has one); show that both are invariant (as sets, not necessarily pointwise) under all automorphisms and involutions of $A$. (2) If $A$ has unit, show that $\alpha1$ lies in the $*$-center for all $\alpha\in\Phi$. (3) Show that if $A$ is unital then its center $\Gamma=\mathrm{Cent}(A)$ is a ring of scalars, and $A$ can be considered as a linear algebra over $\Gamma$. (4) Show that if $A$ is a simple unital linear algebra, then its center is a field.
We will return in Chapter 21 for more detailed information about the alternative nucleus, in order to pin down in the Herstein-Kleinfeld-Osborn Theorem the particular alternative algebras which coordinatize Jordan algebras with capacity. To coordinatize Jordan algebras in general we need alternative algebras with involutions which are suitably "associative," namely nuclear.

Definition 2.4 (Nuclear Involutions). A nuclear or central or scalar involution on $A$ is an involution whose self-adjoint elements lie in the nucleus, the center, or the scalar multiples of 1, respectively:
$$H(A,*)\subseteq\mathrm{Nuc}(A)\ (*\text{ nuclear});\qquad H(A,*)\subseteq\mathrm{Cent}(A)\ (*\text{ central});\qquad H(A,*)\subseteq\Phi1\ (*\text{ scalar}).$$
For example, all involutions of an associative algebra are nuclear, and the standard involution on a quaternion or octonion algebra is scalar.² Octonion algebras are constructed by doubling quaternion algebras, and their standard involution is again scalar. Octonion algebras with standard involution give rise to Albert algebras.

²Indeed, as we shall see, having a scalar involution is essentially equivalent to being a composition algebra.
3. Composition Algebras

The path leading up to the octonions had a long history, which we sketched in II Chapter 2.7-8. The final stage in our modern understanding of the structure of octonions was Jacobson's 1958 proof of the Hurwitz Theorem, showing how the Cayley-Dickson doubling process is genetically programmed into composition algebras, providing an internal bootstrap operation which inexorably builds up from the 1-dimensional scalars, to a 2-dimensional quadratic extension, to a 4-dimensional quaternion algebra, and finally to an 8-dimensional octonion algebra, at which point the process stops: the construction cannot go beyond the octonions and still permit composition. We begin by formalizing, over an arbitrary ring of scalars, the concept of composition algebra discussed in II Chapter 2.8.

Definition 2.5 (Forms Permitting Composition). (1) If $Q:C\to\Phi$ is a quadratic form on a unital linear $\Phi$-algebra $C$, the form is said to permit composition if it is unital and satisfies the composition law
$$Q(1)=1,\qquad Q(xy)=Q(x)Q(y)$$
for all elements $x,y\in C$. We refer to $Q$ as the norm, and the trace $T$ is defined by $T(x):=Q(x,1)$; the standard involution is the linear map $\bar x:=T(x)1-x$. (2) A quadratic form $Q$ is nondegenerate if the polarized bilinear form $Q(x,y):=Q(x+y)-Q(x)-Q(y)$ is nondegenerate in the sense that it has zero radical³
$$\mathrm{Rad}(Q):=\{z\in C\mid Q(z,C)=0\}.$$
A composition algebra is a unital linear algebra which carries a nondegenerate quadratic form $Q$ permitting composition.

In the presence of nondegeneracy, the Composition Law has serious algebraic consequences.

Proposition 2.6 (Composition Algebra). (1) Any composition algebra $C$ satisfies the left and right adjoint formulas
$$Q(xy,z)=Q(y,\bar xz),\qquad Q(yx,z)=Q(y,z\bar x)$$

³As remarked in II Chapter 2.8, for general quadratic forms the radical is $\{z\mid Q(z)=Q(z,C)=0\}$, but here we can omit $Q(z)=\frac12Q(z,z)$ by means of the scalar $\frac12$.
(which say, in the language of adjoints with respect to a bilinear form, that $L_x^*=L_{\bar x},\ R_x^*=R_{\bar x}$), and the left and right Kirmse identities
$$\bar x(xy)=x(\bar xy)=Q(x)y,\qquad (yx)\bar x=(y\bar x)x=Q(x)y$$
(which in operator terms say $L_{\bar x}L_x=L_xL_{\bar x}=Q(x)1_C=R_xR_{\bar x}=R_{\bar x}R_x$).
(2) A composition algebra is an alternative algebra satisfying the degree 2 identity
$$x^2-T(x)x+Q(x)1=0$$
for all elements $x$.
(3) The standard involution is an isometric algebra involution:
$$\bar{\bar x}=x,\quad \bar1=1,\quad \overline{xy}=\bar y\,\bar x,\quad T(\bar x)=T(x),\quad Q(\bar x)=Q(x),\quad Q(x,y)=T(x\bar y)=T(\bar xy).$$

proof. (3a) Since $T(1)=Q(1,1)=2Q(1)=2$ [from Unitality 2.5(1)], we see immediately from the standard involution definition 2.5(1) that $\bar1=T(1)1-1=1$; $T(\bar x)=T(x)T(1)-T(x)=T(x)$; $Q(\bar x)=T(x)^2Q(1)-T(x)Q(x,1)+Q(x)=Q(x)$ [by definition of $T$]; and therefore $\bar{\bar x}=T(\bar x)1-\bar x=T(x)1-\bar x=x$. So for any unital quadratic form the standard involution is involutory and preserves traces and norms:
$$\bar1=1,\quad T(\bar x)=T(x),\quad Q(\bar x)=Q(x),\quad \bar{\bar x}=x.$$
The remaining multiplicative properties in (1)-(3) work only for forms permitting composition.
(1) We linearize $y\to y,z$ and $x\to x,1$ in the composition law $Q(xy)=Q(x)Q(y)$ to obtain first $Q(xy,xz)=Q(x)Q(y,z)$, then $Q(xy,z)+Q(y,xz)=T(x)Q(y,z)$, so $Q(xy,z)=Q(y,T(x)z-xz)=Q(y,\bar xz)$, giving the left adjoint formula. Right adjoint follows by duality (or from the involution, once we know it is an algebra anti-automorphism in (3b)). The first part of left Kirmse follows from nondegeneracy and the fact that $Q(z,[\bar x(xy)-Q(x)y])=Q(xz,xy)-Q(x)Q(z,y)=0$ for all $z$, by left adjoint and the linearized composition law; the second follows by replacing $x$ by $\bar x$ (since the standard involution is involutory and isometric by (3a) above), and dually for right Kirmse.
(2) Applying either Kirmse identity to $y=1$ yields the degree 2 identity: $x^2-T(x)x=[x-T(x)1]x=-\bar xx=-Q(x)1$. Then in operator form left Kirmse is equivalent to left alternativity: $L_{\bar x}L_x-Q(x)1_C=L_{T(x)1-x}L_x-Q(x)L_1=L_{T(x)x-Q(x)1}-L_x^2=L_{x^2}-L_x^2$. Dually for right alternativity.
(3b) The adjoint formulas and nondegeneracy imply the standard involution is an algebra involution, since for all $z$ we use a Hiding Trick
to see $Q(z,[\overline{xy}-\bar y\bar x])=Q((xy)z,1)-Q(zx,\bar y)$ [by left adjoint for $xy$ and right adjoint for $x$] $=Q(xy,\bar z)-Q(x,\bar z\bar y)$ [by right and left adjoint for $z$] $=0$ [by right adjoint for $y$].

The easiest composition algebras to understand are the "split" ones, described in II Chapter 2.9. As the Barbie doll said, "Gee, octonion algebra is hard!", so we will go into some detail about a concrete and elementary approach to the split octonion algebra, representing it as the Zorn vector-matrix algebra of $2\times2$ matrices whose diagonal entries are scalars in $\Phi$ and whose off-diagonal entries are row vectors in $\Phi^3$ (note that the dimensions add up: $1+3+3+1=8$).

Example 2.7 (Split Composition). The split composition algebras of dimensions 1, 2, 4, 8 over a ring of scalars $\Phi$ are the following algebras with norm, trace, and involution:
(I) split unarions $U(\Phi):=\Phi$; $N(\alpha):=\alpha^2$, $T(\alpha)=2\alpha$, $\bar\alpha=\alpha$;
(II) split binarions $B(\Phi):=\Phi\oplus\Phi$; $N(\alpha\oplus\beta):=\alpha\beta$, $T(\alpha\oplus\beta)=\alpha+\beta$, $\overline{\alpha\oplus\beta}:=\beta\oplus\alpha$ (a direct sum of scalars with the exchange involution $(\alpha,\beta)\mapsto(\beta,\alpha)$);
(III) split quaternions $Q(\Phi):=M_2(\Phi)$; $N(a):=\det(a)$,
$T(a)=\mathrm{tr}(a)$, and $\bar a=\begin{pmatrix}\delta&-\beta\\ -\gamma&\alpha\end{pmatrix}$ for $a=\begin{pmatrix}\alpha&\beta\\ \gamma&\delta\end{pmatrix}$ (the algebra of $2\times2$ matrices with determinant norm and symplectic involution $\bar a=s\,a^{tr}s^{-1}$ for $s=\begin{pmatrix}0&1\\ -1&0\end{pmatrix}$);
(IV) split octonions $O(\Phi)$, or the Zorn vector-matrix algebra of all $2\times2$ matrices with scalar entries on the diagonal and vector entries off the diagonal:
$$\mathcal{Z}orn(\Phi):=\Big\{A=\begin{pmatrix}\alpha&\vec x\\ \vec y&\beta\end{pmatrix}\ \Big|\ \alpha,\beta\in\Phi,\ \vec x,\vec y\in\Phi^3\Big\},$$
with norm and involution
$$N(A):=\alpha\beta-\vec x\cdot\vec y,\qquad \bar A:=\begin{pmatrix}\beta&-\vec x\\ -\vec y&\alpha\end{pmatrix},$$
and the product
$$A_1A_2=\begin{pmatrix}\alpha_1\alpha_2+\vec x_1\cdot\vec y_2 & \alpha_1\vec x_2+\vec x_1\beta_2-\vec y_1\times\vec y_2\\ \vec y_1\alpha_2+\beta_1\vec y_2+\vec x_1\times\vec x_2 & \beta_1\beta_2+\vec y_1\cdot\vec x_2\end{pmatrix}$$
in terms of the usual dot product $\vec x\cdot\vec y=\xi_1\eta_1+\xi_2\eta_2+\xi_3\eta_3$ and the usual cross product $\vec x\times\vec y=(\xi_2\eta_3-\xi_3\eta_2,\ \xi_3\eta_1-\xi_1\eta_3,\ \xi_1\eta_2-\xi_2\eta_1)$, with the usual module operations on row vectors $\vec x=(\xi_1,\xi_2,\xi_3),\ \vec y=(\eta_1,\eta_2,\eta_3)$ in $\Phi^3$.
Most of the Zorn vector-matrix product is just what one would expect from formally multiplying such matrices, keeping in mind that the diagonal entries must be scalars (hence the dot product $\vec x_1\cdot\vec y_2$ in the 11-entry). We somewhat artificially allow scalar multiplication from the right as well as the left, $\vec x\beta:=(\xi_1\beta,\xi_2\beta,\xi_3\beta)=\beta\vec x$, to show more clearly the products with terms from $A_1$ on the left, $A_2$ on the right. What is not expected is the cross-product term $\vec x_1\times\vec x_2$ in the 21-entry. From the usual associative matrix product we would expect $\begin{pmatrix}0&\vec x_1\\ 0&0\end{pmatrix}\begin{pmatrix}0&\vec x_2\\ 0&0\end{pmatrix}$ to vanish, but instead up pops $\vec x_1\times\vec x_2$ in the opposite corner (and the 12-entry has a minus sign, to make things even more complicated!). Recall that the cross product is defined ONLY on 3-dimensional space, which explains why this construction works only in dimension 8: the restriction of cross products to dimension 3 parallels the restriction of non-associative composition algebras to dimension 8.

It is clear in cases (I)-(III) that the split algebra is associative and $N$ permits composition. In the concrete representation (IV), it is easy to verify the alternative laws using nothing more than freshman vector analysis: using the rules
$$\vec x\cdot\vec y=\vec y\cdot\vec x,\qquad \vec x\times\vec y=-\vec y\times\vec x,\qquad \vec x\times(\vec y\times\vec z)=(\vec x\cdot\vec z)\vec y-(\vec x\cdot\vec y)\vec z,$$
the associator of three elements $A_i=\begin{pmatrix}\alpha_i&\vec x_i\\ \vec y_i&\beta_i\end{pmatrix}$ turns out to be
$$[A_1,A_2,A_3]=\begin{pmatrix}\alpha&\vec x\\ \vec y&\beta\end{pmatrix}$$
for
$$\alpha(A_1,A_2,A_3):=-\vec x_1\cdot(\vec x_2\times\vec x_3)-\vec y_3\cdot(\vec y_1\times\vec y_2),$$
$$\vec x(A_1,A_2,A_3):=\sum_{cyclic}(\vec x_i\cdot\vec y_j-\vec y_i\cdot\vec x_j)\vec x_k+\sum_{cyclic}(\alpha_i-\beta_i)\,\vec y_j\times\vec y_k.$$
Alternativity of the algebra reduces to alternativity of these 4 coordinate functions. Here $\alpha$ is an alternating function of $A_1,A_2,A_3$ since the vector triple product
$$\vec z\cdot(\vec z'\times\vec z'')=\det\begin{pmatrix}\vec z\\ \vec z'\\ \vec z''\end{pmatrix}=\det\begin{pmatrix}\zeta_1&\zeta_2&\zeta_3\\ \zeta_1'&\zeta_2'&\zeta_3'\\ \zeta_1''&\zeta_2''&\zeta_3''\end{pmatrix}$$
is certainly an alternating function of its rows, and $\vec x$ is an alternating function of $A_1,A_2,A_3$ since its two sums are of the forms
$$(1)\qquad \sum_{cyclic}(\vec z_i\cdot\vec w_j-\vec z_j\cdot\vec w_i)\vec z_k=(\vec z_1\cdot\vec w_2-\vec z_2\cdot\vec w_1)\vec z_3+(\vec z_2\cdot\vec w_3-\vec z_3\cdot\vec w_2)\vec z_1+(\vec z_3\cdot\vec w_1-\vec z_1\cdot\vec w_3)\vec z_2$$
and
$$(2)\qquad \sum_{cyclic}\gamma_i\,\vec z_j\times\vec z_k=\gamma_1\,\vec z_2\times\vec z_3+\gamma_2\,\vec z_3\times\vec z_1+\gamma_3\,\vec z_1\times\vec z_2,$$
which are alternating functions of the pairs $(\vec z_i,\vec w_i)$ and $(\gamma_i,\vec z_i)$ respectively because, for example, if $(\vec z_1,\vec w_1)=(\vec z_2,\vec w_2)$ then (1) becomes $(\vec z_1\cdot\vec w_1-\vec z_1\cdot\vec w_1)\vec z_3+(\vec z_1\cdot\vec w_3-\vec z_3\cdot\vec w_1)\vec z_1+(\vec z_3\cdot\vec w_1-\vec z_1\cdot\vec w_3)\vec z_1=\vec0$, and if $(\gamma_1,\vec z_1)=(\gamma_2,\vec z_2)$ then (2) becomes $\gamma_1(\vec z_1\times\vec z_3+\vec z_3\times\vec z_1)+\gamma_3(\vec z_1\times\vec z_1)=\vec0$ because $\vec z\times\vec w$ is alternating. An analogous computation shows $\beta,\vec y$ are alternating functions of $A_1,A_2,A_3$, so $[A_1,A_2,A_3]$ alternates.

We can also show this using "symmetry" in the indices 1, 2: we have two algebra involutions on $\mathcal{Z}orn(\Phi)$, the usual transpose involution given by the diagonal flip and the standard involution given by an anti-diagonal flip,
$$A^{tr}:=\begin{pmatrix}\alpha&\vec y\\ \vec x&\beta\end{pmatrix},\qquad \bar A:=\begin{pmatrix}\beta&-\vec x\\ -\vec y&\alpha\end{pmatrix}.$$
The former interchanges the 12 and 21 entries, the latter the 11 and 22 entries, and any involution $\tau$ has $\tau([A_1,A_2,A_3])=-[\tau(A_3),\tau(A_2),\tau(A_1)]$, so
$$\beta(A_1,A_2,A_3)=-\alpha(\bar A_3,\bar A_2,\bar A_1),\qquad \vec y(A_1,A_2,A_3)=-\vec x(A_3^{tr},A_2^{tr},A_1^{tr})$$
are also alternating.

Thus $\mathcal{Z}orn(\Phi)$ is alternative, but it is not associative, since the associator
$$\Big[\begin{pmatrix}0&\vec x\\ 0&0\end{pmatrix},\begin{pmatrix}0&\vec y\\ 0&0\end{pmatrix},\begin{pmatrix}0&0\\ 0&1\end{pmatrix}\Big]=-\begin{pmatrix}0&0\\ \vec x\times\vec y&0\end{pmatrix}$$
doesn't vanish identically (for $\vec x=\vec e_1,\ \vec y=\vec e_2$ we get $\vec x\times\vec y=\vec e_3$). The Cayley-Dickson construction in the next section builds an octonion algebra from a distinguished quaternion subalgebra; in the Zorn matrix representation there are 3 separate split quaternion subalgebras $H_i=\begin{pmatrix}\Phi&\Phi\vec e_i\\ \Phi\vec e_i&\Phi\end{pmatrix}$, sharing a common diagonal. Notice that in this representation the norm and trace are the "natural" ones for these $2\times2$ matrices,
$$A\bar A=(\alpha\beta-\vec x\cdot\vec y)\begin{pmatrix}1&0\\ 0&1\end{pmatrix}=N(A)1,\qquad A+\bar A=(\alpha+\beta)\begin{pmatrix}1&0\\ 0&1\end{pmatrix}=T(A)1.$$
We can also prove the composition property $N(AB)=N(A)N(B)$ of the norm directly from vector analysis facts, or we can deduce it from alternativity and $N(A)1=A\bar A$ (see the exercises below).
Exercise 2.7A. Establish the vector formula $(\vec x_1\cdot\vec y_2)(\vec y_1\cdot\vec x_2)+(\vec x_1\times\vec x_2)\cdot(\vec y_1\times\vec y_2)=(\vec x_1\cdot\vec y_1)(\vec x_2\cdot\vec y_2)$ for any 4 vectors in $\Phi^3$, and use this to compute norm composition $N(A_1A_2)=N(A_1)N(A_2)$ in the Zorn vector-matrix algebra.
Exercise 2.7B. In the Zorn vector-matrix algebra: (1) show directly that $\bar A=T(A)1-A$; (2) $A^2-T(A)A+N(A)1=0$; (3) $A\bar A=N(A)1$; (4) deduce (3) directly from (1) and (2). (5) Use (1) and alternativity of the Zorn algebra to show $(AB)\bar B=A(B\bar B)$ and $(AB)(\bar B\bar A)=\big(A(B\bar B)\big)\bar A$, then use (3) and (2) to deduce $N(AB)1=N(A)N(B)1$.
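Since everything in the Zorn vector-matrix algebra is concrete arithmetic, the claims of Example 2.7(IV) and the two exercises above can be spot-checked by machine over the integers. The following is a minimal Python sketch (the tuple encoding and helper names `mul`, `conj`, `norm` are mine, not the book's):

```python
import numpy as np

# Zorn vector matrices as 4-tuples (alpha, x, y, beta) with x, y in Z^3.
def mul(A, B):
    a1, x1, y1, b1 = A
    a2, x2, y2, b2 = B
    return (a1 * a2 + x1 @ y2,                      # 11-entry: dot product
            a1 * x2 + b2 * x1 - np.cross(y1, y2),   # 12-entry: note the minus
            a2 * y1 + b1 * y2 + np.cross(x1, x2),   # 21-entry: surprise cross term
            b1 * b2 + y1 @ x2)                      # 22-entry

def conj(A):                                        # anti-diagonal flip
    a, x, y, b = A
    return (b, -x, -y, a)

def norm(A):
    a, x, y, b = A
    return a * b - x @ y

def eq(A, B):
    return all(np.array_equal(np.asarray(u), np.asarray(v)) for u, v in zip(A, B))

z3 = np.zeros(3, dtype=int)

def rand(rng):
    return (int(rng.integers(-4, 5)), rng.integers(-4, 5, 3),
            rng.integers(-4, 5, 3), int(rng.integers(-4, 5)))

rng = np.random.default_rng(0)
for _ in range(100):
    A, B = rand(rng), rand(rng)
    assert norm(mul(A, B)) == norm(A) * norm(B)              # composition law
    assert eq(mul(mul(A, A), B), mul(A, mul(A, B)))          # left alternative law
    assert eq(mul(B, mul(A, A)), mul(mul(B, A), A))          # right alternative law
    assert eq(mul(A, conj(A)), (norm(A), z3, z3, norm(A)))   # A * bar(A) = N(A) 1

# ... but the algebra is not associative:
e1, e2 = np.array([1, 0, 0]), np.array([0, 1, 0])
A1, A2, C = (0, e1, z3, 0), (0, e2, z3, 0), (0, z3, z3, 1)
assert not eq(mul(mul(A1, A2), C), mul(A1, mul(A2, C)))
```

The final check is exactly the text's example: the cross-product term $\vec e_1\times\vec e_2=\vec e_3$ pops up in the wrong corner and kills associativity.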
Over a field, a composition algebra whose norm splits the slightest bit (admits at least one nontrivial isotropic vector) splits completely into one of the above algebras, yielding a unique split composition algebra of each dimension over a given $\Phi$. The constructions $\Phi\mapsto B(\Phi),\ Q(\Phi),\ O(\Phi)$ are functors from the category of scalar rings to the category of composition algebras.
4. The Cayley-Dickson Construction

The Cayley-Dickson construction starts with an arbitrary unital algebra $A$ with involution, and doubles it to get a larger and more complicated unital algebra with involution. We will see that, as mentioned in II Chapter 2.5.1, all composition algebras can be obtained by iterating this doubling process.

Theorem 2.8 (Cayley-Dickson Construction $KD(A,\mu)$). (1) We start from a unital linear algebra $A$ with involution $a\mapsto\bar a$ over an arbitrary ring of scalars $\Phi$, and choose an invertible scalar $\mu\in\Phi$. We form the direct sum of two copies of $A$, consisting of ordered pairs $(a,b)$ for $a,b\in A$. We define a new product and involution on this module by
$$(a_1,b_1)\cdot(a_2,b_2)=\big(a_1a_2+\mu\bar b_2b_1,\ b_2a_1+b_1\bar a_2\big),\qquad \overline{(a,b)}:=(\bar a,-b).$$
If we set $m:=(0,1)$ we can identify $(a,0)$ with the original element $a$ in $A$, and $(0,b)=(b,0)(0,1)$ with $bm\in Am$, so $(a,b)=a+bm$. In this formulation we have an algebra
$$KD(A,\mu):=A\oplus Am,$$
$$(a_1\oplus b_1m)\cdot(a_2\oplus b_2m)=\big(a_1a_2+\mu\bar b_2b_1\big)\oplus\big(b_2a_1+b_1\bar a_2\big)m,\qquad \overline{(a\oplus bm)}:=\bar a\oplus(-b)m.$$
(2) We have the KD Scalar Involution Property: if the original involution on $A$ is a scalar involution with $a\bar a=n(a)1$ for a quadratic norm form $n$, then the algebra $KD(A,\mu)$ will again have a scalar involution with new norm and trace
$$N(a\oplus bm):=n(a)-\mu n(b),\qquad T(a\oplus bm):=t(a).$$
proof. (1) The new algebra is clearly unital with new unit $1=(1,0)$, and the bar clearly is a linear map of period 2; it is an anti-homomorphism of algebras since
$$\overline{(a_1,b_1)(a_2,b_2)}=\big(\overline{a_1a_2+\mu\bar b_2b_1},\ -(b_2a_1+b_1\bar a_2)\big)=\big(\bar a_2\bar a_1+\mu\bar b_1b_2,\ -b_1\bar a_2-b_2a_1\big)=(\bar a_2,-b_2)(\bar a_1,-b_1)=\overline{(a_2,b_2)}\ \overline{(a_1,b_1)}.$$
(2) The scalar involution on $A$ has linear trace form $t$ with $a+\bar a=n(a,1)1=t(a)1$, so that $\bar a=t(a)1-a$ commutes with $a$, and we have $\bar aa=n(a)1$ too. Using the Cayley-Dickson multiplication rule, we compute
$$(a\oplus bm)\overline{(a\oplus bm)}=(a\oplus bm)\big(\bar a\oplus(-b)m\big)=\big(a\bar a-\mu\bar bb\big)\oplus\big(-ba+ba\big)m=\big(n(a)-\mu n(b)\big)1.$$
Exercise 2.8*. (1) Show that shifting the scalar in the Cayley-Dickson construction by a square produces an isomorphic algebra: $KD(A,\mu)\cong KD(A,\beta^2\mu)$ for any invertible $\beta\in\Phi$.
We call the resulting algebra the Cayley-Dickson algebra $KD(A,\mu)$⁴ obtained from the unital $\Phi$-algebra $A$ and the invertible scalar $\mu$ via the Cayley-Dickson construction or doubling process. The eminently-forgettable product formula can be broken down into bite-sized pieces which are more easily digested. Besides the fact that $A$ is imbedded as a subalgebra with its usual multiplication and involution, we have the Cayley-Dickson product rules
$$(2.9)\qquad \text{(KD0)}\ am=m\bar a,\quad \overline{bm}=-bm;\qquad \text{(KD1)}\ a(bm)=(ba)m;\qquad \text{(KD2)}\ (am)b=(a\bar b)m;\qquad \text{(KD3)}\ (am)(bm)=\mu\,\bar ba.$$
Some helpful mnemonic devices: whenever an element $b\in A$ moves past $m$ it gets conjugated, and when you multiply $bm$ from the left by $a$ or $am$ you slip the $a$ in behind the $b$. Let us observe how much algebraic structure gets inherited and how much gets lost each time we perform the Cayley-Dickson process.

Theorem 2.10 (KD Inheritance). If $A$ is a unital linear algebra with scalar involution, then: (I) $KD(A,\mu)$ always has a nontrivial involution; (II) $KD(A,\mu)$ is commutative iff $A$ is commutative with trivial involution; (III) $KD(A,\mu)$ is associative iff $A$ is commutative and associative; (IV) $KD(A,\mu)$ is alternative only if $A$ is associative with central involution.

⁴Pronounced "Kay-Dee," not "See-Dee."
proof. In (1), (I) is clear since the doubling process adjoins a skew part. In (II), commutativity of the subalgebra $A$ is clearly necessary, as is triviality of the involution by $am-ma=(a-\bar a)m$; these suffice to make the Cayley-Dickson product $\big(a_1a_2+\mu b_2b_1\big)\oplus\big(b_2a_1+b_1a_2\big)m$ in 2.9 symmetric in the indices 1 and 2. For (III), associativity of the subalgebra $A$ is clearly necessary, as is commutativity by $[a,b,m]=(ab)m-a(bm)=(ab-ba)m$; these conditions suffice to make the Cayley-Dickson associator vanish:
$$[a_1+b_1m,\,a_2+b_2m,\,a_3+b_3m]=\big\{(a_1a_2+\mu\bar b_2b_1)a_3+\mu\bar b_3(b_2a_1+b_1\bar a_2)-a_1(a_2a_3+\mu\bar b_3b_2)-\mu(\bar a_2\bar b_3+a_3\bar b_2)b_1\big\}$$
$$\oplus\ \big\{b_3(a_1a_2+\mu\bar b_2b_1)+(b_2a_1+b_1\bar a_2)\bar a_3-(b_3a_2+b_2\bar a_3)a_1-b_1(\bar a_3\bar a_2+\mu\bar b_2b_3)\big\}m,$$
and when $A$ is commutative and associative every monomial in each brace cancels against a partner (for example $\mu\bar b_2b_1a_3$ in the first brace against $\mu a_3\bar b_2b_1$), so the associator is 0. Finally, for (IV) certainly associativity of $A$ is necessary, since
$$[c,am,b]+[am,c,b]=\big((ac)\bar b-(a\bar b)c+(a\bar c)\bar b-a(\bar b\bar c)\big)m=\big(t(c)(a\bar b)-(a\bar b)c-a(\bar b\bar c)\big)m=\big((a\bar b)\bar c-a(\bar b\bar c)\big)m=[a,\bar b,\bar c]\,m$$
must vanish in an alternative algebra. A central involution is also necessary: $[am,am,bm]=\mu\big(b(\bar aa)-a(\bar ab)\big)m$ vanishes iff $\bar aa$ [which by setting $b=1$ equals $a\bar a$] commutes with all $b$, and once all norms are central so are all traces and (thanks to $\frac12$) all self-adjoint elements. We will leave it as an exercise to show that associativity plus central involution are sufficient to guarantee alternativity.

Exercise 2.10. Show in full detail that $KD(A,\mu)$ is alternative if $A$ is associative with central involution.
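The inheritance pattern of Theorem 2.10 can be watched numerically by iterating the doubling of Theorem 2.8 with $\mu=-1$ at every stage, starting from the reals: this yields the classical $\mathbb C$, $\mathbb H$, $\mathbb O$, and then the 16-dimensional sedenions. The flat-array encoding below is my own sketch, not the book's:

```python
import numpy as np

# An element of the k-th Cayley-Dickson double is a float array of length 2^k,
# split in half as a + bm; mu = -1 at every stage.
def conj(a):
    if a.size == 1:
        return a.copy()
    h = a.size // 2
    return np.concatenate([conj(a[:h]), -a[h:]])

def mul(x, y):
    if x.size == 1:
        return x * y
    h = x.size // 2
    a1, b1, a2, b2 = x[:h], x[h:], y[:h], y[h:]
    return np.concatenate([mul(a1, a2) - mul(conj(b2), b1),    # a1 a2 + mu b2bar b1
                           mul(b2, a1) + mul(b1, conj(a2))])   # b2 a1 + b1 a2bar

def norm(x):      # with mu = -1 the norm n(a) - mu n(b) is the Euclidean one
    return float(x @ x)

def assoc(x, y, z):
    return mul(mul(x, y), z) - mul(x, mul(y, z))

rng = np.random.default_rng(1)
def rnd(n, k=3):
    return [rng.normal(size=n) for _ in range(k)]

C, H, O, S = rnd(2), rnd(4), rnd(8), rnd(16)

assert np.allclose(mul(C[0], C[1]), mul(C[1], C[0]))        # binarions: commutative
assert np.allclose(assoc(*H), 0)                            # quaternions: associative
assert not np.allclose(mul(H[0], H[1]), mul(H[1], H[0]))    # ... but not commutative
assert np.allclose(assoc(O[0], O[0], O[1]), 0)              # octonions: alternative
assert not np.allclose(assoc(*O), 0)                        # ... but not associative
assert np.isclose(norm(mul(O[0], O[1])), norm(O[0]) * norm(O[1]))  # norms still compose
assert not np.allclose(assoc(S[0], S[0], S[1]), 0)          # sedenions: not alternative
```

Each doubling step loses exactly the structure Theorem 2.10 predicts, and at dimension 16 even alternativity (and with it norm composition) is gone, foreshadowing the Hurwitz Theorem.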
Theorem 2.11 (One-Sided Simplicity). When a simple associative non-commutative algebra $A$ with central involution undergoes the Cayley-Dickson process, it loses its associativity and all of its one-sided ideals: $KD(A,\mu)$ has no proper one-sided ideals whatsoever. In particular, an octonion algebra over a field has no proper one-sided ideals, hence is always simple.

proof. When $A$ is not commutative, we know $KD(A,\mu)=A\oplus Am$ is not associative by (III), and we claim it has no proper left ideals (then, thanks to the involution, it has no proper right ideals either) when $A$ is simple. So suppose $B$ is a nonzero left ideal of $KD$. Then it contains a nonzero element $x=a+bm$, and we can arrange it so that $a\ne0$ (if $a=0$ then $x=bm$ for $b\ne0$, and $B$ contains $x'=mx=m(bm)=a'$ with $a'=\mu\bar b\ne0$). By noncommutativity we can choose a nonzero commutator $[u,v]$ in $A$; then $B$ also contains
$$L_AL_{Am}L_m\big(L_uL_v-L_{vu}\big)L_A(a+bm)=L_AL_{Am}L_m\big(L_uL_v-L_{vu}\big)(Aa+bAm)$$
$$=L_AL_{Am}L_m\big([u,v]Aa+bA(vu-vu)m\big)=L_AL_{Am}\big([u,v]Aa\,m\big)=L_A\big([u,v]AaA\big)=A[u,v]AaA=A$$
[by simplicity of $A$ and $\mu,[u,v],a\ne0$]. Therefore $B$ contains $A$, hence also $L_mA=mA=\bar Am=Am$, and therefore $B=A+Am=KD(A,\mu)$. Thus as soon as $B$ is nonzero it must be the entire algebra.

This is an amazing result, because the only unital associative algebras having no proper one-sided ideals are the division algebras, but octonion algebras may be far removed from division algebras (even split). It turns out that one-sided ideals are not very useful in the theory of alternative algebras: they are hard to construct, and when they do appear they often turn out to be two-sided. As in Jordan algebras, in alternative algebras the proper analogue of one-sided ideals are the inner ideals $B\subseteq A$ with $U_BA\subseteq B$, i.e. $bab\in B$ for all $b\in B,\ a\in A$: an alternative algebra has no proper inner ideals iff it is a division algebra.
5. The Hurwitz Theorem

We will show why the Cayley-Dickson doubling process with its twisted multiplication is genetically programmed to start up naturally inside a composition algebra and continue until it exhausts the entire algebra.

Theorem 2.12 (Jacobson Necessity). If $A$ is a proper finite-dimensional unital subalgebra of a composition algebra $C$ over a field $\Phi$, such that the norm form $N$ is nondegenerate on $A$, then there exist elements $m\in A^\perp$ with $N(m)=-\mu\ne0$, and for any such element $A+Am$ is a subalgebra of $C$ canonically isomorphic and isometric to $KD(A,\mu)$.

proof. Because $N(\cdot,\cdot)$ is a nondegenerate bilinear form on the finite-dimensional space $A$, we have a decomposition $C=A\oplus A^\perp$.⁵

⁵Recall this basic linear algebra fact: each vector $a\in A$ determines a linear functional $N(a,\cdot)$ on $A$, and by nonsingularity this map is injective into the dual space of $A$, hence by finite-dimensionality onto the dual space; then every $c\in C$ likewise produces a linear functional $N(c,\cdot)$ on $A$, but this must have already been claimed by some $a\in A$, so $c'=c-a$ has $N(c',A)=0$ and $c'\in A^\perp$, yielding the decomposition $c=a+c'$.
Here A⊥ ≠ 0 since A is proper, so by nondegeneracy 0 ≠ N(A⊥, C) = N(A⊥, A + A⊥) = N(A⊥, A⊥), and N is not identically zero on A⊥. Thus we may find an anisotropic vector orthogonal to A:
(1) m ∈ A⊥, N(m) = −μ ≠ 0.
The amazing thing is that we can choose any such vector we like, and then the doubling process starts automatically. We claim the space K := A + Am has the structure of KD(A, μ). The first thing is to establish directness K := A ⊕ Am, i.e. A ∩ Am = 0, which will follow from the assumed nondegeneracy A ∩ A⊥ = 0 of A because
(2) Am ⊆ A⊥ from N(am, A) = N(m, āA) [by Left Adjoint 2.6(1)] ⊆ N(m, A) = 0, since A is assumed to be a subalgebra which by construction is orthogonal to m.
In particular (2) implies N(a + bm) = N(a) + N(a, bm) + N(b)N(m) = N(a) − μN(b) (recalling the definition (1) of μ) and T(bm) = N(1, bm) ∈ N(A, Am) = 0 (it is crucial that A be unital), so the involution and norm are the same in K and KD(A, μ):
(3) N(a + bm) = N(a) − μN(b), (a + bm)‾ = ā − bm.
It remains to prove that the products are the same. But the Cayley–Dickson Product Rules 2.9 are forced upon us, and the blame rests squarely on Left and Right Kirmse. Notice first the role of the scalar μ: it represents the square, not the norm, of the element m:
(4) m² = μ1, since −m² = m̄m [by involution (3)] = N(m)1 [by either Kirmse on y = 1] = −μ1 [by the definition (1) of μ].
Then (KD3) follows from μb̄a − (am)(bm) = b̄((am)m) + (am)‾(bm) [by (4), right alternativity, and involution (3)] = N(b, am)m = 0 [by linearized Left Kirmse and (2)]; (KD2) follows from (ab̄)m − (am)b = −((ab̄)m̄ + (am)b) [using the involution (3)] = −aN(b̄, m) = 0 [by linearized Right Kirmse and (2)]; and (KD1) follows from a(bm) − (ba)m = a(bm) − (bm)ā [since (ba)m̄ + (bm)ā = bN(a, m) = 0 by linearized Right Kirmse, (2), and (3)] = −N(a, bm)1 [by involution (3) and Kirmse linearized at y = 1] = 0 [by (2)].
Thus there is no escape from the strange Cayley–Dickson Product Rules, and K = KD(A, μ).
Jacobson Necessity can start at any level, not just at the 1-dimensional level Φ1: from any proper nondegenerate composition subalgebra B₁ = B of C we can apply the KD process to form B₂ = KD(B₁, μ₂) and continue till we exhaust C.
The Category of Alternative Algebras
Exercise 2.12 Show that shifting the scalar μ in the Cayley–Dickson construction by an invertible square (cf. Exercise 2.8) not only produces an isomorphic algebra, inside C = KD(A, μ) it produces the same subalgebra: if C = A + Am = KD(A, μ) for a nondegenerate composition subalgebra A, then for any invertible α ∈ Φ we also have C = A + Am′ = KD(A, α²μ) for a suitable m′.
Now we are equipped to prove Hurwitz's Theorem, completely describing all composition algebras.
Theorem 2.13 (Hurwitz). Any composition algebra C over a field Φ of characteristic ≠ 2 is finite-dimensional, either:
(0) The ground field C₀ = Φ1 of dimension 1, commutative associative with trivial involution;
(I) A quadratic extension C₁ = KD(C₀, μ₁) of dimension 2, commutative associative with nontrivial involution;
(II) A quaternion algebra C₂ = KD(C₁, μ₂) of dimension 4, noncommutative associative;
(III) An octonion algebra C₃ = KD(C₂, μ₃) of dimension 8, alternative but noncommutative and nonassociative.
proof. Start with the subalgebra C₀ = Φ1 of dimension 1; this is a unital subalgebra, with nondegenerate norm because the characteristic isn't 2 (in characteristic 2 the norm N(1, 1) = T(1) = 2 vanishes identically). If C₀ is the whole algebra, we are done. C₀ has Type 0: it is commutative associative with trivial involution. If C₀ < C, by Jacobson Necessity we can choose i ⊥ C₀ with N(i) = −μ₁ ≠ 0, and obtain a subalgebra C₁ = KD(C₀, μ₁) of dimension 2. If C₁ is the whole algebra, we are done. C₁ has Type I: it is still commutative and associative, but with nontrivial involution, by Inheritance Theorem 2.10(I). If C₁ < C, by Jacobson Necessity we can choose j ⊥ C₁ with N(j) = −μ₂ ≠ 0, and obtain a subalgebra C₂ = KD(C₁, μ₂) of dimension 4. If C₂ is the whole algebra, we are done. C₂ has Type II: it is noncommutative associative by Inheritance Theorem 2.10(II), and is called a (generalized) quaternion algebra. If C₂ < C, by Jacobson Necessity we can choose m ⊥ C₂ with N(m) = −μ₃ ≠ 0, and obtain a subalgebra C₃ = KD(C₂, μ₃) of dimension 8. If C₃ is the whole algebra, we are done. C₃ has Type III: it is nonassociative by Inheritance Theorem 2.10(III) [but of course is still alternative by the Composition Algebra Proposition 2.6(2)], and is called a (generalized) octonion algebra.
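The dimension ladder 1 → 2 → 4 → 8 (and the failure at 16) can be watched numerically. The sketch below is not from the text: it runs the Cayley–Dickson doubling over the reals with μ = −1 at every step (the compact chain ℝ, ℂ, ℍ, 𝕆), using one common product convention, (a, b)(c, d) = (ac − d̄b, da + bc̄); the text's general μ-twisted rules specialize to this case.

```python
# A computational sketch (not from the text) of Cayley-Dickson doubling over
# the reals with mu = -1 at every step, so the norm is N((a,b)) = N(a) + N(b).
# Elements of the 2^n-dimensional algebra are nested pairs with numbers at the
# leaves; the product convention (a,b)(c,d) = (ac - d*b, da + bc*) is one of
# several equivalent ones in the literature.

def conj(x):
    return (conj(x[0]), neg(x[1])) if isinstance(x, tuple) else x

def neg(x):
    return (neg(x[0]), neg(x[1])) if isinstance(x, tuple) else -x

def add(x, y):
    return (add(x[0], y[0]), add(x[1], y[1])) if isinstance(x, tuple) else x + y

def mul(x, y):
    if not isinstance(x, tuple):
        return x * y
    a, b = x
    c, d = y
    return (add(mul(a, c), neg(mul(conj(d), b))),   # ac - (conj d) b
            add(mul(d, a), mul(b, conj(c))))        # da + b (conj c)

def norm(x):
    return norm(x[0]) + norm(x[1]) if isinstance(x, tuple) else x * x

def zero(n):
    return 0 if n == 0 else (zero(n - 1), zero(n - 1))

def basis(n, k):
    """Basis element e_k of the 2^n-dimensional algebra, with integer entries."""
    if n == 0:
        return 1
    half = 2 ** (n - 1)
    if k < half:
        return (basis(n - 1, k), zero(n - 1))
    return (zero(n - 1), basis(n - 1, k - half))
```

At dimensions 2, 4, and 8 the norm is multiplicative, matching the bound in Hurwitz's theorem; at dimension 16 (the "sedenions") composition fails, and one can locate pairs of sums of basis elements with N(xy) ≠ N(x)N(y).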
If C₃ < C, by Jacobson Necessity we can choose m ⊥ C₃ with N(m) = −μ₄ ≠ 0, and obtain a subalgebra C₄ = KD(C₃, μ₄) of dimension 16. But this is no longer alternative by Inheritance Theorem 2.10(IV) and no longer permits composition. Thus C₃ cannot be proper, it must be the entire algebra, and we stop at dimension 8.
Notice that we never assumed the composition algebra was finite-dimensional: finiteness (with bound 8) is a fact of nature for nonassociative composition algebras. Similarly, finiteness (with bound 27) is a fact of nature for exceptional Jordan algebras.
We want to stress that the bootstrap method of building a composition algebra by the Cayley–Dickson doubling process can proceed in many ways, and that the various ingredients in the construction are not uniquely determined. We know that the quaternion subalgebra Q in an 8-dimensional octonion algebra O = KD(Q, μ) over a field Φ is not uniquely determined: if Q′ is any 4-dimensional quaternion subalgebra of O then O = KD(Q′, μ′) for some μ′. We demonstrate that an octonion algebra can have quaternion subalgebras of many different forms in a most graphic manner: the real split octonion algebra contains a quaternion subalgebra isomorphic to the split quaternion algebra M₂(ℝ), and also one isomorphic to the non-split Hamilton quaternion division algebra ℍ. We start from the real split octonions as the split Zorn vector-matrix algebra Z. On the one hand we have
Z = A ⊕ Aℓ = KD(M₂(ℝ), 1) for a subalgebra A = M₂(ℝ)
spanned by
E₁₁ := (1 0; 0 0), E₂₂ := (0 0; 0 1), E₁₂ := (0 e⃗₁; 0 0), E₂₁ := (0 0; e⃗₁ 0),
with ℓ := (0 e⃗₂; −e⃗₂ 0), N(ℓ) = −1,
where ℓ ⊥ A and Aℓ is spanned by
E₁₁ℓ = (0 e⃗₂; 0 0), E₂₂ℓ = (0 0; −e⃗₂ 0),
E₁₂ℓ = (0 0; e⃗₁ × e⃗₂ 0) = (0 0; e⃗₃ 0), E₂₁ℓ = (0 e⃗₁ × e⃗₂; 0 0) = (0 e⃗₃; 0 0).
On the other hand we have
Z = B ⊕ Bm = KD(ℍ, 1) for a subalgebra B = ℍ
spanned by
1 = (1 0; 0 1), i = (0 e⃗₁; −e⃗₁ 0), j = (0 e⃗₂; −e⃗₂ 0),
k = ij = (0 −e⃗₃; e⃗₃ 0),
where m ⊥ B, N(m) = −1, and Bm is spanned by
m = (1 0; 0 −1), im = (0 −e⃗₁; −e⃗₁ 0), jm = (0 −e⃗₂; −e⃗₂ 0), km = (0 e⃗₃; e⃗₃ 0)
[beware the minus sign in k!].
6. Problems for Chapter 2
Problem 1. Alternative algebras have a smooth notion of inverses. An element x in a unital alternative algebra is invertible if it has an inverse y, an element satisfying the usual inverse condition xy = yx = 1. An alternative algebra is a division algebra if all its nonzero elements are invertible. Show y = x⁻¹ is uniquely determined, and that the left and right multiplication operators L_x, R_x are invertible operators: L_x⁻¹ = L_{x⁻¹}, R_x⁻¹ = R_{x⁻¹}. We will return to this in Problem 21.3.
Problem 2. (1) Show that any alternative algebra satisfies the operator identity U_{xy} = L_x U_y R_x = R_y U_x L_y in terms of left, right, and simultaneous multiplications (U_x := L_x R_x). Conclude we have the Fundamental Formula U_{xyx} = U_x U_y U_x. (2) Show that in any alternative algebra A, an element x determines principal inner ideals Ax and xA (which are seldom one-sided ideals), and any two elements determine inner ideals (xA)y and x(Ay). (3) Show that a unital alternative algebra is a division algebra iff it has no proper inner ideals. (4) Extend this result to nonunital algebras (as usual, you will have to explicitly exclude a "one-dimensional" trivial algebra).
CHAPTER 3
Three Special Examples
We repeat, in more detail, the discussion of II Chapter 2.5 of the 3 most important examples of special Jordan algebras. In any linear algebra we can always introduce a "Jordan product", though the resulting structure is usually not a Jordan algebra unless we start from something associative.
Definition 3.1 (Plus Algebra). If A is any linear algebra with product xy, A⁺ denotes the linear space A under the Jordan product
x • y := ½(xy + yx).
This will be unital if A is.
Exercise 3.1* Show that A⁺ is a Jordan algebra if A is merely a left alternative algebra (satisfying the left alternative law L_{x²} = L_x², but not necessarily flexibility or right alternativity); indeed, show it is even a special Jordan algebra by showing that the left regular representation x → L_x imbeds A⁺ in E⁺ for E = End(Â), the associative algebra of endomorphisms of the unital hull Â of A.
1. Full Type
The first basic example, and progenitor of all special examples, is the Jordan algebra obtained by "Jordanifying" an associative algebra. We can take any associative algebra A, pin a + to it, and utter the words "I hereby dub thee Jordan".
Example 3.2 (Full A⁺). (1) Any associative Φ-algebra A may be turned into a Jordan Φ-algebra A⁺ by replacing its associative product by the Jordan product
x • y = ½(xy + yx).
The auxiliary products are given by
x² = xx, {x, z} = xz + zx, U_x y = xyx, {x, y, z} = xyz + zyx.
(2) A⁺ is unital iff A is (in which case the units coincide):
e Jordan unit for A⁺ ⟺ e associative unit for A.
Any (resp. unital) associative homomorphism or anti-homomorphism A → A′ is at the same time a (resp. unital) Jordan homomorphism A⁺ → (A′)⁺; any (resp. unital) associative subalgebra or ideal of A is a (resp. unital) Jordan subalgebra or ideal of A⁺. Any left ideal L or right ideal R of A is an inner ideal of A⁺, as is any intersection L ∩ R; for any elements x, y ∈ A the modules xA, Ay, xAy are inner ideals of A⁺.
proof. Throughout it will be easier to work with the brace {x, y} = 2x • y and avoid all the fractions ½ in the bullet. (1) A⁺ satisfies the Jordan axioms since commutativity (JAx1) is clear, and the Jordan identity (JAx2) holds since
{{x², y}, x} = (x²y + yx²)x + x(x²y + yx²) = x²yx + yx³ + x³y + xyx²
= x³y + x²yx + xyx² + yx³ = x²(xy + yx) + (xy + yx)x² = {x², {y, x}}.
A "better proof" is to interpret the Jordan identity as the operator identity [V_{x²}, V_x] = 0 for V_x = L_x + R_x in terms of the left and right multiplications in the associative algebra A. The associative law [x, y, z] = 0 can be interpreted 3 ways as operator identities (acting on z, x, y respectively): L_{xy} − L_x L_y = 0 (i.e. the left regular representation x → L_x is a homomorphism), R_z R_y − R_{yz} = 0 (the right regular representation x → R_x is an anti-homomorphism), R_z L_x − L_x R_z = 0 (all L's commute with all R's), so [V_{x²}, V_x] = [L_{x²} + R_{x²}, L_x + R_x] = [L_{x²}, L_x] + [R_{x²}, R_x] = L_{[x²,x]} + R_{[x,x²]} = 0.
The description of the auxiliary products is a straightforward verification, the first two being trivial, the fourth a linearization of the third, and the (crucial) third formula follows from 2U_x y = {x, {x, y}} − {x², y} = x(xy + yx) + (xy + yx)x − (x²y + yx²) = 2xyx.
(2) Certainly any associative unit acts as unit for the derived Jordan product; conversely, if e is a Jordan unit then in particular it is an idempotent e² = e, with U_e y = eye = y by Auxiliary Products 1.9(2), so multiplying on the left by e gives ey = e(eye) = eye = y, and dually y = ye, so e is an associative unit.
(4) Any homomorphism preserving the associative product xy certainly preserves the symmetric Jordan brace {x, y}, and the same holds for any anti-homomorphism: φ({x, y}) = φ(xy) + φ(yx) = φ(y)φ(x) + φ(x)φ(y) = φ(x)φ(y) + φ(y)φ(x) = {φ(x), φ(y)}.
Certainly associative subalgebras or ideals are closed under the corresponding Jordan products, and one-sided ideals and modules B = xAy are closed under BÂB (which is even stronger than being inner).
Exercise 3.2A Extend the proof of the Jordan identity in 3.2(1) to show that if C is a commutative subset of A⁺ (i.e. [c₁, c₂] = 0 for all c₁, c₂ ∈ C), then the elements of C Jordan-operator commute: [V_C, V_C] = 0.
Exercise 3.2B Repeat the argument of 3.2(2) to show that if J ⊆ A⁺ is a special Jordan algebra with unit e, then ex = xe = x for all x ∈ J, so that J lives in the Peirce subalgebra eAe of A, where e reigns as unit.
Exercise 3.2C If A is an associative algebra on which the Jordan product vanishes entirely, all x • y = 0, show the associative product is "alternating" (x² = 0, xy = −yx for all x, y) and hence 2A³ = 0 (2xyz = 0 for all x, y, z).
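The brace computations in the proof of Example 3.2 are easy to spot-check numerically. The following sketch (not from the text; the matrix entries are arbitrary choices) verifies the brace form of the Jordan identity {{x², y}, x} = {x², {y, x}} and the U-product identity {x, {x, y}} − {x², y} = 2xyx for 2×2 integer matrices, where A⁺ needs no fractions at all.

```python
# Spot-check (not from the text) of the A+ construction on 2x2 integer
# matrices: the brace {x,y} = xy + yx satisfies the brace form of the Jordan
# identity, and {x,{x,y}} - {x^2,y} = 2xyx.

def mmul(a, b):
    return [[sum(a[i][t] * b[t][j] for t in range(2)) for j in range(2)]
            for i in range(2)]

def madd(a, b):
    return [[a[i][j] + b[i][j] for j in range(2)] for i in range(2)]

def msub(a, b):
    return [[a[i][j] - b[i][j] for j in range(2)] for i in range(2)]

def brace(x, y):                       # {x,y} = xy + yx
    return madd(mmul(x, y), mmul(y, x))

x = [[1, 2], [3, 4]]
y = [[0, 1], [5, -2]]
x2 = mmul(x, x)

lhs = brace(brace(x2, y), x)           # {{x^2, y}, x}
rhs = brace(x2, brace(y, x))           # {x^2, {y, x}}
two_u = msub(brace(x, brace(x, y)), brace(x2, y))   # = 2 U_x y
xyx = mmul(mmul(x, y), x)
```

Both identities hold in any associative algebra, as the proof shows; the script merely witnesses them on one random-looking pair.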
This construction gives us plus functors from the categories of associative [resp. unital associative] Φ-algebras to the category of Jordan [resp. unital Jordan] Φ-algebras; it is easy to see these intertwine with the formal unitalization functor [(Â)⁺ = (A⁺)^] and both commute with scalar extension functors [(A_Ω)⁺ = (A⁺)_Ω]. The offspring of the full algebras A⁺ are all the special algebras, those that result from associative algebras under "Jordan multiplication"; these were precisely the Jordan algebras the physicists originally sought to mimic yet avoid.
Definition 3.3 (Special). A Jordan algebra is special if it can be imbedded in (i.e. is isomorphic to a subalgebra of) an algebra A⁺ for some associative A; otherwise it is exceptional. A specialization of a Jordan algebra J in an associative algebra A is a homomorphism σ : J → A⁺ of Jordan algebras. Thus a Jordan algebra is special iff it has an injective specialization.
By means of some set-theoretic sleight-of-hand, every special algebra actually is a subset of an algebra A⁺, so we usually think of special algebras as living inside associative algebras, which is then called an associative envelope.¹ The full algebra A⁺ is "too nice" (too close to being associative); more important roles in the theory are played by certain of its subalgebras (the hermitian and spin types), together with the Albert algebra, in just the same way that the brothers Zeus and Poseidon play more of a role in Greek mythology than their father Chronos. We now turn,
¹The set-theoretic operation cuts out the image φ(J) with a scalpel and replaces it by the set J, then sews a new product into the envelope using the old recipe. A Jordan algebra can have lots of envelopes, some better than others.
in the next two sections, to consider these two offspring. (DNA testing reveals that the third brother, Hades, is exceptional and not a son of Chronos at all; we examine his cubic origins in the next chapter.)
2. Hermitian Type
The second important class, indeed the archetypal example for all of Jordan theory, is the class of algebras of hermitian type. Here the Jordan subalgebra is selected from the full associative algebra by means of an involution as in ∗-Algebra Definition 1.6.
Definition 3.4 (Hermitian). If (A, ∗) is a ∗-algebra then H(A, ∗) denotes the set of hermitian elements x ∈ A with x∗ = x, and Sk(A, ∗) denotes the set of skew-hermitian elements x∗ = −x. If the involution is understood, we just write H(D) or Sk(D).
In the case of complex matrices (or operators on complex Hilbert space) with the usual conjugate-transpose (or adjoint) involution, these are just the usual hermitian matrices or operators. In the case of real matrices or operators, we get just the symmetric matrices or operators. For complex matrices there is a big difference between hermitian and symmetric: the hermitian complex matrices form a real Jordan algebra which is formally real, X₁² + … + X_r² = 0 for X_i∗ = X_i implies X₁ = … = X_r = 0 (in particular, there are no nilpotent elements), whereas the symmetric complex matrices form a complex Jordan algebra which has nilpotent elements X² = 0, X = Xᵗʳ ≠ 0, witness the matrix (1 i; i −1).
The term "hermitian"² is more correctly used for involutions "of the second kind", which are "conjugate-linear", (ωx)∗ = ω̄x∗ for a nontrivial involution ω → ω̄ on the underlying scalars, while the term symmetric is usually used for involutions "of the first kind", which are linear with respect to the scalars, (ωx)∗ = ωx∗. We will allow the trivial involution, so that "hermitian" includes "symmetric", and WE WILL HENCEFORTH SPEAK OF HERMITIAN ELEMENTS AND NEVER AGAIN MENTION SYMMETRIC ELEMENTS! At the same time, we work entirely in the category of modules over a fixed ring of scalars Φ, so all involutions are Φ-linear. This just means that we must restrict the allowable scalars.
For example, the hermitian complex
²Not named in honor of some mathematical hermit, but in honor of the French mathematician Charles Hermite, hence the adjective is pronounced more properly "hair-meet-tee-un" rather than "hurr-mish-un", but Jacobson is the only mathematician I have ever heard give it this more correct pronunciation. Of course, the most correct pronunciation would be "air-meet-tee-un", but no American goes that far.
matrices are a real vector space, but not a complex one (even though the matrix entries themselves are complex).
Example 3.5 (Hermitian H(A, ∗)). (1) If (A, ∗) is an associative ∗-algebra, then H(A, ∗) forms a Jordan subalgebra of A⁺ (which is unital if A is), and any ∗-homomorphism or anti-homomorphism (A, ∗) → (A′, ∗′) restricts to a Jordan homomorphism H(A, ∗) → H(A′, ∗′).
(2) If B is any ∗-subalgebra and I any ∗-ideal of A, then H(B, ∗) is a Jordan subalgebra and H(I, ∗) a Jordan ideal of H(A, ∗). For any element x ∈ A the module xH(A, ∗)x∗ is an inner ideal of H(A, ∗).
proof. (1) The anti-automorphism ∗ is a Jordan automorphism of A⁺ by Full Example 3.2, so the set H of fixed points forms a Jordan subalgebra (unital if A is), since this is true of the set of elements fixed under any Jordan automorphism. Any ∗-homomorphism or anti-homomorphism gives as noted in 3.2 a homomorphism A⁺ → (A′)⁺ which preserves hermitian elements: x∗ = x ∈ H → φ(x)∗′ = φ(x∗) = φ(x) ∈ H′. (2) H(C, ∗) = C ∩ H(A, ∗) remains a subalgebra or ideal in H(A, ∗), and xHx∗ satisfies (xhx∗)k(xhx∗) = x(h(x∗kx)h)x∗ ∈ xHx∗ for k ∈ Ĥ.
Exercise 3.5A* Let Φ be an arbitrary ring of scalars. (1) Show that if I is a skew ∗-ideal of an associative Φ-algebra A, in the sense that H(I, ∗) = 0, then all z ∈ I are skew with z² = 0 and zx = −x∗z for all x ∈ A; conclude each Z := ÂzÂ is a trivial ∗-ideal, Z² = 0. (2) Conclude that if A is semiprime then I ≠ 0 ⟹ H(I, ∗) ≠ 0. (3) Show that from any ∗-algebra A we can construct a standard counterexample, a larger ∗-algebra A′ with an ideal N having nothing to do with H: N ∩ H(A′, ∗) = 0. Use this to exhibit a nonzero ∗-ideal I in (the non-semiprime!) A′ with H(I, ∗) = 0.
Exercise 3.5B Under certain conditions, a unit for H(A, ∗) must serve as unit for all of the associative algebra A. (1) Show that if A has involution ∗ and h ∈ Ĥ(A, ∗), x, a ∈ A, we have xahx = t(xa)hx − t(a∗x∗hx) + (x∗hx)a ∈ HÂ in terms of the trace t(x) = x + x∗; conclude that hH = 0 ⟹ hxAhx = 0. If A is semiprime conclude hH = 0 ⟹ hA = 0 (and even h = 0 if h lies in H instead of just in Ĥ). (2) Show that if A is semiprime and H(A, ∗) has Jordan unit e, then e must be the associative unit of A [use the result of Exercise 3.2B to see ek = k for all k ∈ H, then apply (1) to h = 1 − e ∈ Ĥ]. (3) Use the standard counterexample of Exercise 3.5A to show this can fail if A is not semiprime.
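The hermitian/symmetric contrast above is easy to see in coordinates. A small check (not from the text; the hermitian matrices A, B are arbitrary choices): hermitian complex 2×2 matrices are closed under the brace, while the symmetric complex matrix (1 i; i −1) cited in the text squares to zero.

```python
# Check (not from the text): the brace of hermitian complex matrices is again
# hermitian, while the *symmetric* complex matrix [[1, i], [i, -1]] is a
# nonzero nilpotent -- so symmetric complex matrices are not formally real.

def mmul(a, b):
    return [[sum(a[i][t] * b[t][j] for t in range(2)) for j in range(2)]
            for i in range(2)]

def madd(a, b):
    return [[a[i][j] + b[i][j] for j in range(2)] for i in range(2)]

def ct(a):                              # conjugate transpose: the involution *
    return [[a[j][i].conjugate() for j in range(2)] for i in range(2)]

def brace(x, y):                        # {x,y} = xy + yx
    return madd(mmul(x, y), mmul(y, x))

A = [[1, 2 + 1j], [2 - 1j, 3]]          # hermitian: ct(A) == A
B = [[0, 1j], [-1j, 5]]                 # hermitian
M = [[1, 1j], [1j, -1]]                 # symmetric (M^tr == M) but not hermitian
```

The closure of H under the brace is exactly Example 3.5(1); the nilpotent M is the text's witness matrix.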
We can describe the situation by saying that we have a hermitian functor from the category of associative ∗-algebras (with morphisms the ∗-homomorphisms) to the category of Jordan algebras, sending (A, ∗) → H(A, ∗) and ∗-morphisms φ to the restriction φ|_H. This
functor commutes with unitalization and scalar extensions. Somewhat surprisingly, the hermitian functor gobbles up its parent, the full functor (cf. Exercise 1.8).
Proposition 3.6 (Full is Hermitian). A⁺ is isomorphic to the algebra of hermitian elements of its exchange algebra under the exchange involution:
A⁺ ≅ H(Ex(A), ex).
proof. By the Exchange Involution Proposition 1.8 the exchange algebra Ex(A) = A ⊕ Aᵒᵖ under the exchange involution (x, y)ᵉˣ = (y, x) has hermitian elements H(Ex(A)) = {(x, x) | x ∈ A} naturally identified with A via (x, x) → x; this is a Jordan isomorphism since {(x, x), (y, y)} = (x, x)(y, y) + (y, y)(x, x) = (xy, yx) + (yx, xy) (beware the products in Aᵒᵖ!) = (xy + yx, yx + xy) = ({x, y}, {x, y}).
While H(A, ∗) initially seems to arise as a particular Jordan subalgebra of A⁺, at the same time A⁺ arises as a particular instance of H. You should always keep H(A, ∗) in mind as the "very model of a modern Jordan algebra"; historically it provided the motivation for Jordan algebras, and it still serves as THE archetype for all Jordan algebras.
The most important hermitian algebras with finiteness conditions are the hermitian matrix algebras, as in II Chapter 2.6. For n = 3 (but for no larger matrix algebras) we can even allow the coordinates to be alternative and still get a Jordan algebra.
Example 3.7 (Hermitian Matrix Hₙ(D, ∗)). (1) For an arbitrary linear ∗-algebra (D, ∗), the standard conjugate-transpose involution X∗ := X̄ᵗʳ is an involution on the linear algebra Mₙ(D) of all n × n matrices with entries from D under the usual matrix product XY. The hermitian matrix algebra Hₙ(D, ∗) := H(Mₙ(D), ∗) consists of the module Hₙ(D, ∗) of all hermitian matrices X∗ = X under the Jordan bullet or brace product X • Y = ½(XY + YX) or {X, Y} = XY + YX = 2X • Y. In Jacobson box notation, Hₙ(D, ∗) is spanned by the elements
δ[ii] := δE_ii (δ = δ̄ ∈ H(D, ∗) hermitian), d[ij] := dE_ij + d̄E_ji = d̄[ji] (d ∈ D arbitrary, i ≠ j),
where E_ij (1 ≤ i, j ≤ n) are the usual n × n matrix units.
(2) For the square X² = XX and brace product {X, Y} = XY + YX we have the multiplication rules (for distinct indices i, j, k ≠) consisting
of 4 Basic Brace Products
δ[ii]² = δ²[ii], {δ[ii], ε[ii]} = (δε + εδ)[ii],
d[ij]² = dd̄[ii] + d̄d[jj], {d[ij], b[ij]} = (db̄ + bd̄)[ii] + (d̄b + b̄d)[jj],
{δ[ii], d[ij]} = δd[ij], {d[ij], δ[jj]} = dδ[ij],
{d[ij], b[jk]} = db[ik],
and a Basic Brace Orthogonality rule
{d[ij], b[kℓ]} = 0 if {i, j} ∩ {k, ℓ} = ∅.
(3) When the coordinate algebra D is associative, or alternative with hermitian elements in the nucleus (so that any two elements and their bars lie in an associative subalgebra), the 3 Basic U-Products for U_x y = ½({x, {x, y}} − {x², y}) are
U_{δ[ii]} ε[ii] = δεδ[ii], U_{d[ij]} b[ij] = db̄d[ij], U_{d[ij]} δ[jj] = dδd̄[ii] (i ≠ j),
with Basic U-Orthogonality
U_{d[ij]} b[kℓ] = 0 if {k, ℓ} ⊄ {i, j},
together with their linearizations in d.
(4) If D is associative, for Jordan triple products with the outer factors from distinct spaces we have 3 Basic Triple Products
{d[ij], b[ji], γ[ii]} = (dbγ + γb̄d̄)[ii] (i ≠ j),
{d[ij], b[ji], c[ik]} = d(bc)[ik] (k ≠ i, j),
{d[ij], b[jk], c[kℓ]} = d(bc)[iℓ] (i, j, k, ℓ ≠),
with Basic Triple Orthogonality
{d[ij], b[kℓ], c[pq]} = 0 if the indices can't be connected.
Throughout the book we will consistently list these 4 Basic Brace Products in the same order: ii², ij², {ii, ij}, {ij, jk}. Notice how braces work more smoothly than bullets: the bullet products ii • ij, ij • jk would involve a messy factor ½.
When (D, ∗) is an associative ∗-algebra then (Mₙ(D), ∗) is again an associative ∗-algebra, and Hₙ(D, ∗) is a special Jordan algebra of hermitian type. We will see in the Jordan Coordinates Theorem 14.1 that D must be associative when n ≥ 4 in order to produce a Jordan algebra, but when n = 3 it suffices if D is alternative with nuclear involution. This will not be special if D is not associative; from an octonion algebra with standard involution we obtain a reduced exceptional Albert algebra by this hermitian matrix recipe.
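Two of the basic products can be checked directly against matrix units. The sketch below (not from the text; it uses D = ℂ with conjugation and 0-based indices, so the text's [12], [23], [13] become box(·, 0, 1), box(·, 1, 2), box(·, 0, 2)) verifies {d[12], b[23]} = (db)[13] and U_{d[12]} b[12] = (db̄d)[12] in H₃(ℂ).

```python
# Check (not from the text) of two Basic Brace/U-Products in H_3(C) with the
# conjugate-transpose involution, in box notation d[ij] = d E_ij + conj(d) E_ji.

def E(i, j):                        # 3x3 matrix unit E_ij
    return [[1 if (r, c) == (i, j) else 0 for c in range(3)] for r in range(3)]

def box(d, i, j):                   # d[ij] for i != j
    return [[d * E(i, j)[r][c] + d.conjugate() * E(j, i)[r][c]
             for c in range(3)] for r in range(3)]

def mmul(a, b):
    return [[sum(a[r][t] * b[t][c] for t in range(3)) for c in range(3)]
            for r in range(3)]

def madd(a, b):
    return [[a[r][c] + b[r][c] for c in range(3)] for r in range(3)]

def brace(x, y):                    # {X,Y} = XY + YX
    return madd(mmul(x, y), mmul(y, x))

def U(x, y):                        # U_x y = xyx in the associative algebra
    return mmul(mmul(x, y), x)

d, b = 2 + 1j, 1 - 3j               # arbitrary coordinates in D = C
```

Since ℂ is commutative, db = bd here; the box formulas themselves are stated for general associative coordinates.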
Exercise 3.7A For general linear algebras, show the U-products 3.7(3) become unmanageable:
U_{c[ii]} b[ii] = ½((cb)c + c(bc) + [b, c, c] − [c, c, b])[ii],
U_{d[ij]} b[ij] = ½((db̄)d + d(b̄d) + [b̄, d, d] − [d, d, b̄])[ij],
U_{d[ij]} b[jj] = ½((db)d̄ + d(bd̄))[ii] + ½([b, d, d̄] − [d, d̄, b])[jj],
and the triple product with distinct indices becomes
{d[ij], b[jk], c[kℓ]} = ½(d(bc) + (db)c)[iℓ],
while those with repeated indices become
{d[ij], b[ji], c[ik]} = ½(d(bc) + (db)c + [b, d, c])[ik],
{d[ij], b[jk], c[ki]} = ½(d(bc) + (db)c + (cb)d + c(bd) + [d, b, c] − [c, b, d])[ii] + ½([b, c, d] − [d, c, b])[jj] + ½([b, d, c] − [c, d, b])[kk],
{d[ij], b[ji], c[ii]} = ½(d(bc) + (db)c + (cb)d + c(bd) + [b, d, c] − [c, d, b])[ii] + ½([b, c, d] − [d, c, b])[jj].
(Yes, Virginia, there are jj, kk-components!)
Exercise 3.7B Show that the correspondences (D, ∗) → Hₙ(D, ∗), φ → Hₙ(φ) (given by d[ij] → φ(d)[ij]) give a functor from the category of unital associative ∗-algebras to the category of unital Jordan algebras.
3. Quadratic Form Type
The other great class of special Jordan algebras is the class of algebras coming from quadratic forms with basepoint. If we think of a quadratic form as a norm, a basepoint is just an element with unit norm.
Proposition 3.8 (Trace Involution). A basepoint c for a quadratic form Q is just an element with
Q(c) = 1.
Once a basepoint has been chosen, we can talk of the trace form and trace involution
T(x) := Q(x, c), x̄ := T(x)c − x.
The trace involution is automatically an involutory linear map,
T(c) = 2, c̄ = c, (x̄)‾ = x,
which is isometric with respect to the norm and trace,
Q(x̄) = Q(x), T(x̄) = T(x).
proof. Repeating the proof of the Composition Algebra Proposition 2.6(3a), we have T(c) = Q(c, c) = 2Q(c) = 2 straight from the definition of basepoint, so c̄ = T(c)c − c = 2c − c = c, and (x̄)‾ = T(x̄)c − x̄ = T(x)c − (T(x)c − x) = x. For the norm of a star we have Q(x̄) = Q(T(x)c − x) = T(x)²Q(c) − T(x)Q(x, c) + Q(x) = T(x)² − T(x)² + Q(x) [by definition of basepoint and trace T] = Q(x), so for the trace of a star we have T(x̄) = Q(x̄, c) = Q(x̄, c̄) = Q(x, c) [linearizing the foregoing] = T(x) too.
Example 3.9 (Quadratic Factor Jord(Q, c)). If Q : M → Φ is a quadratic form on a Φ-module M having basepoint c, then we can define a Jordan Φ-algebra structure Jord(Q, c) on M by
x • y := ½(T(x)y + T(y)x − Q(x, y)c).
Here the unit is 1 = c, and every element satisfies the degree 2 identity
x² − T(x)x + Q(x)c = 0.
The trace involution is an algebra involution, (x • y)‾ = x̄ • ȳ. The auxiliary U-product is given in terms of the trace involution by
U_x y = Q(x, ȳ)x − Q(x)ȳ,
and the norm form Q permits Jordan composition with U:
Q(1) = 1, Q(U_x y) = Q(x)Q(y)Q(x).
proof. The element c acts as unit by setting y = c in the definition of the product, since T(c) = 2, Q(x, c) = T(x) by the Trace Involution Proposition 3.8, and the degree 2 identity follows by setting y = x in the definition of the product, since Q(x, x) = 2Q(x). The fact that Jord(Q, c) is Jordan then follows from a very general
Lemma 3.10 (Degree 2). Any unital commutative linear algebra in which every element x ∈ J satisfies a degree 2 identity, i.e. an equation
x² − αx + β1 = 0 (for some α, β ∈ Φ depending on x),
is a Jordan algebra.
proof. Commutativity (JAx1) is assumed, and the Jordan identity (JAx2) holds since the degree 2 equation implies [x², y, x] = [αx − β1, y, x] = α[x, y, x] (1 is always nuclear) = 0 [since commutative algebras are always flexible, (xy)x = x(xy) = x(yx)].
To see that the trace involution (which is clearly a linear bijection of period 2) is an involution of Jordan algebras, it suffices by 1.1(2)
to show it preserves squares, and from the degree 2 equation
(x²)‾ − (x̄)² = (T(x)x̄ − Q(x)c) − (T(x̄)x̄ − Q(x̄)c) = 0,
since by Trace Involution 3.8 the involution preserves traces T and norms Q and fixes c.
To obtain the expression for the U-operator we use the fact that c is the unit to compute
U_x y = 2x • (x • y) − x² • y
= x • (T(x)y + T(y)x − Q(x, y)c) − x² • y
= T(x)(x • y) + T(y)x² − Q(x, y)x − x² • y
= T(x)(x • y) + T(y)(T(x)x − Q(x)c) − Q(x, y)x − (T(x)x − Q(x)c) • y
= T(x)T(y)x − Q(x, y)x − Q(x)(T(y)c − y)
= (T(x)T(y) − Q(x, y))x − Q(x)ȳ = Q(x, ȳ)x − Q(x)ȳ
(by definition of ȳ, since Q(x, ȳ) = T(y)Q(x, c) − Q(x, y) = T(x)T(y) − Q(x, y)). For Q permitting composition with U, we use our newly-hatched formula for the U-operator to compute
Q(U_x y) = Q(Q(x, ȳ)x − Q(x)ȳ)
= Q(x, ȳ)²Q(x) − Q(x, ȳ)Q(x)Q(x, ȳ) + Q(x)²Q(ȳ)
= Q(x)²Q(y) [by Trace Involution 3.8 again].
We can form a category of quadratic forms with basepoint by taking as objects all (Q, c) and as morphisms (Q, c) → (Q′, c′) the linear isometries φ which preserve basepoint, Q′(φ(x)) = Q(x) and φ(c) = c′. Since such a φ preserves the traces, T′(φ(x)) = Q′(φ(x), c′) = Q′(φ(x), φ(c)) = Q(x, c) = T(x) by Trace Involution 3.8, and hence all the ingredients of the Jordan product 3.9, it is a homomorphism of Jordan algebras. In these terms we can formulate our result by saying we have a Jordanification functor from the category of Φ-quadratic forms with basepoint to unital Jordan Φ-algebras, given on objects by (Q, c) → Jord(Q, c) and trivially on morphisms by φ → φ. This functor commutes with scalar extensions.
Exercise 3.10* (1) Show that over a field the category of quadratic factors (with morphisms the isomorphisms) is equivalent to the category of quadratic forms with basepoint (with morphisms the isometries preserving basepoints), by proving that a linear map φ is an isomorphism of unital Jordan algebras Jord(Q, c) → Jord(Q′, c′)
iff it is an isometry of quadratic-forms-with-basepoint (Q, c) → (Q′, c′). (2) Prove that any homomorphism φ whose image is not entirely contained in Φc′ is a (not necessarily bijective) isometry of forms, and in this case ker(φ) ⊆ Rad(Q). (3) Show that a linear map φ with φ(c) = c′ is a homomorphism of Jord(Q, c) into the 1-dimensional quadratic factor Φ⁺ iff the quadratic form Q takes the form Q(αc + x₀) = α² − φ(x₀)² relative to the decomposition J = Φc ⊕ J₀ for J₀ = {x₀ | T(x₀) = 0} the subspace of trace zero elements.
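To make Example 3.9 concrete, the sketch below (not from the text; the form and the sample vectors are arbitrary choices) builds Jord(Q, c) for Q(x) = x₀² − x₁² − x₂² on ℚ³ with basepoint c = (1, 0, 0), and checks the degree-2 identity, the U-formula U_x y = Q(x, ȳ)x − Q(x)ȳ, and the composition rule Q(U_x y) = Q(x)²Q(y).

```python
# A concrete instance (not from the text) of the quadratic factor Jord(Q, c):
# Q(x) = x0^2 - x1^2 - x2^2 on Q^3 with basepoint c = (1, 0, 0), so Q(c) = 1.
from fractions import Fraction as F

c = (F(1), F(0), F(0))

def Q(x):
    return x[0] * x[0] - x[1] * x[1] - x[2] * x[2]

def Qb(x, y):                      # linearization Q(x, y) = Q(x+y) - Q(x) - Q(y)
    s = tuple(a + b for a, b in zip(x, y))
    return Q(s) - Q(x) - Q(y)

def T(x):                          # trace T(x) = Q(x, c)
    return Qb(x, c)

def bar(x):                        # trace involution x~ = T(x)c - x
    return tuple(T(x) * ci - xi for ci, xi in zip(c, x))

def bullet(x, y):                  # x.y = (T(x)y + T(y)x - Q(x,y)c) / 2
    return tuple((T(x) * yi + T(y) * xi - Qb(x, y) * ci) / 2
                 for xi, yi, ci in zip(x, y, c))

def U(x, y):                       # U_x y = 2 x.(x.y) - (x.x).y
    twice = tuple(2 * t for t in bullet(x, bullet(x, y)))
    return tuple(a - b for a, b in zip(twice, bullet(bullet(x, x), y)))

x = (F(2), F(3), F(-1))
y = (F(1), F(-4), F(5))
```

Exact rational arithmetic makes the checks literal identities rather than floating-point approximations.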
Example 3.11 (Spin Factors JSpin(M, σ)). Another construction starts from a "unit-less" Φ-module M with symmetric bilinear form σ, first forms J := Φ1 ⊕ M by externally adjoining a unit, and then defines a product on J by having 1 act as unit element and having the product of "vectors" v, w ∈ M be a scalar multiple of 1 determined by the bilinear form:
JSpin(M, σ): J = Φ1 ⊕ M, 1 • x = x, v • w = σ(v, w)1.
It is easy to verify directly that the resulting algebra is a unital Jordan algebra, but this also follows since JSpin(M, σ) = Jord(Q, c) for
c := 1 ⊕ 0, Q(α1 ⊕ v) := α² − σ(v, v), T(α1 ⊕ v) := 2α.
In fact, since we are dealing entirely with rings of scalars containing ½, this construction is perfectly general: every quadratic factor Jord(Q, c) is naturally isomorphic to the spin factor JSpin(M, σ) for σ(v, w) = −½Q(v, w) the negative of the restriction of Q to M = c⊥, since we have the natural decomposition J = Φc ⊕ M. The special case where σ is the ordinary dot product on M = Φⁿ is denoted by JSpinₙ(Φ) = JSpin(Φⁿ, ·).
BEWARE: The restriction of the global Q to M is the NEGATIVE of the quadratic form q(v) = σ(v, v). The scalar q(v) comes from the coefficient of 1 in v² = q(v)1; the Q (the generic norm) comes from the coefficient of 1 in the degree 2 equation, which you should think of as
Q(x)1 = xx̄,
so that for the skew elements v ∈ M it reduces to the negative of the square. This is the same situation as in the composition algebras of the last chapter, where the skew elements x = i, j, k, ℓ with square −1 have norm Q(x) = +1. The bilinear form version in terms of σ on M is more useful when dealing with bullet products, the global quadratic version in terms of Q on J is more useful when dealing with U-operators; one needs to be ambidextrous in this regard.
We can form a category of symmetric Φ-bilinear forms by taking as objects all σ and as morphisms σ → σ′ all Φ-linear isometries. In these terms we can formulate 3.11 by saying we have a spin functor from symmetric Φ-bilinear forms to unital Jordan Φ-algebras given by σ → JSpin(M, σ) and φ → JSpin(φ), where the Jordan homomorphism is defined by extending φ unitally to J, JSpin(φ)(1 ⊕ v) := 1 ⊕ φ(v). This functor again commutes with scalar extensions. Since ½ ∈ Φ we have a category equivalence between the category of symmetric Φ-bilinear forms and the category of Φ-quadratic forms with basepoint, given by the pointing correspondence P sending σ → (Q, 1) on J := Φ1 ⊕ M, φ → JSpin(φ) as above, with inverse T sending (Q, c) → σ := −½Q(·, ·)|_{c⊥}, φ → φ|_{c⊥}. [Warning: while T ∘ P is the identity functor, P ∘ T is only naturally isomorphic to the identity: P T (Q, c) = (Q, 1) has had its basepoint surgically removed and replaced by a formal unit 1.] 3.11 says that under this equivalence it doesn't matter which you start with, a symmetric bilinear form σ or a quadratic form with basepoint (Q, c); the resulting Jordan algebra will be the same.
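A small numerical sketch of the spin factor (not from the text; the module ℚ² and the dot product are arbitrary choices): elements are pairs (α, v), with product (α, v) • (β, w) = (αβ + v·w, αw + βv). It checks the Jordan identity and the remark, taken up in the next section, that a unit-valued vector q(v) = σ(v, v) = 1 yields a proper idempotent ½(1 + v).

```python
# Sketch (not from the text): the spin factor JSpin(Q^2, dot).  Elements are
# pairs (alpha, v) with v in Q^2; (a, v).(b, w) = (ab + v.w, aw + bv).
from fractions import Fraction as F

def dot(v, w):
    return sum(a * b for a, b in zip(v, w))

def bullet(x, y):
    a, v = x
    b, w = y
    return (a * b + dot(v, w),
            tuple(a * wi + b * vi for vi, wi in zip(v, w)))

one = (F(1), (F(0), F(0)))

x = (F(2), (F(1), F(-3)))
y = (F(-1), (F(4), F(2)))

x2 = bullet(x, x)
jordan_lhs = bullet(bullet(x2, y), x)      # (x^2 . y) . x
jordan_rhs = bullet(x2, bullet(y, x))      # x^2 . (y . x)

# q(v) = dot(v, v) = 1 gives the proper idempotent e = (1 + v)/2
e = (F(1) / 2, (F(1) / 2, F(0)))           # here v = (1, 0), q(v) = 1
```

The idempotent computation is exactly the expansion e • e = ¼(1 + 2v + q(v)1) = ½(1 + v) when q(v) = 1.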
4. Reduced Spin Factors

There is a way to build a more specialized sort of Jordan algebra out of a quadratic or bilinear form, an algebra which has idempotents and is thus "reduced", the first step towards being "split". These will be the degree 2 algebras which occur in the final classical structure theory of Part III in Chapter 23, and thus are now for us the most important members of the quadratic factor family. We will see in Chapter 8, when we discuss idempotents and Peirce decompositions, that JSpin(M, σ) has a proper idempotent e ≠ 0, 1 iff the quadratic form is "unit-valued": q(v) := σ(v, v) = 1 for some v. In our reduced construction, instead of just adjoining a unit we externally adjoin two idempotents e₁, e₂; in effect, we are adjoining a unit 1 = e₁ + e₂ together with an element v = e₁ − e₂ where q(v) will automatically be 1. We define the product by having the eᵢ act as half-unit elements on W and having the product of vectors w, z in W be a scalar multiple of 1 determined by the bilinear form as usual:

J := Φe₁ ⊕ W ⊕ Φe₂, 1 := e₁ + e₂, eᵢ • eⱼ := δᵢⱼeᵢ, eᵢ • w := ½w, w • z := σ(w, z)1.

By making the identifications e₁ = (1, 0, 0), e₂ = (0, 0, 1), w = (0, w, 0) we can describe this more formally in terms of ordered triples. At the same time, because of the primary role played by q, in our reduced
construction we henceforth depose the bilinear form σ and give priority to the quadratic form q(x) = σ(x, x), q(x, y) = 2σ(x, y).

Example 3.12 (Reduced Spin Factor RedSpin(q)). The reduced spin factor of a quadratic form q on a Φ-module W consists of the direct product module³ RedSpin(q) := Φ ⊕ W ⊕ Φ with square

(α, w, β)² := (α² + q(w), (α + β)w, β² + q(w)),

hence bullet products

(α, w, γ) • (β, z, δ) := (αβ + σ(w, z), ½(β + δ)w + ½(α + γ)z, γδ + σ(w, z)).

It is again easy to verify directly that this is a unital Jordan algebra, since it can be described as a spin construction: RedSpin(q) = Φe₁ ⊕ Φe₂ ⊕ W = Φ1 ⊕ Φv ⊕ W = JSpin(M, σ) for

1 := (1, 0, 1), v := (1, 0, −1), e₁ := (1, 0, 0), e₂ := (0, 0, 1), and σ(αv ⊕ w, βv ⊕ z) := αβ + σ(w, z) on M := Φv ⊕ W.

Like any spin factor, it can also be described as a quadratic factor as in 3.11 in terms of a global quadratic form: RedSpin(q) = Jord(Q, c) for

Q((α, w, β)) := αβ − q(w), c := (1, 0, 1), T((α, w, β)) := α + β, (α, w, β)‾ := (β, −w, α).

Thus the quadratic form is obtained by adjoining a "hyperbolic plane" Q(αe₁ ⊕ βe₂) = αβ to the negative of q on W. Here the U-operator is given explicitly by U_{(α,w,γ)}(β, z, δ) := (ε, y, φ) for

ε := α²β + αq(w, z) + q(w)δ, φ := γ²δ + γq(w, z) + q(w)β, y := [αβ + γδ + q(w, z)]w + [αγ − q(w)]z.

proof. The Quadratic Factor 3.9 has quadratic form given by Q(x) = α̃² − σ(ṽ, ṽ) = α̃² − β̃² − q(w) when in our case x = (α, w, γ) = α̃1 ⊕ ṽ for ṽ = β̃v ⊕ w, where the scalars α̃, β̃ are determined by α̃(1, 0, 1) + β̃(1, 0, −1) = α(1, 0, 0) + γ(0, 0, 1), i.e. α̃ + β̃ = α, α̃ − β̃ = γ, with solution α̃ = ½(α + γ), β̃ = ½(α − γ), hence

³The reader might ask why we are not "up front" about the two external copies of Φ we adjoin, parametrizing the module as Φ ⊕ W ⊕ Φ; the answer is that we have an ulterior motive: we are softening you up for the Peirce decomposition in Chapter 8 with respect to e = (1, 0, 0), which has (Φ, 0, 0), (0, W, 0), (0, 0, Φ) as the Peirce spaces J₂, J₁, J₀ respectively.
α̃² − β̃² = ¼[(α² + 2αγ + γ²) − (α² − 2αγ + γ²)] = ¼[4αγ] = αγ, and σ(w, w) = q(w) as required. All 3 constructions coincide since everyone has the same unit 1 and the same trace T(x) = α + γ, and on traceless elements x = (α, w, −α) the same square x² = μ1 for μ = −Q(x) = q(w) + α² = σ(w, w) + α² = σ(αv ⊕ w, αv ⊕ w). The U-operator in the quadratic factor has, by Quadratic Factor 3.9, U_x y = Q(x, ȳ)x − Q(x)ȳ, which becomes [αβ + γδ + q(w, z)](α, w, γ) − [αγ − q(w)](δ, −z, β) for x = (α, w, γ), Q(x) = [αγ − q(w)], y = (β, z, δ), ȳ = (δ, −z, β), Q(x, ȳ) = Q((α, w, γ), (δ, −z, β)) = [αβ + γδ − q(w, −z)] = [αβ + γδ + q(w, z)], where

[αβ + γδ + q(w, z)]α − [αγ − q(w)]δ = α²β + αq(w, z) + q(w)δ = ε,
[αβ + γδ + q(w, z)]γ − [αγ − q(w)]β = γ²δ + γq(w, z) + q(w)β = φ,
[αβ + γδ + q(w, z)]w − [αγ − q(w)](−z) = [αβ + γδ + q(w, z)]w + [αγ − q(w)]z = y

as required.
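The component computation above is easy to mis-transcribe, so here is a mechanical cross-check (my own sketch, with W = Q² and q the sum of squares): the explicit (ε, y, φ) formula for the U-operator of RedSpin(q) agrees with the quadratic-factor recipe U_x y = Q(x, ȳ)x − Q(x)ȳ.

```python
from fractions import Fraction as F

def q(w):     return sum(c * c for c in w)                  # q on W = Q^2
def qb(w, z): return 2 * sum(a * b for a, b in zip(w, z))   # linearization q(w,z)

def Q(x):
    a, w, b = x
    return a * b - q(w)                                     # hyperbolic plane minus q

def Qb(x, y):                                               # linearization Q(x,y)
    a, w, b = x; c, z, d = y
    return a * d + b * c - qb(w, z)

def bar(y):                                                 # trace involution (a,w,b) -> (b,-w,a)
    a, w, b = y
    return (b, tuple(-c for c in w), a)

def U(x, y):                                                # U_x y = Q(x, ybar) x - Q(x) ybar
    yb = bar(y)
    s, t = Qb(x, yb), Q(x)
    return (s * x[0] - t * yb[0],
            tuple(s * wi - t * zi for wi, zi in zip(x[1], yb[1])),
            s * x[2] - t * yb[2])

def U_explicit(x, y):                                       # the (eps, y, phi) formula
    a, w, g = x; b, z, d = y
    return (a * a * b + a * qb(w, z) + q(w) * d,
            tuple((a * b + g * d + qb(w, z)) * wi + (a * g - q(w)) * zi
                  for wi, zi in zip(w, z)),
            g * g * d + g * qb(w, z) + q(w) * b)

x = (F(2), (F(1), F(-1)), F(3))
y = (F(5), (F(2), F(4)), F(-1))
assert U(x, y) == U_explicit(x, y)
```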
5. Problems for Chapter 3

Problem 1. Certainly if A⁺ is simple then A must be simple too (associative ideals being Jordan ideals). A famous result of I.N. Herstein says the converse is true: if A is simple then so is A⁺. Here the ring of scalars can be arbitrary, we don't need ½ (as long as we treat A⁺ as a quadratic Jordan algebra). Associative rings are equal-characteristic employers; except for Lie or involution results, they are completely oblivious to characteristic. (1) Verify the following Jordan expressions for associative products: (i) zbza = {z, b, za} − z(ab)z; (ii) azzb = {a, z, zb} − zbza; (iii) 2zaz = {z, {z, a}} − {z², a}; (iv) zazbz = {zazb, z} − z²azb. (2) Use these to show that if K ⊴ A⁺ is a Jordan ideal, then for any z ∈ K we have zÂz + Âz²Â ⊆ K. Conclude that if some z² ≠ 0 then Âz²Â is a nonzero associative ideal of A contained in K, while if z² = 0 for all z ∈ K then 2zÂz = zÂzÂz = 0 and Z := ÂzÂ is an associative ideal (not necessarily contained in K) with 2Z² = Z³ = 0. (3) Show that if A is semiprime then every nonzero Jordan ideal contains a nonzero associative ideal, and use this to show A⁺ is simple iff A is.
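The four identities in (1) are pure associative bookkeeping, so they can be spot-checked mechanically. The following sketch (mine, using 2×2 integer matrices and {a, b} = ab + ba, {a, b, c} = abc + cba) does exactly that.

```python
def mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def add(A, B): return [[a + b for a, b in zip(r, s)] for r, s in zip(A, B)]
def sub(A, B): return [[a - b for a, b in zip(r, s)] for r, s in zip(A, B)]
def brace2(a, b):    return add(mul(a, b), mul(b, a))                    # {a,b}
def brace3(a, b, c): return add(mul(mul(a, b), c), mul(mul(c, b), a))    # {a,b,c}

z = [[1, 2], [3, 4]]; a = [[0, 1], [5, -2]]; b = [[2, -1], [1, 3]]
# (i)  zbza = {z,b,za} - z(ab)z
assert mul(mul(mul(z, b), z), a) == sub(brace3(z, b, mul(z, a)), mul(mul(z, mul(a, b)), z))
# (ii) azzb = {a,z,zb} - zbza
assert mul(mul(mul(a, z), z), b) == sub(brace3(a, z, mul(z, b)), mul(mul(mul(z, b), z), a))
# (iii) 2zaz = {z,{z,a}} - {z^2,a}
zaz = mul(mul(z, a), z)
assert add(zaz, zaz) == sub(brace2(z, brace2(z, a)), brace2(mul(z, z), a))
# (iv) zazbz = {zazb,z} - z^2 azb
zazb = mul(zaz, b)
assert mul(zazb, z) == sub(brace2(zazb, z), mul(mul(mul(mul(z, z), a), z), b))
```

Each assertion holds identically in any associative ring, so any test matrices would do; these are arbitrary.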
Problem 2*. A famous theorem of L.K. Hua (with origins in projective geometry) asserts that any additive inverse-preserving map Δ → Δ′ of associative division rings must be a Jordan homomorphism, and then that every Jordan homomorphism is either an associative homomorphism or anti-homomorphism. Again, this result (phrased in terms of quadratic Jordan algebras) holds for an arbitrary ring of scalars. This was generalized by Jacobson and Rickart to domains: if φ : A⁺ → D⁺ is a quadratic Jordan homomorphism into an associative domain D (i.e. φ(x²) = φ(x)², φ(xyx) = φ(x)φ(y)φ(x)), then φ must be either an associative homomorphism or anti-homomorphism. (1) Let μ(x, y) := φ(xy) − φ(x)φ(y) measure the deviation from associative homomorphicity, and ν(x, y) := φ(xy) − φ(y)φ(x) measure the deviation from associative anti-homomorphicity; show μ(x, y)ν(x, y) = 0 for all x, y when φ is a quadratic Jordan homomorphism. (Before the advent of Jordan triple products this was more cumbersome to prove!) Conclude that if D is a domain then for each pair x, y ∈ A either μ(x, y) = 0 or ν(x, y) = 0. (2) For any x define H_x := {y ∈ A | μ(x, y) = 0} and K_x := {y ∈ A | ν(x, y) = 0}. Conclude A = H_x ∪ K_x. (3) Using the fact that an additive group cannot be the union of two proper subgroups, conclude that for any x we have H_x = A or K_x = A. (4) Define H := {x ∈ A | H_x = A, i.e. μ(x, A) = 0} to consist of the "homomorphic elements", and K := {x ∈ A | K_x = A, i.e. ν(x, A) = 0} to consist of the "anti-homomorphic elements". Conclude A = H ∪ K, and conclude again that H = A or K = A, i.e. μ(A, A) = 0 or ν(A, A) = 0, i.e. φ is either an associative homomorphism or anti-homomorphism.
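An illustration of why the domain hypothesis matters (my own example, not from the text): for the Jordan homomorphism φ(a) = (a, aᵀ) of M₂(Z)⁺ into (M₂(Z) × M₂(Z))⁺, both deviations μ and ν are nonzero, yet their product still vanishes; the target is not a domain, which is exactly why neither factor is forced to vanish on its own.

```python
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
def sub(A, B): return [[a - b for a, b in zip(r, s)] for r, s in zip(A, B)]
def tr(A):     return [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]    # transpose
def phi(a):    return (a, tr(a))                  # a Jordan homomorphism into A x A
def pmul(p, q): return (mul(p[0], q[0]), mul(p[1], q[1]))
def psub(p, q): return (sub(p[0], q[0]), sub(p[1], q[1]))

x = [[1, 2], [0, 1]]; y = [[3, 0], [1, 2]]
mu = psub(phi(mul(x, y)), pmul(phi(x), phi(y)))   # deviation from homomorphism
nu = psub(phi(mul(x, y)), pmul(phi(y), phi(x)))   # deviation from anti-homomorphism
zero = [[0, 0], [0, 0]]
assert mu != (zero, zero) and nu != (zero, zero)  # each deviation is nonzero...
assert pmul(mu, nu) == (zero, zero)               # ...but their product is zero
```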
Problem 3. A symmetric bilinear form σ on an n-dimensional vector space V over a field is given, relative to any basis v₁, …, vₙ, by a symmetric matrix S: σ(v, w) = Σⁿᵢ,ⱼ₌₁ αᵢβⱼsᵢⱼ (v = Σᵢ αᵢvᵢ, w = Σⱼ βⱼvⱼ, sᵢⱼ = sⱼᵢ), and this matrix
can always be diagonalized: there is a basis u₁, …, uₙ with respect to which S = diag(λ₁, λ₂, …, λₙ). (1) Show the λᵢ's can be adjusted by squares in Φ (the matrix of σ with respect to the scaled basis ν₁u₁, …, νₙuₙ is diag(ν₁²λ₁, ν₂²λ₂, …, νₙ²λₙ)), so if every element in Φ has a square root in Φ then the matrix can be reduced to a diagonal matrix of 1's and 0's. Conclude that over an algebraically closed field all nondegenerate symmetric bilinear forms on V are isometrically isomorphic, so up to isomorphism there is only one nondegenerate algebra JSpin(V, σ) = JSpin_n(Φ) of dimension n + 1 (V of dimension n). (2) Use your knowledge of Inertia to show that over the real numbers ℝ we can replace the λ's by ±1 or 0, and the number of 1's, −1's, and 0's form a complete set of invariants for the bilinear form. Conclude there are exactly n + 1 inequivalent nondegenerate symmetric bilinear forms on an n-dimensional real vector space, and exactly n + 1 nonisomorphic spin factors J = JSpin(V, σ) of dimension n + 1, but all of these have J_ℂ ≅ JSpin_n(ℂ) over the complex numbers.

Problem 4*. The J-vN-W classification of simple Euclidean Jordan algebras includes JSpin_n(ℝ) = ℝ1 ⊕ ℝⁿ under the positive definite inner (dot) product ⟨·,·⟩ on ℝⁿ. The standard hermitian inner product ⟨x⃗, y⃗⟩ = Σⁿᵢ₌₁ ξᵢη̄ᵢ (x⃗ = (ξ₁, …, ξₙ), y⃗ = (η₁, …, ηₙ)) on ℂⁿ is also a positive definite hermitian form, ⟨x⃗, x⃗⟩ > 0 if x⃗ ≠ 0. Show nevertheless that JSpin_n(ℂ) = ℂ1 ⊕ ℂⁿ (with the bilinear dot product x⃗ • y⃗ = (x⃗ · y⃗)1, not the hermitian one) is NOT Euclidean: even when n = 1, there are nonzero elements u, v of JSpin_n(ℂ) with u² + v² = 0.
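A tiny numerical confirmation of the last claim (my own choice of witnesses): in JSpin₁(ℂ) = ℂ1 ⊕ ℂ with the bilinear product, the nonzero elements u = (0, 1) and v = (0, i) satisfy u² + v² = 0, so no positive definiteness survives complexification.

```python
def bullet(x, y):                    # (a1+v).(b1+w) = (ab + vw)1 + (aw + bv) on C1 + C
    a, v = x; b, w = y
    return (a * b + v * w, a * w + b * v)

u, v = (0j, 1 + 0j), (0j, 1j)
usq, vsq = bullet(u, u), bullet(v, v)       # u^2 = 1, v^2 = -1
total = (usq[0] + vsq[0], usq[1] + vsq[1])
assert total == (0j, 0j)                    # u^2 + v^2 = 0 with u, v nonzero
```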
Problem 5. A derivation 1.1(3) of a linear algebra A is a linear transformation δ which satisfies the product rule. (1) Show δ(1) = 0 if A is unital, and δ(x²) = xδ(x) + δ(x)x; show this latter suffices for δ to be a derivation if A is commutative and ½ ∈ Φ. (2) Show δ is a derivation iff it normalizes the left multiplication operators, [δ, L_x] = L_{δ(x)} (dually, iff [δ, R_y] = R_{δ(y)}). Show that T(1) = 0, [T, L_x] = L_y for some y in a unital A implies y = T(x). (3) Show the normalizer {T | [T, S] ⊆ S} of any set S of operators forms a Lie algebra of linear transformations; conclude anew that Der(A) is a Lie algebra. (4) In an associative algebra show that δ(z) = [x, z] = xz − zx is a derivation for each x (called the inner or adjoint derivation ad(x)).
Problem 6. (1) Show that any automorphism or derivation of an associative algebra A remains an automorphism or derivation of the Jordan algebra A⁺. (2) If J ⊆ A⁺ is special, show that any associative automorphism or derivation which leaves J invariant is a Jordan automorphism or derivation. In particular, though the ad(x) for x ∈ J need not leave J invariant, the maps D_{x,y} = ad([x, y]) = [V_x, V_y] must, and hence form inner derivations. If J = H(A, *) is hermitian, show any ad(z) for skew z* = −z ∈ A induces a derivation of J. (3) Show that in any Jordan algebra the map D_{x,y} = 4[L_x, L_y] = [V_x, V_y] is an "inner" derivation for each pair of elements x, y.
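A numerical sketch (mine) of the two claims about D_{x,y}: in A⁺ one has D_{x,y} = ad([x, y]) = [V_x, V_y] with V_a z := az + za (twice the bullet, which keeps everything over the integers), and this operator is a derivation of the Jordan product.

```python
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
def add(A, B): return [[a + b for a, b in zip(r, s)] for r, s in zip(A, B)]
def sub(A, B): return [[a - b for a, b in zip(r, s)] for r, s in zip(A, B)]
def V(a, z):    return add(mul(a, z), mul(z, a))      # V_a z = {a, z}
def comm(a, b): return sub(mul(a, b), mul(b, a))      # [a, b]

x = [[0, 1], [2, 0]]; y = [[1, 1], [0, -1]]
a = [[3, 0], [1, 2]]; b = [[0, 2], [1, 1]]
D = lambda z: comm(comm(x, y), z)                     # ad([x, y])
# D_{x,y} agrees with the operator commutator [V_x, V_y]:
assert D(a) == sub(V(x, V(y, a)), V(y, V(x, a)))
# and it is a derivation of the (doubled) Jordan product {a, b}:
assert D(V(a, b)) == add(V(D(a), b), V(a, D(b)))
```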
Problem 7*. (1) Show that any D ∈ End(J) which satisfies D(1) = 0, Q(D(x), x) = 0 for all x is a derivation of Jord(Q, c). If Φ is a field, or has no nilpotent elements, show that these two conditions are necessary as well as sufficient for D to be a derivation. (2) Deduce the derivation result from the corresponding automorphism result over the ring of dual numbers: show D is a derivation of J iff T = 1_J + εD is an automorphism of J[ε] = J ⊗_Φ Φ[ε] (ε² = 0).

Problem 8. (1) Show directly that we can create a Jordan algebra Jord(σ₀, c) out of any global symmetric bilinear form σ₀ and choice of basepoint c (σ₀(c, c) = 1) on a module W over Φ, by taking as unit element 1 = c and as product x • y := σ₀(x, c)y + σ₀(c, y)x − σ₀(x, y)c (cf. 3.9.1). (2) Show that bilinear forms with basepoint and bilinear forms produce exactly the same algebras: Jord(σ₀, c) = JSpin(M, σ) for W = Φ1 ⊕ M, c = 1, σ₀(x, y) := αβ − σ(v, w) for x = α1 ⊕ v, y = β1 ⊕ w, M = {x ∈ W | σ₀(x, c) = 0}, σ = −σ₀|_M. (3) Show that bilinear forms with basepoint and quadratic forms with basepoint produce exactly the same algebras: Jord(σ₀, c) = Jord(Q, c) for Q(x) := σ₀(x, x), σ₀(x, y) := ½Q(x, y).

Question 1. Decide on a definition of derivations and inner derivations of unital quadratic Jordan algebras which reduces to the previous one for linear Jordan algebras. Show that derivations can be defined operator-wise in terms of their interactions with multiplication operators (find formulas for [D, V_x], [D, U_x], [D, V_{x,y}]). Show that derivations form a Lie algebra 1.1(3) of endomorphisms, with the inner derivations as an ideal.
CHAPTER 4
Jordan Algebras of Cubic Forms

In this chapter we describe the Jordan algebras that result from cubic forms with basepoint. Unlike the case of quadratic forms, which always spawn Jordan algebras, only certain very special cubic forms with basepoint give rise to Jordan algebras. Not all algebras resulting from a cubic form are exceptional, but the special ones can be obtained by the previous special constructions, so we are most interested in the 27-dimensional exceptional Albert algebras. These are forms of the split Albert algebra Alb(Φ) = H₃(O(Φ)) of 3×3 hermitian matrices with split octonion entries, but are not themselves always reduced H₃(O)'s (they may not have any proper idempotents); indeed, they may be division algebras. Clearly reduced Albert matrix algebras are not enough: we must handle algebras which do not come packaged as matrices. But even for H₃(O) it has gradually become clear that we must use the cubic norm form and the adjoint to understand inverses, isotopes, and inner ideals. Unlike the case of Jordan algebras coming from quadratic forms, those born of cubic forms have a painful delivery. Establishing the Jordan identity and the properties of the Jordan norm requires many intermediate identities, and we will banish most of these technical details to Appendix C, proving only the important case of a nondegenerate form on a finite-dimensional vector space over a field. However, unlike the other inmate of the Appendix, Macdonald's Theorem (whose details are primarily combinatorial), the cubic form details are important features of the cubic landscape for those who intend to work with cubic factors (either for their own sake, or for their relations with algebraic groups, Lie algebras, or geometry). But for general students at this stage in their training, the mass of details would obscure rather than illuminate the development.
1. Cubic Maps

Although quadratic maps are intrinsically defined by their values (and quadratic relations can always be linearized), cubic and higher degree maps require a "scheme-theoretic" treatment: to truly understand
the map, especially its internal workings, we need to know the values of the linearizations on the original module; equivalently, we need to know the values of the map over all extensions. To extend a cubic map f : X → Y of Φ-modules from the scalars Φ to the polynomial ring Φ[t₁, …, tₙ] we need to form values

f(Σᵢ tᵢxᵢ) = Σᵢ tᵢ³f(xᵢ) + Σ_{i≠j} tᵢ²tⱼ f(xᵢ; xⱼ) + Σ_{i<j<k} tᵢtⱼtₖ f(xᵢ; xⱼ; xₖ).

…every v ≠ 0 has promiscuously many "inverses" w with σ(v, w) = 1, even if v has σ(v, v) = 0 and hence squares to 0! At the other extreme, for coordinatizing a projective plane one needs a coordinate ring which is a "division algebra" in the strong sense that for each x ≠ 0 the operators L_x and R_x should be invertible transformations (so the equations xy = a, zx = b have unique solutions y, z for any given a, b). This is overly restrictive, for A⁺ fails this test even for the well-behaved associative division algebra A = ℍ of real quaternions (think of i • j = 0 for the eminently invertible element i).
1. Jordan Inverses

Jacobson discovered the correct elemental definition of inverse at the same time he discovered U-operators, and these have been inextricably linked ever since their birth.

Definition 6.1 (Inverse). An element x of a unital Jordan algebra is invertible if it has an inverse y (denoted x⁻¹) satisfying the quadratic Jordan inverse conditions

(Q1) U_x y = x; (Q2) U_x y² = 1.

An algebra in which every nonzero element is invertible is called a Jordan division algebra.
Inverses
In associative algebras invertibility is reflected in the left and right multiplication operators: an element x in a unital associative algebra is invertible iff L_x is invertible, in which case L_{x⁻¹} = L_x⁻¹ (and dually for R_x). Analogously, invertibility in Jordan algebras is reflected in the U-operator.

Theorem 6.2 (Invertibility Criterion). (1) The following are equivalent for an element x of a unital Jordan algebra: (i) the element x is invertible in J; (ii) the operator U_x is invertible on J; (iii) the operator U_x is surjective on J; (iv) the unit 1 lies in the range of the operator U_x. (2) If x is invertible then its inverse is uniquely determined by the Jordan Inverse Recipe

x⁻¹ = U_x⁻¹ x.

The U-Inverse Formula states that the U-operator of the inverse is the inverse of the original U-operator,

U_{x⁻¹} = U_x⁻¹,

and the L-Inverse Formula states that L-multiplication by the inverse is given by a shift of the original L-multiplication,

L_{x⁻¹} = U_x⁻¹ L_x,

which therefore commutes with L_x.

proof. (1) Clearly (ii) ⟹ (iii) ⟹ (iv), and (iv) ⟹ (ii) since U_x a = 1 ⟹ 1_J = U_1 = U_{U_x a} = U_x U_a U_x ⟹ U_x has left and right inverses ⟹ U_x is invertible. Clearly (i) ⟹ (iv) by 6.1(Q2). Finally (ii) ⟹ (i) for y = U_x⁻¹ x: we have (Q1) U_x y = x by definition, and if we write 1 = U_x a then U_x y² = U_x U_y 1 = U_x U_y U_x a = U_{U_x y} a (by the Fundamental Formula 5.7(FI)) = U_x a = 1, so (Q2) holds as well. (2) The Inverse Recipe holds by definition 6.1(Q1) and the fact that U_x is invertible by (ii). The U-Inverse Formula holds by multiplying the relation U_x U_{x⁻¹} U_x = U_{U_x x⁻¹} = U_x [by the Inverse Recipe and the Fundamental Formula] on both sides by U_x⁻¹. The L-Inverse Formula holds since the right side commutes with L_x (since U_x does), and equals the left side by cancelling U_x from U_x V_{x⁻¹} U_x = U_x (U_{x⁻¹,1}) U_x = U_{U_x x⁻¹, U_x 1} [by the Fundamental Formula] = U_{x,x²} = U_x V_x [by the Commuting Formula 5.7(FII)].

Exercise 6.2 Assume x is invertible in a Jordan algebra. Use the L-Inverse Formula to show L_{x⁻¹} L_x = 1_J iff L_{x²} = L_x² iff x satisfies the associativity condition [x, x, y] = 0 for all y.
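A hands-on check of the theorem's formulas (my own numerics, in M₂(Q)⁺ where the U-operator is simply U_x y = xyx): the Inverse Recipe in the form U_x(x⁻¹) = x, and the U-Inverse Formula in the form U_x U_{x⁻¹} = 1_J.

```python
from fractions import Fraction as F

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def inv(A):                                    # 2x2 inverse over Q
    d = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [[A[1][1] / d, -A[0][1] / d], [-A[1][0] / d, A[0][0] / d]]

def U(x, y): return mul(mul(x, y), x)          # U_x y = xyx in A+

x = [[F(1), F(2)], [F(3), F(5)]]
y = [[F(0), F(1)], [F(1), F(4)]]
xi = inv(x)
# Inverse Recipe: x^{-1} = U_x^{-1}(x), i.e. U_x(x^{-1}) = x
assert U(x, xi) == x
# U-Inverse Formula: U_{x^{-1}} = U_x^{-1}, i.e. U_x(U_{x^{-1}} y) = y
assert U(x, U(xi, y)) == y
```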
A unital associative algebra is a division algebra iff it has no proper one-sided ideals, which is equivalent to having no proper inner ideals. The situation for Jordan algebras is completely analogous.

Theorem 6.3 (Division Algebra Criterion). The following are equivalent for a unital Jordan algebra: (i) J is a division algebra; (ii) J has no trivial elements or proper principal inner ideals; (iii) J has no proper closed principal inner ideals; (iv) J has no proper inner ideals.

proof. Notice that since J is unital, hats are unnecessary, and the principal inner ideal is (x] = (x) = U_x(J). J is a division algebra [as in (i)] ⟺ all x ≠ 0 are invertible ⟺ if x ≠ 0 then (x] = J [by the Invertibility Criterion 6.2(1)(iii)] ⟺ if x ≠ 0 then (x] ≠ 0, and (x] ≠ 0 implies (x] = J ⟺ no trivial elements and no proper principal inner ideals [as in (ii)] ⟺ if x ≠ 0 then [x] = J [as in (iii)] (note in the unital case that (x) = J ⟺ [x] = J, since ⟹ is clear, and ⟸ holds by J = U_1 J ⊆ U_{[x]} J ⊆ (x) from Principal Inner 5.9). Clearly (iv) ⟹ (iii), and (iii) ⟹ (iv) since any nonzero inner ideal B contains a nonzero x, so [x] ≠ 0 implies [x] = J and hence B must be all of J. We will see later (in 18.4) that this remains true even if J is not unital.

Exercise 6.3* (1) Show that the ban on trivial elements in (ii) is necessary: if J = Δ[ε] for a field Δ and ε² = 0, show that every element x is either invertible or trivial, so (x) = (x] is either J or 0, yet J is not a division algebra. (2) Show directly that an associative algebra A (not necessarily unital) is a division algebra iff it is not trivial and has no proper inner ideals B (Φ-submodules with bÂb ⊆ B for all b ∈ B).
Now we describe inversion in our basic Jordan algebras.

Example 6.4 (Full and Hermitian Invertibility). (1) If A is a unital associative algebra, an element x has inverse y in the Jordan algebra A⁺ if and only if it has inverse y in the associative algebra A,

(A1) xy = 1; (A2) yx = 1,

and its Jordan and associative inverses coincide; A⁺ is a Jordan division algebra iff A is an associative division algebra. (2) If A is a unital associative algebra with involution *, an element in the Jordan algebra H(A, *) of symmetric elements is invertible if and only if it is invertible in A, and H(A, *) is a Jordan division algebra if A is an associative division algebra.
proof. (1) Certainly the associative inverse conditions (A1-2) in A imply the Jordan inverse conditions (Q1-2) in A⁺; conversely, (Q2) xy²x = 1 implies x has a left and right inverse, hence an inverse, and cancelling x from (Q1) xyx = x shows xy = yx = 1. (2) In H(A, *) the inverse y = x⁻¹ is symmetric if x is.
Exercise 6.4 Verify directly that in any special unital Jordan algebra 1 ∈ J ⊆ A⁺ an element of J has a Jordan inverse iff it has an associative inverse which is also contained in J.
Example 6.5 (Factor Invertibility). An element x of a quadratic factor Jord(Q, c) (resp. cubic factor Jord(N, #, c)) determined by a quadratic form Q or cubic form N with basepoint is invertible if and only if its norm Q(x) or N(x) is invertible in the ring of scalars Φ [assuming the scalars act faithfully in the cubic case], in which case the inverse is

x⁻¹ = Q(x)⁻¹ x̄, Q(x⁻¹) = Q(x)⁻¹; x⁻¹ = N(x)⁻¹ x#, N(x⁻¹) = N(x)⁻¹.

Over a field, Jord(Q, c) or Jord(N, #, c) is a Jordan division algebra if and only if Q or N is an anisotropic quadratic or cubic form. The Tits constructions Jord(A, μ) and Jord(A, u, μ) over a field are Jordan division algebras iff (I) A is an associative division algebra of degree 3 and (II) μ ∉ n(A) is not a norm.

proof. Let n stand for the norm (either Q or N). If x has inverse y then 1 = n(c) = n(U_x y²) = n(x)²n(y)² since the norm permits composition with U by the Quadratic Factor Example 3.9 or the Cubic Construction 4.2(3) [here's where we need the annoying faithfulness hypothesis on the scalars], showing that n(x) is an invertible scalar; then cancelling n(x) from n(x) = n(U_x y) = n(x)²n(y) shows n(x), n(y) are inverses as in the last assertion. Conversely, if n(x) is invertible we can use it to construct the inverse in the usual way: from the Degree 2 Identity 3.9 or Degree 3 Identity 4.2(2)

x^d − T(x)x^{d−1} + … + (−1)^d n(x)c = 0 (d = 2, 3),

which as usual implies x • y = 1 for y = (−1)^{d+1} n(x)⁻¹[x^{d−1} − T(x)x^{d−2} + …] ∈ Φ[x]. But by Power-Associativity 5.6 this implies U_x y = x • (x • y) = x, U_x y² = (x • y)² = 1, so y = x⁻¹ as in 6.1. The inverse reduces to Q(x)⁻¹(T(x)1 − x) = Q(x)⁻¹ x̄ in quadratic factors and to N(x)⁻¹(x² − T(x)x + S(x)1) = N(x)⁻¹ x# [by the Sharp Expression 4.2(2)] in cubic factors. While anisotropic quadratic forms are a dime a dozen over suitable fields, anisotropic Jordan cubics are much harder to come by. If A is a
9-dimensional central division algebra over Φ (none exist over the reals or complexes, but they do exist over rational or p-adic fields), then A(t) = A ⊗_Φ Φ(t) remains a division algebra over the rational function field, and the scalar t is not a norm. All details will be left, as usual, to Appendix C.
Exercise 6.5 (1) A "norm" on a Jordan algebra is a polynomial function N from J to Φ satisfying N(1) = 1, N(U_x y) = N(x)²N(y) (but seldom N(x • y) = N(x)N(y), another good reason for using the U-operators instead of the L-operators). Use this to show that if x is invertible so is its norm N(x), and N(x⁻¹) = N(x)⁻¹. (2) An "adjoint" is a vector-valued polynomial map on J such that x# satisfies U_x x# = N(x)x, U_x U_{x#} = N(x)²1_J. Show from this that, conversely, if N(x) is invertible then x is invertible, with x⁻¹ = N(x)⁻¹ x# (just as with the adjoint of a matrix).
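A concrete check (my numerics) of Factor Invertibility in a quadratic factor, realized as the spin factor JSpin(Q², ·): for Q(x) invertible the recipe x⁻¹ = Q(x)⁻¹(T(x)1 − x) = Q(x)⁻¹ x̄ does produce an inverse, verified here through the linear conditions x • y = 1 and x² • y = x of Lemma 6.6 below.

```python
from fractions import Fraction as F

def dot(v, w): return sum(a * b for a, b in zip(v, w))

def bullet(x, y):
    a, v = x; b, w = y
    return (a * b + dot(v, w), tuple(a * wi + b * vi for vi, wi in zip(v, w)))

def T(x): return 2 * x[0]
def Q(x): return x[0] ** 2 - dot(x[1], x[1])

def jinv(x):                                    # x^{-1} = Q(x)^{-1}(T(x)1 - x)
    q = Q(x)
    return ((T(x) - x[0]) / q, tuple(-c / q for c in x[1]))

one = (F(1), (F(0), F(0)))
x = (F(3), (F(1), F(2)))                        # Q(x) = 9 - 5 = 4, invertible
y = jinv(x)
assert bullet(x, y) == one                      # (L1): x . y = 1
assert bullet(bullet(x, x), y) == x             # (L2): x^2 . y = x
```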
We now verify that the quadratic definition 6.1 we have given agrees with the original linear definition given by Jacobson (cf. II Chapter 3.2).

Lemma 6.6 (Linear). An element x of an arbitrary unital Jordan algebra has inverse y iff it satisfies the linear inverse conditions

(L1) x • y = 1; (L2) x² • y = x.

proof. The quadratic and linear invertibility conditions (Q) and (L) are strictly between x and y, so all take place in Φ[x, y], and by the Shirshov-Cohn Principle 5.3 we may assume we are in some associative A⁺. The reason (Q) and (L) are equivalent is that they are both equivalent to the associative invertibility conditions (A) of 6.4: we have already seen this in 6.4(1) for (Q); clearly (A) implies (L1-2), ½(xy + yx) = 1, ½(x²y + yx²) = x, and conversely multiplying (L1) xy + yx = 2 on the left and right by x yields x²y + xyx = 2x = xyx + yx², so x²y = yx², hence both equal x by (L2); therefore xy = (yx²)y = y(x²y) = yx, hence both equal 1 by (L1), and we have (A).

Exercise 6.6* Show the quadratic inverse conditions (Q1-2) are equivalent to the old-fashioned linear conditions (L1-2) by direct calculation, without invoking any speciality principle. (1) Show (Q1-2) ⟹ U_x invertible. (2) Show U_x invertible ⟹ (L1-2) for y = U_x⁻¹(x) [by cancelling U_x from U_x(xⁿ • y) = U_x(xⁿ⁻¹) for n = 1, 2 to obtain (Ln)]. (3) Show U_x invertible ⟸ (L1-2) [get (Q1) by definition of the U-operator, and (Q2) from x² • y² = 1 using the linearized Jordan identity (JAx2′), hence x • y² = y].
In associative algebras, the product xy of two invertible elements is again invertible, as is xyx, but not xy + yx (as we already noted, the invertible x = i, y = j in the quaternion division algebra ℍ have xy + yx = 0!). Similarly, in Jordan algebras, the product U_x y of invertible elements remains invertible, but x • y need not be.

Lemma 6.7 (Invertible Products). (1) Invertibility of a U-product amounts to invertibility of each factor: if x, y are elements of a unital Jordan algebra then

U_x y invertible ⟺ x, y invertible, in which case (U_x y)⁻¹ = U_{x⁻¹} y⁻¹.

(2) Invertibility of a power amounts to invertibility of the root:

xⁿ invertible (n ≥ 1) ⟺ x invertible, in which case (xⁿ)⁻¹ = (x⁻¹)ⁿ =: x⁻ⁿ has U_{x⁻ⁿ} = (U_x)⁻ⁿ.

(3) Invertibility of a direct product amounts to invertibility of each component: if J = ∏ Jᵢ is a direct product of unital Jordan algebras, then

x = ∏ xᵢ invertible in J ⟺ each xᵢ invertible in Jᵢ, in which case (∏ xᵢ)⁻¹ = ∏ xᵢ⁻¹.

proof. (1) U_x y invertible ⟺ U_{U_x y} = U_x U_y U_x invertible (by the Invertibility Criterion 6.2(1)) ⟺ U_x, U_y are invertible [⟹ since U_x has left and right inverses, hence is invertible, so U_y is too; ⟸ is clear]; then by the Inverse Recipe 6.2(2) (U_x y)⁻¹ = (U_{U_x y})⁻¹(U_x y) = (U_x U_y U_x)⁻¹(U_x y) [by the Fundamental Formula] = U_x⁻¹ U_y⁻¹ y = U_{x⁻¹} y⁻¹. (2) follows by an easy induction using (1) [n = 0, 1 being trivial]. The second part of (2) also follows immediately from the first without any induction: U_{x⁻ⁿ} := U_{(xⁿ)⁻¹} = (U_{xⁿ})⁻¹ [by the U-Inverse Formula 6.2(2)] = (U_xⁿ)⁻¹ [by Power Associativity 5.6(2)] = U_x⁻ⁿ. The first part can also be established immediately from the Shirshov-Cohn Principle: since this is a matter strictly between x and x⁻¹, it takes place in Φ[x, x⁻¹], so we may assume an associative ambience; but in A⁺ Jordan invertibility reduces to associative invertibility by 6.4, where the first part is well-known. (3) This is clear from the component-wise operations in the direct product and the elemental definition 6.1.

Exercise 6.7A Carry out the "easy induction" mentioned in the proof of 6.7(2).

Exercise 6.7B Define negative powers of an invertible element x by x⁻ⁿ := (x⁻¹)ⁿ. (1) Show that we have power associativity xⁿ • xᵐ = x^{n+m}, (xⁿ)ᵐ = x^{nm}, {xⁿ, xᵐ} = 2x^{n+m}, U_{xⁿ} xᵐ = x^{2n+m}, {xⁿ, xᵐ, xᵖ} = 2x^{n+m+p}, and the operator identities U_{xⁿ} = U_xⁿ, V_{xⁿ,xᵐ} = V_{x^{n+m}}, V_{xⁿ,xᵐ} U_{xᵖ} = U_{x^{n+m+p}, xᵖ} for all integers n, m, p. (2) Show U_{f(x,x⁻¹)} U_{g(x,x⁻¹)} = U_{(f·g)(x,x⁻¹)}, V_{f(x,x⁻¹)} U_{g(x,x⁻¹)} = U_{(f·g)(x,x⁻¹), g(x,x⁻¹)} for any polynomials f, g in the commutative polynomial ring Φ[t, s] in two variables.
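The quaternion example can be made completely concrete (my own sketch, modeling ℍ by 2×2 complex matrices): i and j are invertible but i • j = ½(ij + ji) = 0, while the U-product U_i j = iji stays invertible with (U_i j)⁻¹ = U_{i⁻¹} j⁻¹, just as Lemma 6.7 says.

```python
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
def add(A, B): return [[a + b for a, b in zip(r, s)] for r, s in zip(A, B)]
def neg(A):    return [[-a for a in r] for r in A]
def U(x, y):   return mul(mul(x, y), x)

I = [[1j, 0], [0, -1j]]                 # the quaternion i
J = [[0, 1], [-1, 0]]                   # the quaternion j
one = [[1, 0], [0, 1]]
zero = [[0, 0], [0, 0]]

assert add(mul(I, J), mul(J, I)) == zero                  # i.j = 0: bullet of invertibles
assert mul(I, neg(I)) == one and mul(J, neg(J)) == one    # i^{-1} = -i, j^{-1} = -j
uij = U(I, J)                                             # but U_i j = iji is invertible:
assert mul(uij, U(neg(I), neg(J))) == one                 # (U_i j)^{-1} = U_{i^{-1}} j^{-1}
```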
In the Jacobson Coordinatization a crucial role will be played by "symmetry automorphisms" which permute the rows and columns of a matrix algebra. These are generated by a very important kind of "inner" automorphism. Notice that the usual associative inner automorphism x ↦ uxu⁻¹ will be expressible in Jordan terms as x ↦ uxu = U_u x if u = u⁻¹, i.e. if u² = 1.

Definition 6.8 (Involution). An element u of order 2 in a Jordan algebra J, u² = 1, will be called an involution of J. The corresponding operator U_u will be called the involution on J determined by u.

Lemma 6.9 (Involution). If u is an involution in a unital Jordan algebra J, then the involution U_u is an involutory automorphism of J, U_u² = 1_J.

proof. Notice that an involution is certainly invertible: U_u² = U_{u²} = U_1 = 1_J shows [by the Invertibility Criterion 6.2(1), or Power Invertibility 6.7(2)] that U_u, and hence u, is invertible. In 1.1(2) we noted that φ = U_u will be an automorphism as soon as it preserves squares, and U_u(x²) = U_u U_x(1) = U_u U_x(u²) = U_u U_x U_u(1) = U_{U_u x}(1) = (U_u x)² [by the Fundamental Formula applied to z = 1]. [Alternately, since this is a matter strictly between the elements u, x we can invoke the Shirshov-Cohn Principle to work in an associative algebra A, and here U_u a = uau⁻¹ is just the usual inner associative automorphism determined by the element u, and so certainly preserves Jordan products.]
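A spot check of the Involution Lemma (my own, in M₂(Q)⁺): for u with u² = 1, the map U_u(x) = uxu respects the Jordan bullet and squares to the identity.

```python
from fractions import Fraction as F

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
def add(A, B):  return [[a + b for a, b in zip(r, s)] for r, s in zip(A, B)]
def half(A):    return [[a / 2 for a in r] for r in A]
def bullet(a, b): return half(add(mul(a, b), mul(b, a)))

u = [[F(1), F(0)], [F(0), F(-1)]]               # u^2 = 1: an involution of J
Uu = lambda x: mul(mul(u, x), u)
x = [[F(1), F(2)], [F(3), F(4)]]
y = [[F(0), F(1)], [F(1), F(5)]]
assert mul(u, u) == [[F(1), F(0)], [F(0), F(1)]]
assert Uu(bullet(x, y)) == bullet(Uu(x), Uu(y))   # U_u is an automorphism
assert Uu(Uu(x)) == x                             # and is involutory: U_u^2 = 1_J
```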
2. Linear and Nuclear Inverses

In considering "twistings" of Jordan matrix algebras by isotopy, we will encounter nonassociative algebras A = M₃(D) for certain alternative Φ-algebras D, where the hermitian part H₃(D) forms a Jordan subalgebra of A⁺ even though the entire A⁺ is not itself Jordan. We can understand the isotopic twistings much better from the broad perspective of M₃ than from the restricted perspective of H₃, so it will be worth our while to gather some observations about inverses (and later isotopes) by very tame (namely nuclear) elements. There is a very general notion of inverses pioneered by von Neumann.

Definition 6.10 (Linear Inverses). An element x of an arbitrary unital linear algebra A is linearly invertible if it has a linear inverse y satisfying the linear inverse condition

xy = 1 = yx where x, y operator-commute: the operators L_x, L_y, R_x, R_y all commute.

Such a linear inverse y is unique: if y′ is another linear inverse for x then y′ = L_{y′}(1) = L_{y′}L_x(y) = L_xL_{y′}(y) [by operator-commutativity of x, y′] = L_xR_y(y′) = R_yL_x(y′) [by operator-commutativity of x, y] = R_y(1) = y.

It is easy to see that for commutative Jordan algebras, x, y are linear inverses iff they are Jordan inverses. Certainly x and its Jordan inverse y = x⁻¹ satisfy the linear inverse condition by the L-Inverse Formula 6.2(2); conversely, if x • y = 1 and L_x, L_y commute then also x² • y = L_yL_x(x) = L_xL_y(x) = L_x(1) = x, so x, y satisfy (L1-2) and by the Linear Lemma 6.6 are Jordan inverses.

When we pass beyond Jordan algebras, as long as we stick to nuclear elements the notion of inverse works as smoothly as it does for associative algebras.

Lemma 6.11 (Nuclear Inverse). If a nuclear element u of a unital linear algebra A satisfies uv = 1 = vu for some element v ∈ A, then necessarily v is nuclear too, and we say u has a nuclear inverse. In this case the operators L_u, L_v are inverses and R_u, R_v are inverses, and v := u⁻¹ is uniquely determined as u⁻¹ = L_u⁻¹(1) or as u⁻¹ = R_u⁻¹(1). The operators L_u, R_u, U_u := L_uR_u, V_u := L_u + R_u, L_{u⁻¹}, R_{u⁻¹}, U_{u⁻¹} := L_{u⁻¹}R_{u⁻¹}, V_{u⁻¹} := L_{u⁻¹} + R_{u⁻¹} all commute (though the Jordan operators V_u, V_{u⁻¹} are no longer inverses). In particular, u, v are linear inverses.

proof. First we verify that any such v is nuclear, [v, a, b] = [b, v, a] = [a, b, v] = 0 for all a, b ∈ A. The trick for the first two is to write a = ua′ [note a = 1a = (uv)a = u(va) by nuclearity, so we may take a′ = va]; then [v, ua′, b] = (v·ua′)b − v(ua′·b) [by nuclearity of u] = a′b − (vu)(a′b) = 0 [by vu = 1 and nuclearity of u]; similarly [b, v, ua′] = (bv)(ua′) − b(v·ua′) = (bvu)a′ − ba′ = 0. Dually, for the third write b = b′u, so that [a, b′u, v] = (a·b′u)v − a(b′u·v) = (ab′)(uv) − ab′ = 0 by uv = 1 and nuclearity of u. Now L_u commutes with R_x and satisfies L_uL_x = L_{ux} for all elements x ∈ A by nuclearity [u, A, x] = [u, x, A] = 0, so it commutes with R_v, and L_uL_v = L_{uv} = L_1 = 1_A; dually for v since it is nuclear too, so L_u, L_v are inverse operators. Then L_u(v) = 1 implies v is uniquely determined as v = L_u⁻¹(1). Dually for the right multiplications. Then the left and right multiplications and the Jordan operators live in the subalgebra of operators generated by the commuting L_u, L_u⁻¹, R_u, R_u⁻¹, where everyone commutes.
3. Problems for Chapter 6

Problem 1. Let J be a unital Jordan algebra. (1) Show that u is an involution in the sense of 6.8 iff u = e − e′ for supplementary orthogonal idempotents e, e′, equivalently iff u = 2e − 1 for an idempotent e. [Later we will see that U_u = E₂ − E₁ + E₀ has a nice description as a "Peirce reflection" in terms of the Peirce projections Eᵢ(e).] (2) Show that u → e is a bijection between the set of involutions of J and the set of idempotents of J. (3) Show that a bijective linear transformation T on a unital J is an automorphism iff it satisfies T(1) = 1 and is "structural", U_{T(x)} = TU_xT* for some operator T* and all x, in which case T* = T⁻¹. (4) If J = J₁ ⊞ J₂ is a direct sum of unital Jordan algebras Jᵢ with units eᵢ, and v₁ ∈ J₁ has v₁² = −e₁, show that T := U_{v₁+e₂} is structural with T* = T = T⁻¹, yet T(1) = −e₁ + e₂ ≠ 1, so T is not an automorphism.

Problem 2*. A derivation of a quadratic Jordan algebra is an endomorphism D of J such that D(1) = 0, D(U_x y) = U_{D(x),x}y + U_xD(y). Show that D(x⁻¹) = −U_{x⁻¹}D(x) if x is invertible.

Problem 3*. Recall that a derivation of a linear algebra A is an endomorphism D satisfying the "product rule" D(xy) = D(x)y + xD(y) for all x, y. (1) Show that if x, y are linear inverses, then D(y) = −U_yD(x), where the operator U_y := L_yV_y − L_{y²}, V_y := L_y + R_y (this is the correct U-operator for noncommutative Jordan algebras). (2) Show that a derivation necessarily maps the nucleus into the nucleus, D(Nuc(A)) ⊆ Nuc(A); show that if x, y are nuclear inverses then D(y) = −yD(x)y. (3) Show that if x, y satisfy the linear inverse conditions in a unital Jordan algebra, then D(y) = −U_yD(x).
CHAPTER 7
Isotopes

1. Nuclear Isotopes

If u is an invertible element of an associative algebra, or more generally a nuclear element of an arbitrary linear algebra A having a nuclear inverse as in 6.11, then we can form a new linear algebra by translating the unit from 1 to u. We can also translate an involution to its conjugate by a *-hermitian element.

Definition 7.1 (Nuclear Isotope). If u is a nuclear element with nuclear inverse in a unital linear algebra A, then we obtain a new linear algebra, the nuclear u-isotope A_u, by taking the same Φ-module structure but defining a new product

  A_u:  x •_u y := xuy  (= (xu)y = x(uy) for u ∈ Nuc(A)).

If * is an involution on A such that u* = u, then we obtain a new "involution", the nuclear u-isotope *_u, by

  *_u:  x^{*_u} := ux*u⁻¹.

Proposition 7.2 (Nuclear Isotopy). (1) If u is an invertible nuclear element in A, then the isotopic algebra A_u is just an isomorphic copy of the original algebra: φ: A_u → A is an isomorphism for φ = L_u or R_u. In particular, A_u has unit 1_u = u⁻¹, and is associative iff A is. Nuclear isotopy is an equivalence relation,

  A_1 = A,  (A_u)_v = A_{uvu},  (A_u)_{u⁻²} = A  (u, v ∈ Nuc(A)).

(2) If u is hermitian with respect to an involution *, then * remains an involution on the isotope A_u and the isotopic involution *_u is an involution on the original algebra A,

  (x •_u y)* = y* •_u x*,  (x^{*_u})^{*_u} = x,  (xy)^{*_u} = y^{*_u}x^{*_u},

and the two algebras are *-isomorphic:

  φ = L_u: (A_u, *) → (A, *_u) is an isomorphism.
In particular, the translated involution has translated space of hermitian elements H(A, *_u) = uH(A, *).

proof. (1) That φ = L_u (dually R_u) is an isomorphism follows since φ = L_u is a linear bijection with inverse φ⁻¹ = L_u⁻¹ by the Nuclear Inverse Lemma 6.11, and φ is an algebra homomorphism since φ(x •_u y) = u(x(uy)) = (ux)(uy) [by nuclearity of u] = φ(x)φ(y). In particular, the nuclear isotope has unit φ⁻¹(1) = L_u⁻¹(1) = u⁻¹, which is also easy to verify directly: u⁻¹ •_u y := u⁻¹uy = 1y = y from nuclearity and u⁻¹u = 1, and dually y •_u u⁻¹ = y from uu⁻¹ = 1. The equivalence relations hold because the algebras have the same underlying Φ-module structure and the same products, x •_1 y = xy, x •_u v •_u y = xuvuy, u •_u u⁻² •_u u = 1 (parentheses unnecessary by nuclearity of u, v).

(2) * remains an involution on A_u because it is still linear of period 2, and it is an anti-automorphism of the isotope because (x •_u y)* = ((xu)y)* = y*(u*x*) [since * is an involution on A] = y*(ux*) [since u* = u] = y* •_u x*. That *_u is an involution on A follows from direct calculation: (x^{*_u})^{*_u} = u(ux*u⁻¹)*u⁻¹ = u((u⁻¹)*(x*)*u*)u⁻¹ = (u(u*)⁻¹)x(u*u⁻¹) = x [by nuclearity, u* = u, and x** = x], and (xy)^{*_u} = u(xy)*u⁻¹ = uy*x*u⁻¹ = (uy*)(u⁻¹u)(x*u⁻¹) [by nuclearity] = (uy*u⁻¹)(ux*u⁻¹) [by nuclearity] = y^{*_u}x^{*_u}. We saw that the map φ: (A_u, *) → (A, *_u) is an isomorphism of algebras, and it preserves involutions: φ(x*) = ux*, while (φ(x))^{*_u} = u(ux)*u⁻¹ = u(x*u*)u⁻¹ = u(x*u)u⁻¹ [since u* = u] = (ux*)(uu⁻¹) [since u is nuclear] = ux*, so φ(x*) = (φ(x))^{*_u}. This gives another proof that *_u is an involution on A. The final translation property follows immediately, since a *-isomorphism takes hermitian elements to hermitian elements.

Exercise 7.2A. If u is any element of a commutative linear algebra A, define the linear u-homotope A^(u) to be the linear algebra having the same underlying module as A, but with new product x •^(u) y := (x•u)•y + x•(u•y) − (x•y)•u. (1) Show that the linear homotope is again commutative. If A is unital and u is linearly invertible (cf. 6.10), show that the homotope has new unit u⁻¹, using operator-commutativity of the linear inverse. In this case we speak of the linear u-isotope. (2) If u is nuclear, show that the linear homotope reduces to the nuclear homotope (defined as in 7.1) by x •^(u) y := xuy.
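Proposition 7.2(1) is easy to see in action for matrices. The sketch below is an illustration assuming numpy, not part of the text: it builds the isotope product x •_u y = xuy, checks that u⁻¹ is its unit and that the isotope is still associative, and verifies that φ = L_u is an algebra isomorphism A_u → A.

```python
import numpy as np

n = 3
rng = np.random.default_rng(1)
u = rng.standard_normal((n, n)) + n * np.eye(n)   # diagonally shifted, invertible in practice
uinv = np.linalg.inv(u)

def prod_u(x, y):
    """Nuclear isotope product x ._u y := x u y."""
    return x @ u @ y

x, y, z = (rng.standard_normal((n, n)) for _ in range(3))

# the isotope has unit 1_u = u^{-1} and is still associative
assert np.allclose(prod_u(uinv, x), x) and np.allclose(prod_u(x, uinv), x)
assert np.allclose(prod_u(prod_u(x, y), z), prod_u(x, prod_u(y, z)))

# phi = L_u is an algebra isomorphism A_u -> A: phi(x ._u y) = phi(x) phi(y)
assert np.allclose(u @ prod_u(x, y), (u @ x) @ (u @ y))
```

The homomorphism check is exactly the one-line computation of the proof: u(x(uy)) = (ux)(uy) by associativity (nuclearity of u).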
Exercise 7.2B. (1) Go through the dual argument that for nuclear u the map φ = R_u is an algebra isomorphism A_u → A, and that if u* = u then, just as L_u is a *-isomorphism of A_u with the left u-translate x^{*_u} := ux*u⁻¹, so is R_u with the
right u-translate u⁻¹x*u = x^{*_{u⁻¹}}: R_u is an isomorphism (A_u, *) → (A, *_{u⁻¹}) of *-algebras. (2) If u* = −u is skew, show that L_u: (A_u, −*) → (A, *_u) is an isomorphism of *-algebras, where the negative −* is an involution on A_u (but not on the original A!).
Exercise 7.2C. Let A be any (not necessarily commutative) linear algebra. (1) Show that the associator [x,y,z]⁺ := (x•y)•z − x•(y•z) in A⁺ reduces to associators and commutators ¼([x,y,z] + [x,z,y] + [y,x,z] − [z,y,x] − [y,z,x] − [z,x,y] + [y,[x,z]]) in A itself. (2) Use this to show that if u is a nuclear element of A then [u,y,z]⁺ = 0 ⟺ [y,[u,z]] = 0; [x,u,z]⁺ = 0 ⟺ [u,[x,z]] = 0, and show that x•(u•y) = ½(xuy + yux) ⟺ [x,[y,u]] = 0. (3) Conclude that a nuclear u remains nuclear in A⁺ iff [u,[A,A]] = [A,[u,A]] = 0, in which case the linear homotope (as in Ex. 7.2A above) of the commutative algebra A⁺ is the same as the plus algebra of the nuclear homotope of A: (A⁺)^(u) = (A_u)⁺.
Because it just reproduces the original algebra, the concept of nuclear isotopy plays no direct role in nonassociative theory. However, even in the associative case isotopy is nontrivial for algebras with involution and thus in Jordan algebras isotopy can lead to new algebras.
2. Jordan Isotopes

For Jordan algebras we have a useful notion of isotope for all invertible elements, not just the nuclear ones. Just as with nuclear isotopes, the isotope of a Jordan product consists in sticking a factor u in "the middle" of the Jordan product x • y. However, it is not so simple to decide where the "middle" is! In the (usually non-Jordan) linear algebra A_u, the Jordan product is x •⁺_u y := ½(x •_u y + y •_u x) = ½(xuy + yux), which suggests we should stick the u in the middle of the triple product ½{x, u, y}. As in associative algebras, this process produces a new Jordan algebra even if u is not invertible, leading to the following definition for homotopes of Jordan algebras.

Proposition 7.3 (Jordan Isotope). (1) If u is an arbitrary element of a Jordan algebra J, we obtain a new Jordan algebra, the u-homotope J^(u), defined to have the same underlying Φ-module as J but new bullet product

  x •^(u) y := ½{x, u, y}.

The auxiliary products are then given by

  x^(2,u) = U_x u,    U_x^(u) = U_xU_u,    V_x^(u) = V_{x,u},
  U_{x,z}^(u) = U_{x,z}U_u,    V_{x,y}^(u) = V_{x,U_u y},
  {x, y}^(u) = {x, u, y},    {x, y, z}^(u) = {x, U_u y, z}.
(2) If J is unital, the homotope will again be unital iff the element u is invertible, in which case the new unit is 1^(u) = u⁻¹. In this case we call J^(u) the u-isotope (roughly corresponding to the distinction between isomorphism and homomorphism).

(3) An isotope has exactly the same trivial and invertible elements as the original algebra,

  x trivial in J^(u) ⟺ x trivial in J;    x invertible in J^(u) ⟺ x invertible in J,

in which case x^(−1,u) = U_u⁻¹x⁻¹. In particular,

  J^(u) is a division algebra ⟺ J is a division algebra.

(4) We say two unital Jordan algebras J, J′ are isotopic if one is isomorphic to an isotope of the other. Isotopy is an equivalence relation among unital Jordan algebras, since

  J^(1) = J,    (J^(u))^(v) = J^(U_u v),    (J^(u))^(u⁻²) = J.
proof. (1) We first check the auxiliary products: all follow from Macdonald's Principle and linearization, or they can all be established by direct calculation: x^(2,u) = ½{x,u,x} = U_x u; V_x^(u)(y) = {x,y}^(u) = {x,u,y} = V_{x,u}(y) by linearization or twice the bullet; and 2U_x^(u) := (V_x^(u))² − V^(u)_{x^(2,u)} = V_{x,u}V_{x,u} − V_{U_x u,u} [using the above form of the square and V] = U_xU_{u,u} [by Macdonald's Principle] = 2U_xU_u, hence U_{x,z}^(u) = U_{x,z}U_u and {x,y,z}^(u) = {x,U_u y,z} too by linearization. The new bullet is clearly commutative as in (JAx1). To show that V_x^(u) commutes with V^(u)_{x^(2,u)} as in the Jordan identity (JAx2), it suffices to show that it commutes with 2U_x^(u) = (V_x^(u))² − V^(u)_{x^(2,u)}, and V_x^(u)U_x^(u) = V_{x,u}U_xU_u = U_xV_{u,x}U_u = U_xU_uV_{x,u} using the Commuting Formula 5.7(FII) twice. Thus the homotope is again a Jordan algebra.

(2) If J is unital and the homotope has a unit v, then 1_J = U_v^(u) = U_vU_u implies that U_v is surjective, hence by the Invertibility Criterion 6.2(1) v is invertible, so U_u = U_v⁻¹ is also invertible, and u is invertible by the Invertibility Criterion again. In this case we can see directly that u⁻¹ is the unit for J^(u): by definition of the Jordan triple product, ½{u⁻¹, u, y} = (u⁻¹•u)•y + u⁻¹•(u•y) − u•(u⁻¹•y) = 1•y = y by operator commutativity in the L-Inverse Formula 6.2(2).
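In a special algebra A⁺ the homotope formulas of Proposition 7.3(1) can be tested numerically. The following sketch (assuming numpy; an illustration, not part of the text) computes the homotope bullet x •^(u) y = ½(xuy + yux) for matrices and confirms the auxiliary-product formulas x^(2,u) = U_x u and U_x^(u) = U_xU_u.

```python
import numpy as np

n = 3
rng = np.random.default_rng(2)
u, x, y = (rng.standard_normal((n, n)) for _ in range(3))

def jord_u(a, b):
    """Homotope bullet a .^(u) b = (1/2){a,u,b} = (1/2)(aub + bua) in A+."""
    return (a @ u @ b + b @ u @ a) / 2

def U(a, b):
    """U-operator of A+: U_a b = aba."""
    return a @ b @ a

# square in the homotope: x^(2,u) = U_x u
assert np.allclose(jord_u(x, x), U(x, u))

# U-operator of the homotope: U_x^(u) y = 2 x.(x.y) - x^(2,u).y equals U_x U_u y
lhs = 2 * jord_u(x, jord_u(x, y)) - jord_u(U(x, u), y)
assert np.allclose(lhs, U(x, U(u, y)))
```

Expanding the left-hand side by hand gives exactly xuyux = x(uyu)x, the associative shadow of U_xU_u.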
(3) follows from the fact that triviality or [by the Invertibility Criterion 6.2(1)] invertibility of x is measured by its U-operator, and U_u invertible guarantees that U_x^(u) = U_xU_u vanishes or is invertible iff U_x does, where in the latter case x^(−1,u) = (U_x^(u))⁻¹x = (U_xU_u)⁻¹x = U_u⁻¹U_x⁻¹x = U_u⁻¹x⁻¹ by the Inverse Recipe 6.2(2). Just as in the nuclear case 7.2, the equivalence relations (4) follow easily: ½{x,1,y} = x•y; ½{x,v,y}^(u) = ½{x,U_u v,y} [using (1)]; hence ½{x,u⁻²,y}^(u) = ½{x,U_u u⁻²,y} = ½{x,1,y} [by Inverse Definition 6.1(Q2)] = x•y. From these relations for elemental isotopy it is easy to see that general isotopy is an equivalence relation.

Exercise 7.3A*. Show that the specific form of the inverse in J^(u) is x^(−1,u) = U_u⁻¹x⁻¹ by verifying that this satisfies the Inverse Condition 6.1 in J^(u). Compare this verification with that of the Linear Inverse Condition 6.6.
Exercise 7.3B. (1) Show that for any linear algebra A and any u ∈ A, in A⁺ the Jordan product satisfies 4[(x•u)•y + x•(u•y) − (x•y)•u] = 2(x(uy) + y(ux)) + [y,u,x] + [x,u,y] − [y,x,u] − [x,y,u] + [u,y,x] + [u,x,y]. (2) Define nuclear and Jordan homotopes as in 7.1, 7.3 (omitting unitality and the requirement that u be invertible), and use (1) to prove (A⁺)^(u) = (A_u)⁺: x •^(u) y = ½(xuy + yux) for any nuclear u ∈ A.
We remark that in the rare case that u is nuclear, the Jordan isotope reduces to the nuclear isotope,

  J^(u) = J_u:  x •^(u) y := ½(xuy + yux) = x •_u y  (u ∈ Nuc(J)).

To get an idea of what life was like in pioneer days, before we had a clear understanding of the Jordan triple product, you should go back and try to prove the Isotope Proposition without mentioning triple products, using only the dot product and the definition 7.3(1): x •^(u) y := (x•u)•y + x•(u•y) − (x•y)•u. You will quickly come to appreciate the concept and notation of triple products!
3. Quadratic Factor Isotopes

Our first example of Jordan isotopy is for Jordan algebras of quadratic forms. Here the description is easy: the u-isotope simply replaces the original unit c by u⁻¹, and then performs a cosmetic change of quadratic form from Q to Q(u)Q to insure that the new unit has norm 1.
Example 7.4 (Quadratic Factor Isotopes). (1) The isotopes of the quadratic factor Jord(Q, c) are again quadratic factors, obtained by scaling the quadratic form and shifting the basepoint:

  Jord(Q, c)^(u) = Jord(Q^(u), c^(u))  for
  Q^(u)(x) := Q(x)Q(u),    c^(u) := u⁻¹ = Q(u)⁻¹ū,
  T^(u)(x) = Q(x, ū),    x̄^(u) = Q(u)⁻¹U_ū x̄.

(2) The diagonal isotopes of the reduced spin factor RedSpin(q) are again reduced spin factors, obtained by scaling the quadratic form:

  RedSpin(q)^((γ,0,δ)) ≅ RedSpin(γδq)  via  (α, w, β) → (γα, w, δβ).

proof. For (1), note that (Q^(u), c^(u)) is a quadratic form with basepoint: Q^(u)(c^(u)) := Q(u⁻¹)Q(u) = 1 by Factor Invertibility 6.5. The trace associated with (Q^(u), c^(u)) by the Trace Definition 3.8 is T^(u)(x) := Q^(u)(x, c^(u)) = Q(x, u⁻¹)Q(u) = Q(x, ū) [by Factor Invertibility]. The involution is given by x̄^(u) := T^(u)(x)c^(u) − x = Q(x, ū)Q(u)⁻¹ū − x = Q(u)⁻¹[Q(ū, x)ū − Q(ū)x] [since the involution is isometric on Q, Q(ū) = Q(u)] = Q(u)⁻¹U_ū x̄ [by the Quadratic Factor formula 3.9, U_a b = Q(a, b̄)a − Q(a)b̄]. The square in Jord(Q, c)^(u) is, by 7.3(1), x^(2,u) = U_x u = Q(x, ū)x − Q(x)ū [by Quadratic Factor 3.9] = T^(u)(x)x − Q(x)Q(u)·Q(u)⁻¹ū = T^(u)(x)x − Q^(u)(x)c^(u), which by 3.9 is the square in Jord(Q^(u), c^(u)), so the two algebras have the same product and thus coincide.

For (2), the isotope J^(u) by a diagonal element u = (γ, 0, δ) ∈ RedSpin(q) = (Φ, W, Φ) has squares given by

  (α, w, β)^(2,(γ,0,δ)) = (γα² + δq(w), [γα + δβ]w, δβ² + γq(w))

by Reduced Spin 3.12, while similarly by 3.12 squares in RedSpin(γδq) are given by

  (α̃, w, β̃)² = (α̃² + γδq(w), [α̃ + β̃]w, β̃² + γδq(w)).

Thus the mapping (α, w, β) → (γα, w, δβ) =: (α̃, w, β̃) preserves squares and hence is an isomorphism: α̃² + γδq(w) = γ[γα² + δq(w)], β̃² + γδq(w) = δ[δβ² + γq(w)], [α̃ + β̃]w = [γα + δβ]w.
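Example 7.4 can be made concrete in the quadratic factor M₂(ℝ)⁺ = Jord(det, 1), where the involution is the adjugate x̄ = T(x)1 − x. The sketch below (assuming numpy; an illustration, not part of the text) checks that the shifted basepoint c^(u) = Q(u)⁻¹ū really is u⁻¹ with Q^(u)(c^(u)) = 1, and that the isotope square U_x u matches the quadratic-factor recipe of 3.9.

```python
import numpy as np

rng = np.random.default_rng(3)
u = rng.standard_normal((2, 2)) + 2 * np.eye(2)   # invertible in practice
x = rng.standard_normal((2, 2))

Q = np.linalg.det                                  # norm form on M2(R)+ = Jord(det, 1)
Qb = lambda a, b: Q(a + b) - Q(a) - Q(b)           # bilinearization Q(a, b)
bar = lambda a: np.trace(a) * np.eye(2) - a        # involution: the adjugate a-bar

c_u = bar(u) / Q(u)                                # shifted basepoint c^(u) = Q(u)^{-1} u-bar
assert np.allclose(c_u, np.linalg.inv(u))          # indeed c^(u) = u^{-1}
assert np.isclose(Q(c_u) * Q(u), 1.0)              # Q^(u)(c^(u)) = 1

# isotope square x^(2,u) = U_x u = Q(x, u-bar) x - Q(x) u-bar   [Quadratic Factor 3.9]
assert np.allclose(x @ u @ x, Qb(x, bar(u)) * x - Q(x) * bar(u))
```

The last line is the 2×2 Cayley-Hamilton identity in disguise: xux is a linear combination of x and the adjugate of u.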
4. Cubic Factor Isotopes

Just as the quadratic isotopes are again quadratic factors, so the cubic isotopes are again cubic factors, though the verification takes
longer since the requirements for admissibility of cubic forms are so stringent.

Example 7.5 (Cubic Factor Isotopes). (1) For each sharped cubic form (N, #, c) and element u with invertible norm N(u) ∈ Φ, we obtain an isotopic sharped cubic form (N^(u), #^(u), c^(u)) obtained by scaling the cubic form and shifting the sharp and basepoint:

  N^(u)(x) := N(x)N(u),    c^(u) := u⁻¹ = N(u)⁻¹u#  (by 6.5),
  x^#(u) := N(u)⁻¹U_{u#}x# = N(u)⁻¹(U_u x)#  (by 4.2(3)),
  x #^(u) y = N(u)⁻¹U_{u#}(x # y) = N(u)⁻¹(U_u x # U_u y),
  T^(u)(x, y) := T(x, U_u y),    T^(u)(y) := T(u, y),    S^(u)(x) = T(u#, x#).

If (N, c) is a Jordan cubic, so is its isotope (N^(u), c^(u)).

(2) Moreover, the Jordan algebras constructed from these cubic form isotopes are just the Jordan isotopes of the Jordan algebras constructed from the original forms,

  Jord(N^(u), #^(u), c^(u)) = Jord(N, #, c)^(u),    Jord(N^(u), c^(u)) = Jord(N, c)^(u).

Hence isotopes of the cubic factors Jord(N, #, c), Jord(N, c) are again cubic factors.
proof. Certainly (N^(u), #^(u), c^(u)) is a cubic form with quadratic map and basepoint: N^(u)(c^(u)) = N(u⁻¹)N(u) = 1 [by Factor Invertibility 6.5]. The derivatives and traces associated with the isotopic form are

  N^(u)(x; y) = N(u)T(x#, y),    N^(u)(x; y; c^(u)) = T(u# # x, y),
  T^(u)(y) = T(y, u),    T^(u)(x, y) = T(x, U_u y),    S^(u)(x) = T(x#, u#).

To see these, we compute

  N^(u)(x; y) = N(u)N(x; y) = N(u)T(x#, y)    [by Trace-Sharp 4.1(2)],

hence the quadratic spur form is given by

  S^(u)(x) := N^(u)(x; c^(u)) = N(u)T(x#, N(u)⁻¹u#) = T(x#, u#),

so by linearization

  S^(u)(x, y) = N^(u)(x; y; c^(u)) = T(x # y, u#) = T(x, y # u#)
[by Sharp Symmetry 4.4(1)]. The linear trace is

  T^(u)(y) := N^(u)(c^(u); y) = N(u)N(N(u)⁻¹u#; y) = N(u)⁻¹T((u#)#, y)  [by Trace-Sharp 4.1(2)]
  = N(u)⁻¹T(N(u)u, y)  [by the Adjoint Identity 4.1(2)]  = T(u, y).

Combining these, the bilinear trace is given by

  T^(u)(x, y) := T^(u)(x)T^(u)(y) − N^(u)(x; y; c^(u)) = T(x, u)T(y, u) − T(x, y # u#)
  = T(x, T(u, y)u − u# # y) = T(x, U_u y)

by definition 4.2(1) of the U-operator in Jord(N, #, c). We now verify the three axioms 4.1(2) for a sharped cubic. For the Trace-Sharp Formula N^(u)(x; y) = T^(u)(x^#(u), y), we compute

  N^(u)(x; y) = N(u)T(x#, y) = N(u)T(U_uU_{u⁻¹}x#, y)  [by the U-Inverse Formula 6.2(2)]
  = N(u)T(N(u)⁻²U_uU_{u#}x#, y)  [since u⁻¹ = N(u)⁻¹u#]
  = T(N(u)⁻¹U_{u#}x#, U_u y)  [using U-Symmetry 4.4(3)]
  = T^(u)(N(u)⁻¹U_{u#}x#, y) = T^(u)(x^#(u), y).

For the Adjoint Identity x^#(u)#(u) = N^(u)(x)x, we compute

  x^#(u)#(u) = N(u)⁻¹(U_u N(u)⁻¹U_{u#}x#)# = N(u)⁻¹N(u)⁻²(U_uU_{u#}x#)#
  = N(u)⁻³(N(u)²x#)#  [since u# = N(u)u⁻¹, so U_uU_{u#} = N(u)²U_uU_{u⁻¹} = N(u)²]
  = N(u)⁻³N(u)⁴x## = N(u)N(x)x = N^(u)(x)x.

Finally, the c-Sharp Identity c^(u) #^(u) y = T^(u)(y)c^(u) − y follows by cancelling U_u from

  U_u[c^(u) #^(u) y − T^(u)(y)c^(u) + y]
  = U_u[N(u)⁻¹U_{u#}(N(u)⁻¹u# # y)] − T(u, y)U_u u⁻¹ + U_u y
  = N(u)⁻²U_uU_{u#}(u# # y) − T(u, y)u + U_u y
  = u# # y − T(u, y)u + U_u y = 0

[by the definition 4.2(2) of the U-operator, U_u y = T(u, y)u − u# # y].
Thus the isotope (N^(u), #^(u), c^(u)) is again a sharped cubic form, and so by the Cubic Construction 4.2(1) produces a Jordan algebra Jord(N^(u), #^(u), c^(u)) whose U-operator is

  T^(u)(x, y)x − x^#(u) #^(u) y = T(x, U_u y)x − N(u)⁻¹(U_u N(u)⁻¹U_{u#}x#) # U_u y
  = T(x, U_u y)x − N(u)⁻²(U_uU_{u#}x#) # U_u y
  = T(x, U_u y)x − x# # U_u y  [since u# = N(u)u⁻¹]
  = U_x(U_u y) = U_x^(u) y,

which is the usual U-operator of the Jordan isotope. Since both algebras have the same unit c^(u) = 1^(u) = u⁻¹ and the same U-operators, they also have the same squares x² = U_x 1 and hence the same bullet products, so the two algebras coincide. If (N, c) is a Jordan cubic as in 4.3, i.e. the bilinear form T is nondegenerate, then invertibility of U_u guarantees nondegeneracy of T^(u), and N^(u)(x; y) = T^(u)(x^#(u), y) [by the above] for all y shows by nondegeneracy that the adjoint induced by (N^(u), c^(u)) is just x^#(u).
5. Matrix Isotopes

As we noted in 7.1, nothing much happens when we take isotopes of a full plus-algebra.

Example 7.6 (Full Isotopes). If u is an invertible element of an associative algebra A, then the u-isotope of the Jordan algebra A⁺ is isomorphic to A⁺ again: (A⁺)^(u) = (A_u)⁺ ≅ A⁺.

proof. The first equality follows from x •^(u) y = ½{x, u, y} [by the Jordan Isotope definition 7.3] = ½(xuy + yux) [by speciality] = ½(x •_u y + y •_u x) [by the Nuclear Isotope definition 7.1], and the second isomorphism follows from Nuclear Isotopy 7.2, since any isomorphism A → B is also an isomorphism A⁺ → B⁺ of the associated plus-algebras.

A more substantial change occurs when we take isotopes of Jordan algebras of hermitian type, because we change the nature of the underlying involution: as we have remarked before, H₂(ℝ) is formally real, but H₂(ℝ)^(u) for u = E₁₂ + E₂₁ has nilpotent elements (E₁₁)^(2,u) = 0, so isotopy is in general a broader concept than isomorphism.

Example 7.7 (Hermitian). The Jordan isotopes of H(A, *) for an associative *-algebra (A, *) are obtained either by changing the algebra structure but keeping the involution, or equivalently by keeping the
algebra structure but changing the involution: for invertible u = u*,

  H(A, *)^(u) = H(A_u, *) ≅ H(A, *_u) = uH(A, *)  under φ = L_u.

proof. The first equality follows since both are precisely the Jordan subalgebra of hermitian elements of (A⁺)^(u) = (A_u)⁺ [by Full Isotopy 7.6] under the original involution [which we noted in Nuclear Isotopy 7.2(2) is also an involution of the nuclear isotope A_u], and the second isomorphism follows from Nuclear Isotopy 7.2(2).

The isotopes of H(Mₙ(D), *) we will need are nuclear isotopes coming from diagonal matrices Γ.

Example 7.8 (Twisted Matrix Isotopes). (1) Let Mₙ(D) be the linear algebra of n×n matrices over a unital linear *-algebra (D, −) under the standard conjugate-transpose involution X* = (X̄)ᵗʳ. Let Γ = diag{γ₁, ..., γₙ} (γᵢ = γ̄ᵢ invertible in H(D, −) ∩ Nuc(D)) be an invertible hermitian nuclear diagonal matrix. Then the twisted matrix algebra Hₙ(D, Γ) := H(Mₙ(D), *_Γ) consists of all hermitian matrices of Mₙ(D) under the shifted involution X^{*_Γ} := ΓX*Γ⁻¹, and is linearly spanned by the elements (in Jacobson Γ-box notation):

  δ[ii] := γᵢδEᵢᵢ  (δ ∈ H(D, −)),    d[ij] := γᵢdEᵢⱼ + γⱼd̄Eⱼᵢ = d̄[ji]  (d ∈ D)

(where the Eᵢⱼ (1 ≤ i, j ≤ n) are the usual n×n matrix units).

(2) The multiplication rules for distinct indices i, j, k, ℓ in the isotope consist of 4 Basic Brace Products

  (i)  δ[ii]² = δγᵢδ[ii];
  (ii)  d[ij]² = dγⱼd̄[ii] + d̄γᵢd[jj];
  (iii)  {δ[ii], d[ik]} = δγᵢd[ik];  {d[ik], δ[kk]} = dγₖδ[ik];
  (iv)  {d[ij], b[jk]} = dγⱼb[ik];

and a Basic Brace Orthogonality rule

  (v)  {d[ij], b[kℓ]} = 0  (if {i, j} ∩ {k, ℓ} = ∅).

(3) We have an isomorphism of this twisted algebra with the isotope of the untwisted algebra:

  L_Γ: Hₙ(D)^(Γ) → Hₙ(D, Γ)  via  d[ij] → d[ij].

proof. Since it is easy to verify that Γ is nuclear in Mₙ(D), (3) follows from 7.7. That Hₙ(D, Γ) is spanned by the elements in (1) is easily verified directly, using (γᵢdEᵢⱼ)^{*_Γ} = Γ(d̄γᵢEⱼᵢ)Γ⁻¹ = γⱼd̄Eⱼᵢ; it also follows from the isomorphism (3) and L_Γ(d[kℓ]) = Γ(dEₖℓ + d̄Eℓₖ) = γₖdEₖℓ + γℓd̄Eℓₖ = d[kℓ]. The rules (i)-(v) follow immediately from the
definitions (remembering that we need only check that the Eᵢₖ-entries of two hermitian elements a[ik] coincide, since then their Eₖᵢ-entries will automatically coincide too).

We will almost never work directly from the multiplication table of a twisted matrix algebra: we will do any necessary calculations in our coordinatization theorems 12.7, 17.2 in the untwisted matrix algebra, then hop over unpleasant twisted calculations with the help of the magic wand of isotopy.

Exercise 7.8A. Show that Nuc(Mₙ(D)) = Mₙ(Nuc(D)) for any linear algebra D.
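The remark before Example 7.7, that the formally real algebra H₂(ℝ) acquires nilpotent elements in an isotope, is a two-line computation; the sketch below (assuming numpy; an illustration, not part of the text) carries it out using the isotope square x^(2,u) = U_x u = xux.

```python
import numpy as np

E11 = np.array([[1.0, 0.0], [0.0, 0.0]])
u = np.array([[0.0, 1.0], [1.0, 0.0]])   # u = E12 + E21: symmetric and invertible (u @ u = 1)

# in H2(R)^(u) the square is x^(2,u) = U_x u = x u x
assert np.allclose(E11 @ u @ E11, 0)     # E11^(2,u) = 0: E11 is nilpotent in the isotope
assert np.allclose(E11 @ E11, E11)       # though E11^2 = E11 != 0 in H2(R) itself
```

So the isotope cannot be formally real, and isotopy is genuinely broader than isomorphism for hermitian algebras.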
Exercise 7.8B. Show that Hₙ(D, Γ) can also be coordinatized another way: it is spanned by β[ii] := βEᵢᵢ (β ∈ γᵢH(D)) and d[ij] := dEᵢⱼ + γⱼd̄γᵢ⁻¹Eⱼᵢ (d ∈ D). This coordinatization has the advantage that the parameter in d[ij] precisely labels the ij-entry of the matrix, the coefficient of Eᵢⱼ, but it has the drawback that Hₙ(D, Γ) is spanned by the a[ij] for all a ∈ D (as expected), but by the β[ii] only for β in the γᵢ-translate γᵢH(D) (so the diagonal entries are coordinatized by different submodules). Verify these assertions, show that we have the index-reversing relation d[ij] = γⱼd̄γᵢ⁻¹[ji], work out the multiplication rules (Basic Products and Basic Orthogonality), and give an isomorphism Hₙ(D)^(Γ) → Hₙ(D, Γ) in these terms.
Exercise 7.8C. Work out the formulas for the 3 Basic U-Products, U-Orthogonality, 3 Basic Triple Products, and Basic Triple Orthogonality in a twisted matrix algebra Hₙ(D, Γ) corresponding to those in the untwisted Hermitian Matrix Example 3.7.
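For the simplest case D = ℝ (trivial involution, so d̄ = d) the Basic Brace Products of Example 7.8(2) can be verified directly inside M₃(ℝ). The following sketch is illustrative only (numpy assumed); with a noncommutative D the order of the factors matters, but for D = ℝ the boxes below realize the rules exactly.

```python
import numpy as np

g = np.array([2.0, 3.0, 5.0])                          # gamma_1, gamma_2, gamma_3
E = lambda i, j: np.outer(np.eye(3)[i], np.eye(3)[j])  # matrix unit E_ij

def box(d, i, j):
    """Gamma-box d[ij] = g_i d E_ij + g_j d-bar E_ji (d-bar = d since D = R)."""
    return g[i] * d * E(i, j) + g[j] * d * E(j, i) if i != j else g[i] * d * E(i, i)

brace = lambda x, y: x @ y + y @ x                     # {x, y} = xy + yx

d, b, delta = 1.7, -0.4, 0.9
G = np.diag(g)

# boxes are hermitian for the shifted involution X -> Gamma X^tr Gamma^{-1}
X = box(d, 0, 1)
assert np.allclose(G @ X.T @ np.linalg.inv(G), X)

# Basic Brace Products (iii) and (iv) of 7.8(2), with indices (1,2,3) written as (0,1,2)
assert np.allclose(brace(box(delta, 0, 0), box(d, 0, 1)), box(delta * g[0] * d, 0, 1))
assert np.allclose(brace(box(d, 0, 1), box(b, 1, 2)), box(d * g[1] * b, 0, 2))
```

Note how the "inner" γ of the shared index survives in the product, exactly as rules (iii) and (iv) predict.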
To make the Γ-box notation a bit clearer, consider the isotope H₃(D)^(Γ) for an alternative algebra D with nuclear involution, and diagonal Γ = diag(γ₁, γ₂, γ₃) for γᵢ ∈ Nuc(D). The general element X = α[11] + β[22] + δ[33] + a[12] + b[13] + c[23] has its box labels arranged in the matrix pattern

  ( α  a  b )
  ( ā  β  c )
  ( b̄  c̄  δ )

and as an actual matrix looks like

  X = ( γ₁α  γ₁a  γ₁b )
      ( γ₂ā  γ₂β  γ₂c )
      ( γ₃b̄  γ₃c̄  γ₃δ ).

The alternate notation β[ii] := βEᵢᵢ, a[ij] := aEᵢⱼ + γⱼāγᵢ⁻¹Eⱼᵢ of Ex. 7.8B loses the symmetry in i, j and requires separate coordinate spaces for each diagonal entry, but in these terms the description of a general X = α₁[11] + β₂[22] + δ₃[33] + a₃[12] + b₂[13] + c₁[23] (a₃, b₂, c₁ ∈ D; α₁ ∈ γ₁H(D), β₂ ∈ γ₂H(D), δ₃ ∈ γ₃H(D)) looks above the diagonal like the standard H₃(D):
  X = ( α₁          a₃          b₂ )
      ( γ₂ā₃γ₁⁻¹    β₂          c₁ )
      ( γ₃b̄₂γ₁⁻¹    γ₃c̄₁γ₂⁻¹    δ₃ ).
When (D, −) is an alternative *-algebra whose involution is central with nondegenerate trace, the algebra H₃(D) is a cubic factor with norm N, so by Cubic Isotope 7.5 we know that any diagonal isotope H₃(D)^(Γ) is again a cubic factor with Jordan product induced by the norm N^(Γ) = N(Γ)N = γ₁γ₂γ₃N and unit Γ⁻¹. Then the linear bijection L_Γ: H₃(D)^(Γ) → H₃(D, Γ) transports the structure of a cubic factor over to the twisted 3×3 matrices.

Theorem 7.9 (Freudenthal Isotope). If D is a unital alternative algebra with scalar involution over Φ, and Γ = diag(γ₁, γ₂, γ₃) a diagonal matrix for invertible elements γᵢ of Φ, then the twisted Jordan matrix algebra H₃(D, Γ) is a cubic factor determined by the sharped cubic form

  N(x) := α₁α₂α₃ − Σᵢ γⱼγₖ αᵢ n(aᵢ) + γ₁γ₂γ₃ t(a₁a₂a₃),
  c := e₁ + e₂ + e₃,
  T(x, y) := Σᵢ (αᵢβᵢ + γⱼγₖ t(aᵢb̄ᵢ)),
  x# := Σᵢ (αⱼαₖ − γⱼγₖ n(aᵢ))eᵢ + Σᵢ (γᵢāⱼāₖ − αᵢaᵢ)[jk],

where x = Σᵢ αᵢeᵢ + Σᵢ aᵢ[jk], y = Σᵢ βᵢeᵢ + Σᵢ bᵢ[jk] for αᵢ, βᵢ ∈ Φ, aᵢ, bᵢ ∈ D, with box notation eᵢ := Eᵢᵢ = γᵢ⁻¹[ii], d[jk] := γⱼdEⱼₖ + γₖd̄Eₖⱼ, and (ijk) always a cyclic permutation of (123).

proof. We could repeat the proof of the Freudenthal Construction 4.5, throwing in gammas at appropriate places, to show that the indicated cubic form is sharped and the resulting Jordan algebra has the structure of a Jordan matrix algebra. But we have already been there, done that, so we will instead use isotopy. This has the added advantage of giving us insight into why the norm form takes the particular form it does (with the particular distribution of gammas). We know that φ = L_Γ is an isomorphism J := H₃(D)^(Γ) → H₃(D, Γ) =: J′, and by 4.5 and 7.5 the isotope J is a cubic factor with norm N(Γ)N(X), so the isomorphic copy J′ is also a cubic factor with sharped cubic form N′(φ(X)) := N^(Γ)(X) = N(Γ)N(X), trace bilinear form T′(φ(X), φ(Y)) := T^(Γ)(X, Y) = T(U_Γ X, Y), and sharp mapping φ(X)^#′ := φ(X^#(Γ)) = N(Γ)⁻¹φ(U_{Γ#}X#).
The crucial thing to note is that x = Σᵢ αᵢeᵢ + Σᵢ aᵢ[jk] ∈ H₃(D, Γ) is not quite in Gamma-box notation: since eᵢ = γᵢ⁻¹[ii], its Gamma-box expression is x = Σᵢ (αᵢγᵢ⁻¹)[ii] + Σᵢ aᵢ[jk], and thus x is the image x = φ(X) for X = Σᵢ (αᵢγᵢ⁻¹)[ii] + Σᵢ aᵢ[jk] ∈ H₃(D); the presence of these γᵢ⁻¹ down the diagonal is what causes the norm, trace, and sharp to change form. Since all the αᵢ, γᵢ are scalars in Φ, the norm becomes

  N′(x) = N(Γ)N(X) = γ₁γ₂γ₃ N(X)
  = γ₁γ₂γ₃ [ (α₁γ₁⁻¹)(α₂γ₂⁻¹)(α₃γ₃⁻¹) − Σₖ (αₖγₖ⁻¹)n(aₖ) + t(a₁a₂a₃) ]
  = α₁α₂α₃ − Σₖ γᵢγⱼ αₖ n(aₖ) + γ₁γ₂γ₃ t(a₁a₂a₃).

For the trace we use U_{Σᵢ δᵢeᵢ} a[jk] = δⱼδₖ a[jk] to see

  T′(x, y) = T^(Γ)(X, Y) = T(U_Γ X, Y)
  = Σᵢ [ γᵢ²(αᵢγᵢ⁻¹)(βᵢγᵢ⁻¹) + t((γⱼγₖaᵢ)b̄ᵢ) ]
  = Σᵢ [ αᵢβᵢ + γⱼγₖ t(aᵢb̄ᵢ) ].

Finally, for the sharp we use Γ# = Σᵢ γⱼγₖ eᵢ and U_{Γ#} a[jk] = (γₖγᵢ)(γᵢγⱼ)a[jk] to see

  x^#′ = φ(X^#(Γ)) = φ(N(Γ)⁻¹U_{Γ#}X#)
  = (γ₁γ₂γ₃)⁻¹ φ( Σᵢ (γⱼγₖ)²[(αⱼγⱼ⁻¹)(αₖγₖ⁻¹) − n(aᵢ)][ii] + Σᵢ (γₖγⱼγᵢ²)[āⱼāₖ − (αᵢγᵢ⁻¹)aᵢ][jk] )
  = Σᵢ [αⱼαₖ − γⱼγₖ n(aᵢ)] γᵢ⁻¹[ii] + Σᵢ [γᵢāⱼāₖ − αᵢaᵢ][jk]
  = Σᵢ (αⱼαₖ − γⱼγₖ n(aᵢ)) eᵢ + Σᵢ (γᵢāⱼāₖ − αᵢaᵢ)[jk].

Thus the norm, trace, and sharp naturally take the form given in the theorem.

Exercise 7.9. Even though we have already been there, done that, do it again: repeat the proof of the Freudenthal Construction 4.5, throwing in gammas at appropriate places, to show directly that the indicated cubic form is sharped and the resulting Jordan algebra has the structure of a Jordan matrix algebra.
6. Problems for Chapter 7

Problem 1. For an invertible element u of a unital associative algebra A, let û denote the inner automorphism or conjugation by u, x → uxu⁻¹. (1) Show that the inner automorphisms form a subgroup of the automorphism group of A, with û ∘ v̂ = (uv)^, and that û = 1_A is the identity iff u lies in the center of A. If A has an involution *, show that * ∘ û ∘ * = ((u*)⁻¹)^. (2) Show from these that *_u :=
û ∘ * is an anti-automorphism of A, which has period 2 iff u* = λu for some element λ of the center of A (in which case λ is necessarily unitary, λλ* = 1). In particular, conclude that if u is hermitian or skew (u* = ±u), then the (nuclear) u-isotope *_u is again an involution of A. (3) If u is skew, show that the *_u-hermitian elements are precisely all us for *-skew elements s. [The fact that isotopes can switch skew elements of one involution into hermitian elements of another is crucial to understanding the "symplectic involution" on 2n×2n matrices (the adjoint with respect to a nondegenerate skew-hermitian bilinear form on a 2n-dimensional space, equivalently the conjugate-transpose map on n×n matrices over a split quaternion algebra).]

Problem 2*. (1) Define an invertible linear transformation T on a unital Jordan algebra J to be structural if there exists an invertible "adjoint" transformation T* on J satisfying U_{T(x)} = TU_xT* for all x ∈ J. Show that the adjoint is uniquely determined by T* = T⁻¹U_{T(1)}. Show that the set of all structural transformations forms a group, the structure group Str(J) of J, containing all invertible scalar multiplications λ1_J (for λ invertible in Φ) and all invertible operators U_x (for x invertible in J). Show that the structure group contains the automorphism group as precisely the transformations which do not shift the unit [T(1) = 1, hence T is "orthogonal": T* = T⁻¹]. Give an easy example to show that this inclusion is strict: there exist orthogonal T which are not automorphisms. (2) Show that any invertible structural T induces an isomorphism J^(u) → J^(v) for v = (T*)⁻¹(u) on any isotope J^(u). (3) Show conversely that if T: J^(u) → J^(v) is an isomorphism on some isotope, then T is structural with T* = U_uT⁻¹U_v⁻¹ and T*(v) = u [note T(u⁻¹) = v⁻¹]. (4) Conclude that the group of all structural transformations is precisely the "autotopy group" of J (the isotopies of J with itself), and two isotopes J^(u), J^(v) are isomorphic iff the elements u, v are conjugate under the structure group, so the isomorphism classes of isotopes correspond to the conjugacy classes of invertible elements under the structure group. (5) Show that if u has an invertible square root in J (u = v² for some invertible v) then J^(u) ≅ J. (6) Conclude that if J is finite-dimensional over an algebraically closed field, every isotope of J is isomorphic to J. (7) If every invertible element has a square root, show that the structure group consists of all T = U_xA for an invertible x and automorphism A, so in a sense the structure group "essentially" just contains the automorphisms and U-operators.
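A quick numerical check of the starting point of Problem 2*, that each U_x is structural with adjoint U_x itself, is the Fundamental Formula U_{U_x y} = U_xU_yU_x, here tested in a special algebra A⁺ (sketch assuming numpy; an illustration, not part of the text).

```python
import numpy as np

rng = np.random.default_rng(4)
x, y, z = (rng.standard_normal((3, 3)) for _ in range(3))

U = lambda a, b: a @ b @ a   # U_a b = aba in A+

# Fundamental Formula: U_{U_x y} = U_x U_y U_x, so T = U_x is structural with T* = U_x
assert np.allclose(U(U(x, y), z), U(x, U(y, U(x, z))))
```

In associative terms both sides are the product xyxzxyx, which makes the formula transparent in A⁺.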
Question 1*. Define the left-u-isotope _uA of a linear algebra A by an invertible nuclear element u to be the same space but with new product x •_u y := uxy. Of course, when u is central the left isotope coincides with the middle isotope, and therefore is perfectly associative. In general, the factor u acts on the left as a formal symbol akin to a parenthesis, and the left isotope is close to being a "free" nonassociative algebra. What a difference the placement of u makes! Can you find necessary and sufficient, or at least useful, conditions on u for _uA to be associative? To have a unit?
Question 2*. In infinite-dimensional situations it is often unnatural to assume a unit, yet useful to have isotopes. We can introduce a notion of generalized Jordan isotope J^(ũ), where ũ is invertible in some larger algebra J̃ ⊇ J, as long as this induces a product back on J: {J, ũ, J} ⊆ J (e.g. if J̃ is the unital hull of J, or more generally as long as J ◁ J̃). How much of the Jordan Isotope Proposition 7.3 holds true in this general context? Can you give an example of a generalized isotope which is not a standard isotope?
Second Phase :
The Tale of Two Idempotents
It was the best of times, it was the worst of times; the spring of hope, the winter of despair; we had everything before us, we had nothing before us; idempotents in the pot, but only two to go around.
In this phase we will investigate the structure of Jordan algebras having two supplementary orthogonal idempotents. These come in two basic flavors: the reduced spin factors determined by a quadratic form, and the hermitian algebras of 2×2 hermitian matrices with entries from a symmetrically-generated associative *-algebra. We develop the necessary machinery for Peirce decompositions with respect to a single idempotent (the driving force of the classical structure theory), and obtain two general coordinatization theorems: Jordan algebras with Peirce frames of size 2 satisfying the Spin Peirce relation are reduced spin factors, and those satisfying the Hermitian Peirce condition are hermitian algebras.
CHAPTER 8
Peirce Decomposition

In our analysis of Jordan algebras with capacity we are, by the very nature of capacity, forced to deal with sums of orthogonal idempotents and their associated Peirce decompositions. For a while we can get by with the simplest possible case, that of a single idempotent. The fact that the unit element 1 decomposes into two orthogonal idempotents e, e' makes the identity operator decompose into the sum of three orthogonal idempotents (projections), and decompositions of the identity into orthogonal projections precisely correspond to decompositions of the underlying module into a direct sum of submodules. In associative algebras, the decomposition 1_A = L_1 = L_{e+e'} = L_e + L_{e'} of the left multiplication operator leads to a one-sided Peirce decomposition A = eA ⊕ e'A into two subspaces, and the decomposition 1_A = L_1 R_1 = L_e R_e + L_e R_{e'} + L_{e'} R_e + L_{e'} R_{e'} leads to a two-sided Peirce decomposition into four subspaces A = A_11 ⊕ A_10 ⊕ A_01 ⊕ A_00. In Jordan algebras the decomposition of the quadratic U-operator 1_J = U_1 = U_{e+e'} = U_e + U_{e,e'} + U_{e'} leads to a decomposition into three spaces J = J_2 ⊕ J_1 ⊕ J_0 (where J_2 corresponds to A_11, J_0 corresponds to A_00, but J_1 corresponds to the sum A_10 ⊕ A_01).
1. Peirce Decompositions

We recall the most basic properties of idempotent elements.

Definition 8.1 (Idempotent). Recall from Powers 5.5 that an element e of a Jordan algebra J is an idempotent if e² = e (hence eⁿ = e for all n ≥ 1). The complementary idempotent is e' := 1 − e ∈ Ĵ. A proper idempotent is an idempotent e ≠ 1, 0; an algebra is reduced if it contains a proper idempotent. Two idempotents e, f are orthogonal if e • f = 0.

The complement e' = 1 − e lives only in the unital hull Ĵ. It is again an idempotent, orthogonal to e in Ĵ: (e')² = (1 − e)² = 1 − 2e + e² = 1 − 2e + e = 1 − e = e', and e • e' = e • (1 − e) = e − e² = 0. Clearly e and its complement are supplementary in the sense that they sum to 1: e + e' = 1 ∈ Ĵ.
Theorem 8.2 (Peirce Decomposition). (1) The Peirce projections E_i = E_i(e) determined by e are the multiplication operators

E_2 = U_e,   E_1 = U_{e,e'},   E_0 = U_{e'}.

They form a supplementary family of projection operators on J: ∑_i E_i = 1_J, E_i E_j = δ_ij E_i, and therefore the space J breaks up as the direct sum of the ranges: we have the Peirce Decomposition of J into Peirce subspaces

J = J_2 ⊕ J_1 ⊕ J_0   for J_i := E_i(J).

(2) Peirce decompositions are inherited by ideals or by subalgebras containing e: we have Peirce Inheritance

K = K_2 ⊕ K_1 ⊕ K_0   for K_i = E_i(K) = K ∩ J_i   (K ◁ J or e ∈ K ≤ J).
proof. (1) Since the E_i are multiplication operators they leave the ideal J ◁ Ĵ invariant, therefore if they are supplementary orthogonal projections on the unital hull they will restrict to such on the original algebra. Because of this, it suffices to work in the unital hull, so we assume from the start that J is unital. Trivially the E_i are supplementary operators since e, e' are supplementary elements: E_2 + E_1 + E_0 = U_e + U_{e,e'} + U_{e'} = U_{e+e'} = U_1 = 1_J by the basic property of the unit element. Now e, e' ∈ Φ[e], and by the Operator Power-Associativity Theorem 5.6(2) we know U_p U_q = U_{pq}, hence by linearization U_p U_{q,r} = U_{pq,pr} for any p, q, r ∈ Φ[e]; in particular U_p U_q = 0 and U_p U_{q,r} = U_{q,r} U_p = 0 whenever p • q = 0, which immediately yields orthogonality of E_2 = U_e, E_1 = U_{e,e'}, E_0 = U_{e'} from the orthogonality e • e' = 0. It also implies that E_2, E_0 are projections, since idempotent elements p • p = p always produce idempotent U-operators, U_p U_p = U_{pp} = U_p by the Fundamental Formula (or Operator Power-Associativity again). The complement E_1 = 1 − (E_2 + E_0) must then be a projection too. [We could also check this directly: further linearization of Operator Power-Associativity shows U_{p,q} U_{z,w} = U_{pz,qw} + U_{pw,qz}, so U_{e,e'} U_{e,e'} = U_{e•e, e'•e'} + U_{e•e', e'•e} = U_{e,e'} + 0.]
Thus we have decomposed the identity operator into supplementary orthogonal projections, and this immediately decomposes the underlying Φ-submodule into the Peirce subspaces J_i = E_i(J).¹

¹Despite our earlier promise to reserve the term space for vector spaces, and speak instead of submodules, in this case we cannot resist the ubiquitous usage Peirce subspace.
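Theorem 8.2(1) can be sanity-checked numerically in a special algebra. The sketch below is our own illustration, not from the text: it works in the full algebra J = M_3(R)⁺, where U_x y = xyx and U_{x,z} y = xyz + zyx, and the particular idempotent e is an arbitrary choice.

```python
import numpy as np

# Peirce projections E2 = U_e, E1 = U_{e,e'}, E0 = U_{e'} in J = M_3(R)^+.
np.random.seed(0)
n = 3
e = np.diag([1.0, 1.0, 0.0])           # an idempotent: e @ e == e
ec = np.eye(n) - e                     # complementary idempotent e' = 1 - e

def U(x, y):                           # quadratic operator U_x y = xyx
    return x @ y @ x

def U2(x, z, y):                       # linearization U_{x,z} y = xyz + zyx
    return x @ y @ z + z @ y @ x

def E2(x): return U(e, x)
def E1(x): return U2(e, ec, x)         # acts as x -> exe' + e'xe
def E0(x): return U(ec, x)

x = np.random.rand(n, n)
assert np.allclose(E2(x) + E1(x) + E0(x), x)        # supplementary: sum is 1_J
for P, Q in [(E2, E1), (E2, E0), (E1, E0)]:
    assert np.allclose(P(Q(x)), 0)                  # orthogonality E_i E_j = 0
    assert np.allclose(P(P(x)), P(x))               # projections E_i^2 = E_i
assert np.allclose(E0(E0(x)), E0(x))
```

Here the three ranges E_2(J), E_1(J), E_0(J) are exactly the upper-left 2×2 block, the two off-diagonal blocks, and the lower-right 1×1 block.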
(2) Since the Peirce projections E_i are built out of multiplications by 1 and e, they map K into itself: an ideal is invariant under any multiplications from J, and a subalgebra is invariant under multiplications from itself. Thus the Peirce projections induce by restriction a decomposition 1_K = E_2|_K + E_1|_K + E_0|_K of the identity operator on K, and so a decomposition K = K_2 ⊕ K_1 ⊕ K_0 for K_i := E_i(K) ⊆ K ∩ E_i(J) = K ∩ J_i; and we have equality since if x ∈ K ∩ J_i then x = E_i(x) ∈ E_i(K).

Exercise 8.2*. Write an expression for the Peirce projections in terms of the operator L_e, and see once more why the U-formulation is preferable.
We will usually denote the Peirce projections by E_i; if there is any danger of confusion (e.g. if there are two different idempotents running around), then we will use the more explicit notation E_i(e) to indicate which idempotent gives rise to the Peirce decomposition.

Historically, the Peirce decomposition in Jordan algebras was introduced by A.A. Albert as the decomposition of the space into eigenspaces for the left-multiplication operator L_e, which satisfied the equation (t − 1)(t − ½)(t − 0) = 0. This approach breaks down over rings without ½, and in any event is messy: the projections have a simple expression in terms of U-operators, but a complicated one in terms of L_e. The most elegant description of the Peirce spaces is due to Loos (it works in all characteristics, and for Jordan triples and pairs as well), using the important concept of the Peircer (pronounced purser, not piercer!).

Definition 8.3 (Peircer). For each λ ∈ Φ we set e(λ) := e' + λe ∈ Ĵ and define the Peircer E(λ) to be its U-operator,

E(λ) := U_{e(λ)} = E_0 + λE_1 + λ²E_2 = ∑_{i=0}^{2} λ^i E_i.
Here e(1) = e' + e = 1 and e(λ) • e(μ) = e(λμ) in the special subalgebra Φ[e] = Φe ⊕ Φe' of Ĵ, so from Operator Power-Associativity 5.6(2) the Peircer determines a homomorphism of multiplicative monoids from Φ into End_Φ(Ĵ): we have a 1-dimensional Peircer torus of operators on Ĵ,

E(1) = E_0 + E_1 + E_2 = 1_Ĵ,   E(λμ) = E(λ)E(μ).

Exercise 8.3*. (1) Show that the Peircer E(−1) is the Peirce Reflection around the diagonal Peirce spaces, E(−1) = 1 on the diagonal J_2 + J_0 and E(−1) = −1 on the off-diagonal J_1, and is an involutory map. (2) Show that the element u(e) = e(−1) = e' − e = 1 − 2e is an involution (u(e)² = 1), and hence the operator E(−1) = U_{u(e)} is an involutory automorphism (cf. the Involution Lemma 6.9). (3)
In Problem 1 of Chapter 6 you showed (we hope) that there is a 1-1 correspondence between involutions u² = 1 in a unital Jordan algebra and idempotents e² = e, given by u(e) := 1 − 2e, e(u) := ½(u + 1). Show that e, e' determine the same Peirce Reflection, U_{u(e)} = U_{u(e')}; explain why this does not contradict the bijection between idempotents and involutions.
We really want to make use of the Peircer for indeterminates. Let Ω := Φ[t] be the algebra of polynomials in the indeterminate t, with the natural operations. In particular, Ω is a free Φ-module with basis of all monomials t^i (i ≥ 0), so J ⊆ J' := Ĵ_Ω = Ĵ[t], which consists of all formal polynomials ∑_{i=0}^{n} t^i x_i in t with coefficients x_i from Ĵ, with the natural operations.

Theorem 8.4 (Eigenspace Laws). The Peirce subspace J_i relative to an idempotent e is the intersection of J with the eigenspace in J[t] of the Peircer E(t) with eigenvalue t^i for the indeterminate t ∈ Φ[t], and is also the eigenspace of the left multiplication V_e (resp. L_e) with eigenvalue i (resp. ½i): we have the Peircer, V-, and L-Eigenspace Laws

J_i = {x ∈ J | E(t)x = t^i x} = {x ∈ J | V_e x = ix} = {x ∈ J | L_e x = ½ix}.

proof. For the Peircer Eigenspace Law, the characterization of J_i = E_i(J) as the eigenspace of E(t) = ∑_{i=0}^{2} t^i E_i with eigenvalue t^i follows immediately from the fact that t is an indeterminate: if x = x_2 ⊕ x_1 ⊕ x_0 ∈ J, x_i = E_i(x), then in J' = Ĵ[t] we have E(t)(x) = t²x_2 + tx_1 + x_0, which coincides with t^i x iff x_i = x, x_j = 0 for j ≠ i, by identifying coefficients of like powers of t on both sides. The second Eigenspace Law, characterizing J_i as the i-eigenspace of V_e = U_{e,1} = U_{e,1−e} + U_{e,e} = E_1 + 2E_2 = ∑_{i=0}^{2} iE_i for eigenvalue i, follows from the fact that distinct values i, j ∈ {2, 1, 0} have j − i invertible when ½ ∈ Φ: V_e(x) = 2x_2 + x_1 + 0x_0 coincides with ix iff ∑_j (j − i)x_j = 0, which by directness of the Peirce decomposition means (j − i)x_j = 0 for j ≠ i, i.e. x_j = 0 for j ≠ i, i.e. x = x_i. Multiplying by ½ gives the third Eigenspace Law for L_e = ½V_e.
The final formulation above in terms of L_e explains why, in the older literature (back in the L-ish days), the Peirce spaces were denoted by J_1, J_{1/2}, J_0 instead of by J_2, J_1, J_0. Note that the Peircer has the advantage that the eigenvalues t^i = t², t, 1 are independent over any scalar ring Φ, whereas independence of the eigenvalues i = 2, 1, 0 for
V_e (even more, for L_e) requires injectivity of all j − i, equivalently injectivity of 2. The Peircers also lead quickly and elegantly to the Peirce decomposition itself. As with a single variable, for Ω := Φ[s,t] the algebra of polynomials in two independent indeterminates, we have J ⊆ J' := Ĵ_Ω = Ĵ[s,t] consisting of all formal polynomials in s, t with coefficients from Ĵ. The Peircer Torus yields ∑_{i,j} s^i t^j E_i E_j = E(s)E(t) = E(st) = ∑_i (st)^i E_i, so identifying coefficients of s^i t^j on both sides beautifully reveals the Peirce projection condition E_i E_j = δ_ij E_i.
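Both the Peircer torus E(λ)E(μ) = E(λμ) and the V_e-eigenvalue law can be watched in action for scalar values of the parameter. The following check, again in our own test bed M_3(R)⁺ with a hand-picked idempotent, is a sketch and not part of the text:

```python
import numpy as np

n = 3
e = np.diag([1.0, 1.0, 0.0])
ec = np.eye(n) - e                      # e' = 1 - e

def peircer(lam):                       # E(lam) = U_{e(lam)}, e(lam) = e' + lam*e
    w = ec + lam * e
    return lambda x: w @ x @ w

np.random.seed(1)
x = np.random.rand(n, n)
lam, mu = 2.0, -3.0
# torus property E(lam) E(mu) = E(lam * mu):
assert np.allclose(peircer(lam)(peircer(mu)(x)), peircer(lam * mu)(x))

# Peirce components as eigenvectors of V_e with eigenvalues 2, 1, 0:
def V(a, y): return a @ y + y @ a
x2, x1, x0 = e @ x @ e, e @ x @ ec + ec @ x @ e, ec @ x @ ec
assert np.allclose(V(e, x2), 2 * x2)
assert np.allclose(V(e, x1), 1 * x1)
assert np.allclose(V(e, x0), 0 * x0)
```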
2. Peirce Multiplication Rules

The underlying philosophy of Peirce decompositions is that they are just big overgrown matrix decompositions, behaving like the decomposition of an associative matrix algebra M_2(D) into Φ-submodules DE_ij which multiply like the matrix units E_ij themselves. The only point to be carefully kept in mind is that in the Jordan case, as in hermitian matrices, the off-diagonal spaces DE_12, DE_21 cannot be separated: they are lumped into a single space J_1 = D[12] = {dE_12 + d̄E_21 | d ∈ D}. This symmetry in indices is important to keep in mind when talking about spaces whose indices are "linked" or "connected".

The Peircer is even more effective in establishing the multiplicative properties of Peirce spaces, because it is itself a U-operator and hence interacts smoothly with its fellow U-operators by the Fundamental Formula. In contrast, the L-operator does not interact smoothly with U- or L-operators: there is no pithy formula for U_{e•x} or L_{e•x}.

We will need an indeterminate t and its inverse t⁻¹, requiring us to extend our horizons from polynomials to Laurent polynomials. Recall that Laurent polynomials in t are just polynomials in t and its inverse: Ω := Φ[t, t⁻¹] consists of all finite sums ∑_{i=−N}^{M} α_i t^i, with the natural operations. In particular, Ω is a free Φ-module with basis of all monomials t^j (j ∈ Z) over Φ, so J ⊆ J' := Ĵ_Ω = Ĵ[t, t⁻¹], which consists of all finite formal Laurent polynomials ∑_{i=−N}^{M} t^i x_i in t with coefficients x_i from Ĵ, with the natural operations.

Theorem 8.5 (Peirce Multiplication). Let e be an idempotent in a Jordan algebra J. We have the following rules for products of Peirce spaces J_k := J_k(e). We let k, ℓ, m represent general indices, while i represents a diagonal index i = 2, 0 with complementary diagonal index j = 2 − i. We agree that J_k = 0 if k ≠ 0, 1, 2. For the bilinear, quadratic, and triple products we have the 3 Peirce Brace Rules

J_i² ⊆ J_i,   J_1² ⊆ J_2 + J_0,   {J_i, J_1} ⊆ J_1,
the Peirce U-Product Rule

U_{J_k} J_ℓ ⊆ J_{2k−ℓ},

and the Peirce Triple Rule

{J_k, J_ℓ, J_m} ⊆ J_{k−ℓ+m}.

We have the Peirce Orthogonality Rules

{J_i, J_j} = {J_i, J_j, J} = U_{J_i}(J_j) = U_{J_i}(J_1) = 0.

In particular, the diagonal Peirce spaces J_2, J_0 are inner ideals which annihilate each other whenever they get the chance.

proof. These formulas come easily from the fact that we had the foresight to include a scalar t⁻¹ in Ω. Since E(t) = U_{e(t)} we can use the Fundamental Formula to see E(t)(U_{x_i} y_j) = E(t) U_{x_i} E(t) E(t⁻¹) y_j [by the Peircer Torus 8.3, since E(t) and E(t⁻¹) cancel each other] = U_{E(t)x_i} E(t⁻¹) y_j = U_{t^i x_i} t^{−j} y_j = t^{2i−j} U_{x_i} y_j, so U_{x_i} y_j lies in the eigenspace J_{2i−j}. In particular we have the U-Orthogonality: U_{x_2} y_0 ∈ J_4 = 0, U_{x_2} y_1 ∈ J_3 = 0, U_{x_0} y_1 ∈ J_{−1} = 0, U_{x_0} y_2 ∈ J_{−2} = 0. Similarly, the linearized Fundamental Formula shows E(t){x_i, y_j, z_k} = E(t) U_{x_i,z_k} E(t) E(t⁻¹) y_j = {E(t)x_i, E(t⁻¹)y_j, E(t)z_k} = {t^i x_i, t^{−j} y_j, t^k z_k} = t^{i−j+k} {x_i, y_j, z_k}, so {x_i, y_j, z_k} lies in the eigenspace J_{i−j+k}. In particular, we have most of Triple-Orthogonality: {J_2, J_0, J_i} ⊆ J_{2−0+i} = 0 if i ≠ 0, and dually {J_0, J_2, J_j} ⊆ J_{0−2+j} = 0 if j ≠ 2. But the triple orthogonality relations that the products {J_2, J_0, J_0} ⊆ J_2, {J_0, J_2, J_2} ⊆ J_0 actually vanish do not follow quite so spontaneously. By the symmetry J_i(e) = J_j(e') in Ĵ it suffices to prove {J_2, J_0, J_0} = 0. We first turn this into a brace, {J_2, J_0, J_0} = −{J_0, J_2, J_0} + {{J_2, J_0}, J_0} ⊆ 0 + {{J_2, J_0}, J} using the Triple Switch Formula 5.7(FIV), then we finish it off using the brace orthogonality {J_2, J_0} = 0, from {J_2, J_0} = {J_2, J_0, 1} = {J_2, J_0, e + e'} = {J_2, J_0, e'} ⊆ J_2 and at the same time {J_2, J_0} = {e + e', J_2, J_0} = {e, J_2, J_0} ⊆ J_0.

The brace rules follow by applying the triple-product results in Ĵ, where the unit is 1 = e_2 + e_0 with e_2 = e ∈ J_2, e_0 = e' = 1 − e ∈ J_0: J_i² = U_{J_i}(e_i + e_j) ⊆ J_i + 0 = J_i; {J_i, J_1} = {J_i, e_i + e_j, J_1} ⊆ {J_i, J_i, J_1} + 0 ⊆ J_1; J_1² = U_{J_1}(e_2 + e_0) ⊆ J_0 + J_2.
Exercise 8.5. Give an alternate proof of the difficult Peirce relation {J_i, J_j, J} = 0 for complementary diagonal indices. Use {x, x, y} = {x², y} to show that {e_i, e_i, e_j} = 0, then use a linearization of the Triple Shift Formula 5.7(FIII) to show that V_{e_i,e_j} = V_{e_i,{e_i,e_i,e_j}} = 0, V_{y_j,e_i} = V_{e_j,{y_j,e_j,e_i}} = 0, V_{x_i,y_j} = V_{e_i,{y_j,e_i,x_i}} = 0, and {x_i, y_j, z} = V_{x_i,y_j}(z) = 0.
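The index bookkeeping U_{J_k}J_ℓ ⊆ J_{2k−ℓ} (with J_m = 0 for m outside {0, 1, 2}) and the hard orthogonality {J_2, J_0, J_0} = 0 can both be spot-checked in a special algebra. The test bed M_3(R)⁺ and the idempotent below are our own choices, not the text's:

```python
import numpy as np

np.random.seed(2)
n = 3
e = np.diag([1.0, 1.0, 0.0]); ec = np.eye(n) - e
P = {2: lambda x: e @ x @ e,                         # projection onto J_2
     1: lambda x: e @ x @ ec + ec @ x @ e,          # onto J_1
     0: lambda x: ec @ x @ ec}                      # onto J_0

def U(x, y): return x @ y @ x                       # U_x y = xyx in A^+
def triple(x, y, z): return x @ y @ z + z @ y @ x   # {x, y, z}

def lies_in(x, m):                  # x in J_m, where J_m = 0 unless m = 0, 1, 2
    return np.allclose(P[m](x), x) if m in P else np.allclose(x, 0)

for k in (2, 1, 0):
    for l in (2, 1, 0):
        xk = P[k](np.random.rand(n, n))
        yl = P[l](np.random.rand(n, n))
        assert lies_in(U(xk, yl), 2 * k - l)        # U-Product Rule

x2 = P[2](np.random.rand(n, n))
y0 = P[0](np.random.rand(n, n)); z0 = P[0](np.random.rand(n, n))
assert np.allclose(triple(x2, y0, z0), 0)           # {J_2, J_0, J_0} = 0
```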
3. Basic Examples of Peirce Decompositions

We now give examples of idempotents and their Peirce decompositions, beginning with Jordan algebras obtained from an associative algebra A. A full algebra J = A⁺ has exactly the same Peirce decompositions as the associative algebra A; in particular the Peirce space J_1 breaks up into two pieces A_10, A_01. This is atypical for Jordan algebras; the archetypal example is hermitian matrices, where the space A_10 is inextricably tied to the space A_01 through the involution.

Example 8.6 (Associative Peirce Decomposition). If e is an idempotent in an associative algebra A, it is well known that if we set e' = 1 − e ∈ Â, then any element x ∈ A has a decomposition x = 1x1 = (e + e')x(e + e') = exe + exe' + e'xe + e'xe' = exe + (ex − exe) + (xe − exe) + (x − ex − xe + exe). If we set e_1 = e, e_0 = e' this becomes x = e_1xe_1 + e_1xe_0 + e_0xe_1 + e_0xe_0, and we obtain an associative Peirce decomposition

A = A_11 ⊕ A_10 ⊕ A_01 ⊕ A_00   (A_ij := e_i A e_j)

relative to e. Since (e_i A e_j)(e_k A e_ℓ) = δ_jk e_i A e_j A e_ℓ ⊆ e_i A e_ℓ, these satisfy the associative Peirce multiplication rules A_ij A_kℓ ⊆ δ_jk A_iℓ.

Every associative algebra can be turned into a Jordan algebra by the plus functor, but in the process information is lost. The fact that we can only recover the product xy + yx, not xy itself, causes the off-diagonal Peirce spaces A_10, A_01 to get confused and join together.

Example 8.7 (Full Peirce Decomposition). If e is an idempotent in an associative algebra A, the associated full Jordan algebra J = A⁺ has Jordan Peirce decomposition

A⁺ = A_2⁺ ⊕ A_1⁺ ⊕ A_0⁺   for A_2⁺ = A_11, A_1⁺ = A_10 ⊕ A_01, A_0⁺ = A_00.

Thus the Jordan 1-eigenspace is a mixture of the associative "left" eigenspace A_10 and "right" eigenspace A_01. The Peirce Multiplication Rules 8.5 are precisely the rules for multiplying matrices, and one should always think of Peirce rules as matrix decompositions and multiplications.

If A = M_2(D) is the algebra of 2×2 matrices over an associative algebra D and e = E_11 the first matrix unit, the Peirce spaces A_ij relative to e are just the matrices having all entries 0 except for the ij-entry:

A_11 = (D 0; 0 0),   A_10 = (0 D; 0 0),   A_01 = (0 0; D 0),   A_00 = (0 0; 0 D).
When the associative algebra has an involution, we can reach inside the full algebra and single out the Jordan subalgebra of hermitian elements. Again the off-diagonal Peirce subspace is a mixture of associative ones, but here there are no regrets: there is no longing for J_1 to split into A_10 and A_01, since these spaces do not contribute to the symmetric part; instead we have J_1 = {x_10 + x_10* | x_10 ∈ A_10}, consisting precisely of the symmetric traces of elements of A_10 (or dually of A_01 = (A_10)*).

Example 8.8 (Hermitian Peirce Decomposition). If the associative algebra A has an involution *, and e = e* is a symmetric idempotent, then the associative Peirce spaces satisfy A_ij* = A_ji, the Jordan algebra H(A,*) contains the idempotent e, and its Peirce decomposition is precisely that induced from the Full Decomposition 8.7:

H(A,*) = J_2 ⊕ J_1 ⊕ J_0 for J_2 = H(A_11,*), J_0 = H(A_00,*), J_1 = H(A_10 ⊕ A_01,*) = {x_10 + x_10* | x_10 ∈ A_10}.

Our final examples of idempotents and Peirce decompositions concern Jordan algebras of quadratic and cubic forms.

Theorem 8.9 (Quadratic Reduction Criterion). In the Jordan algebra Jord(Q,c) = JSpin(M,σ) of the Quadratic Factor and Spin Examples 3.9, 3.11, we say an element e has Trace 1 Type if

T(e) = 1, Q(e) = 0 (hence ē = 1 − e).

This is equivalent to

e = ½(1 + v) for v ∈ V with Q(v) = −1 (σ(v,v) = 1).

Over any ring of scalars such an element is a proper idempotent.² When Φ is a field, these are the only proper idempotents,

e ∈ Jord(Q,c) proper ⟺ e has Trace 1 Type (Φ a field),

and such proper idempotents exist iff Q is isotropic:

Jord(Q,c) is reduced for nondegenerate Q over a field ⟺ Q is isotropic (Q(x) = 0 for some x ≠ 0).

proof. Clearly e of Trace 1 Type is idempotent by the Degree 2 Identity in Quadratic Factor 3.9, e² = T(e)e − Q(e)1 = e, and differs from 1, 0 since T(1) = 2, T(0) = 0. Here T(e) = 1 iff e = ½(1 + v) where T(v) = 0, i.e. v ∈ V, in which case Q(e) = 0 iff 0 = Q(2e) = Q(1 + v) = 1 + Q(v), i.e. iff Q(v) = −1 (σ(v,v) = 1).

²Notice we could also characterize the improper idempotents e = 1 as Trace 2 Type (T(e) = 2, Q(e) = 1, ē = e) and e = 0 as Trace 0 Type (T(e) = Q(e) = 0, ē = e), but we will refrain from doing so.
To see that these are the only proper idempotents when Φ is a field: an idempotent e is proper ⟺ e ≠ 1, 0 ⟺ e ∉ Φ1 (because 0, 1 are the only idempotents in a field), so from the Degree 2 Identity 3.9, 0 = e² − T(e)e + Q(e)1 = [1 − T(e)]e + Q(e)1 ⟹ 1 − T(e) = 0, Q(e) = 0. Certainly if such idempotents exist then Q is isotropic, Q(e) = 0 for e ≠ 0. The surprising thing is that the annihilation by a nondegenerate Q of some nonzero element automatically creates an idempotent! If Q(x) = 0, T(x) = α ≠ 0, then e = α⁻¹x has Q(e) = 0, T(e) = 1 and hence is a proper idempotent of Trace 1 Type. If T(x) = 0 we claim there must exist an element x' = U_y x with nonzero trace: otherwise T(U_y x) = 0 for all y, so linearizing y → y, 1 would give 0 = T({y, x}) = T(T(y)x + T(x)y − Q(x,y)1) (by the definition 3.9 of the bullet) = 2T(x)T(y) − 2Q(x,y) = 0 − 2Q(x,y), and Q(x,y) = 0 = Q(x) for x ≠ 0 and all y would contradict nondegeneracy of Q. Such an element x' is necessarily nonzero and isotropic by Jordan Composition 3.9, Q(x') = Q(U_y x) = Q(y)²Q(x) = 0. Thus in either case, isotropy spawns an idempotent.
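The construction of a Trace 1 idempotent can be made concrete. The sketch below assumes the spin-factor product (α1 + w) • (β1 + u) = (αβ + σ(w,u))1 + αu + βw of the Spin Example 3.11, realized on M = R³ with σ the ordinary dot product; the specific vector v is our own choice of illustration:

```python
import numpy as np

# Elements of JSpin(R^3, dot) are pairs (alpha, w) representing alpha*1 + w.
def bullet(x, y):
    (a, w), (b, u) = x, y
    return (a * b + w @ u, a * u + b * w)

T = lambda x: 2 * x[0]                 # trace T(alpha*1 + w) = 2*alpha
Q = lambda x: x[0] ** 2 - x[1] @ x[1]  # norm  Q(alpha*1 + w) = alpha^2 - sigma(w,w)

v = np.array([0.6, 0.8, 0.0])          # sigma(v, v) = 1, i.e. Q(v) = -1
e = (0.5, 0.5 * v)                     # e = (1 + v)/2

ee = bullet(e, e)
assert np.isclose(ee[0], e[0]) and np.allclose(ee[1], e[1])   # e is idempotent
assert T(e) == 1.0 and np.isclose(Q(e), 0.0)                  # ... of Trace 1 Type
```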
Exercise 8.9. The improper idempotents in Jord(Q,c) can't be characterized solely in terms of their trace and norm: show that T(x) = 2, Q(x) = 1 iff x is unipotent (x = 1 + z for nilpotent z), and T(x) = Q(x) = 0 iff x is nilpotent, and note that a perfectly respectable Jord(Q,c) (e.g. Q nondegenerate over a field) may well have nilpotent elements.
We saw in Reduced Spin 3.12 a way to build a reduced quadratic factor out of a quadratic form q by adjoining a pair of supplementary orthogonal idempotents e_1, e_2.

Example 8.10 (Reduced Spin Peirce Decomposition). The Peirce decomposition of the reduced spin factor RedSpin(q) = Φe_1 ⊕ W ⊕ Φe_2 of a quadratic form, with respect to the created idempotents e = e_i, e' = e_{3−i}, is just the natural decomposition

RedSpin(q) = J_2 ⊕ J_1 ⊕ J_0 for J_2 = Φe, J_1 = W, J_0 = Φe'.
The situation for idempotents in Cubic Factors is more complicated; the proper idempotents come in two types, Trace 1 and Trace 2.

Theorem 8.11 (Cubic Reduction Criterion). In the cubic factor Jord(N,#,c) we say an element e has Trace 1 Type or Trace 2 Type if

Trace 1: T(e) = 1, e# = 0 (hence S(e) = N(e) = 0);
Trace 2: T(e) = 2, e# = e' (hence S(e) = 1, N(e) = 0, (e')# = 0).
Over any ring of scalars such elements are proper idempotents. An idempotent is of Trace 1 Type iff its complement is of Trace 2 Type. When Φ is a field, these are the only proper idempotents,

e ∈ Jord(N,#,c) proper ⟺ e has Trace 1 or 2 Type (Φ a field),

and proper idempotents exist iff N is isotropic:

Jord(N,#,c) is reduced for nondegenerate N over a field ⟺ N is isotropic (N(x) = 0 for some x ≠ 0).

proof. Recall the basic Adjoint and c-Sharp Identities 4.1(2), Degree 3 Identity 4.2(1), Sharp Expression 4.2(2), and Spur Formula 4.4(5):

x## = N(x)x;  1#x = T(x)1 − x;  x³ − T(x)x² + S(x)x − N(x)1 = 0;  x# = x² − T(x)x + S(x)1;  S(x) = T(x#).

From the Sharp Expression we see x² − x = x# + [T(x) − 1]x − S(x)1 will vanish (and x will be idempotent) if either x# = 0, T(x) = 1, S(x) = 0 (as in Trace 1) or x# = 1 − x, T(x) = 2, S(x) = 1 (as in Trace 2), so e of Trace 1 or 2 Type is idempotent over any ring of scalars. It will be proper, differing from 1, 0, since T(1) = 3, T(0) = 0 ≠ 1, 2 as long as the characteristic isn't 2. Notice that in Trace 1 Type the first two conditions imply the last two, since S(e) = T(e#) = 0 [by the Spur Formula] and N(e) = N(e)T(e) = T(N(e)e) = T(e##) = 0 [by the Adjoint Identity]. Similarly, in Trace 2 Type the first two conditions imply the final three: S(e) = T(e#) = T(1 − e) = 3 − 2 = 1; (e')# = (1 − e)# = 1# − 1#e + e# = 1 − (T(e)1 − e) + e' [by the c-Sharp Identity] = 1 − (2·1 − e) + (1 − e) = 0; so 2N(e) = N(e)T(e) = T(N(e)e) = T(e##) = T((e')#) = 0.

Thus in all cases it is the trace and sharp conditions which are basic. If e has Trace 1 Type then its complement e' = 1 − e has Trace 2 Type (and vice versa). For the trace this follows from T(1 − e) = 3 − T(e). For the sharp, e of Trace 2 Type has (e')# = 0 by assumption, and e of Trace 1 Type has (e')# = (1 − e)# = 1# − 1#e + e# = 1 − (T(e)1 − e) + 0 [by the c-Sharp Identity] = 1 − (1 − e) = e = (e')'.

To see these are the only proper idempotents when Φ is a field: for idempotent e the Degree 3 Equation becomes 0 = e − T(e)e + S(e)e − N(e)1 = [1 − T(e) + S(e)]e − N(e)1, and e ≠ 1, 0 ⟺ e ∉ Φ1 ⟹ 1 − T(e) + S(e) = N(e) = 0, in which case the Sharp Expression says e# = e − T(e)e + S(e)1 = [1 − T(e)]e + S(e)1 = [−S(e)]e + S(e)1 = S(e)e'. If S(e) = 0 then 1 − T(e) = 0, e# = 0, and we have Trace 1 Type; if S(e) ≠ 0 then S(e) = T(e#) [by the Spur Formula] = S(e)T(e') =
S(e)[3 − T(e)], so S(e)[T(e) − 2] = 0 implies T(e) = 2 by cancelling S(e); then S(e) = T(e) − 1 = 1 and e# = e', as in Trace 2 Type.

Exercise 8.11A. Show that some restriction on the scalars is necessary in 8.9, 8.11 to be able to conclude that proper idempotents have Trace 1 or 2 Type: if ε ∈ Φ is a proper idempotent then e = ε1 is a proper idempotent in Jord(Q,c) with T(e) = 2ε, Q(e) = ε, ē = e, and similarly a proper idempotent in Jord(N,#,c) with T(e) = 3ε, S(e) = 3ε, N(e) = ε, e# = ε1.
Exercise 8.11B. Characterize the improper idempotents in Jord(N,#,c) as Trace 3 Type (T(e) = 3, S(e) = 3, N(e) = 1, e# = e) and Trace 0 Type (T(e) = S(e) = N(e) = 0, e# = e). (1) Show that an element x has T(x) = 3, S(x) = 3, N(x) = 1, x# = x iff x = 1 + z for nilpotent z with T(z) = S(z) = N(z) = 0, z# = 2z, then use the Adjoint Identity N(z)z = z## to conclude z = 0, x = 1. (2) Show that an element x has T(x) = S(x) = N(x) = 0, x# = x iff x = z nilpotent with N(z) = 0, z# = z, then use the Adjoint Identity again to conclude z = 0.
To see explicitly what the Peirce decompositions look like, we examine the reduced case of 3×3 hermitian matrices over a composition algebra.

Example 8.12 (H_3(C) Peirce Decomposition). In a cubic factor J = H_3(C) for a composition algebra C of dimension 2^n (n = 0, 1, 2, 3) over a general ring of scalars Φ (with norm form n(x)1 = xx̄ = x̄x ∈ Φ1), the idempotent e = 1[11] is of Trace 1 Type, its complement e' = 1[22] + 1[33] is of Trace 2 Type, and the Peirce decomposition relative to e is

J_2 = Φe = all (α 0 0; 0 0 0; 0 0 0) for α ∈ Φ,

J_1 = C[12] ⊕ C[13] = all (0 a b; ā 0 0; b̄ 0 0) for a, b ∈ C,

J_0 = Φ[22] ⊕ C[23] ⊕ Φ[33] = all (0 0 0; 0 β c; 0 c̄ γ) for β, γ ∈ Φ, c ∈ C.

Thus J_2 is just a copy of the scalars, of dimension 1; J_1 has dimension 2^{n+1}; and J_0 = H_2(C) = RedSpin(q) relative to the quadratic form q((0 c; c̄ 0)) = n(c), of dimension 2^n + 2.
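For the smallest case C = R (n = 0), H_3(R) is just real symmetric 3×3 matrices, with T the trace, N the determinant, and # the classical adjugate, so the two idempotent types of Theorem 8.11 can be verified directly. The cofactor implementation of the adjugate below is our own sketch:

```python
import numpy as np

def sharp(A):                          # classical adjugate via 2x2 cofactors
    C = np.zeros((3, 3))
    for i in range(3):
        for j in range(3):
            M = np.delete(np.delete(A, i, 0), j, 1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(M)
    return C.T                         # satisfies A @ sharp(A) = det(A) * I

e = np.diag([1.0, 0.0, 0.0])           # e = 1[11]
ec = np.eye(3) - e                     # e' = 1[22] + 1[33]

assert np.trace(e) == 1 and np.allclose(sharp(e), 0)    # Trace 1: T(e)=1, e#=0
assert np.trace(ec) == 2 and np.allclose(sharp(ec), e)  # Trace 2: T(e')=2, (e')# = e
```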
4. Peirce Identity Principle

We can use the Shirshov-Cohn Principle to provide a powerful method for verifying facts about Peirce decompositions in general Jordan algebras.

Theorem 8.13 (Peirce Principle). Any Peirce elements x_2, x_1, x_0 from distinct Peirce spaces J_i(e) relative to an idempotent e lie in a special subalgebra B of J containing e. Therefore, all Jordan behavior of distinct Peirce spaces in associative algebras persists in all
Jordan algebras: if f_α, f are Jordan polynomials, a set of relations f_α(e, x_2, x_1, x_0) = 0 will imply a relation f(e, x_2, x_1, x_0) = 0 among Peirce elements x_i in all Jordan algebras if it does in all associative algebras, i.e. if the relations f_α(e, a_11, a_10 + a_01, a_00) = 0 imply f(e, a_11, a_10 + a_01, a_00) = 0 for all Peirce elements a_ij ∈ A_ij relative to idempotents e in all unital associative algebras A. In particular, we have the Peirce Identity Principle: any Peirce identity f(e, x_2, x_1, x_0) = 0 for a Jordan polynomial f will hold for all Peirce elements x_i in all Jordan algebras if f(e, a_11, a_10 + a_01, a_00) = 0 holds for all Peirce elements a_ij ∈ A_ij in all associative algebras A.
proof. It suffices to establish this in the unital hull, so we may assume from the start that J is unital. Then everything takes place in the unital subalgebra B = Φ[e, x] of J for x = x_2 + x_1 + x_0, since each x_i can be recovered from x by the Peirce projections (x_2 = E_2(x) = U_e x, x_1 = E_1(x) = U_{e,1−e} x, x_0 = E_0(x) = U_{1−e} x). By the Shirshov-Cohn Theorem B = H(A,*) is special, so if a Jordan polynomial f(e, x_2, x_1, x_0) vanishes in the special algebra B, it will vanish in J. The Full Jordan Peirce decomposition of B is related to the associative decomposition 8.6 of A by 8.7, so we can write x_2 = a_11, x_1 = a_10 + a_01, x_0 = a_00; therefore f(e, a_11, a_10 + a_01, a_00) = 0 in A implies f(e, x_2, x_1, x_0) = 0 in B and hence in J.
Let us see how powerful the Principle is by re-deriving the Peirce identities. The associative Peirce decomposition 8.6 has A_ij A_kℓ = δ_jk A_iℓ with orthogonal Peirce projections C_ij(a) = e_i a e_j, so the Jordan Peirce projections E_2 = C_11, E_1 = C_10 + C_01, E_0 = C_00 are also supplementary orthogonal projections as in 8.2. We can recover all the Peirce Bullet Rules 8.5: x_2² ∈ A_11 A_11 ⊆ A_11 = J_2; x_2 • x_0 ∈ A_11 A_00 + A_00 A_11 = 0; x_2 • x_1 ∈ A_11(A_10 + A_01) + (A_10 + A_01)A_11 ⊆ A_10 + A_01 = J_1; x_1² ∈ (A_10 + A_01)² ⊆ A_10 A_01 + A_01 A_10 ⊆ A_11 + A_00 = J_2 + J_0; and dually x_0² ∈ J_0, x_0 • x_1 ∈ J_1. In Peirce Orthogonality 8.5 we get U_{x_2}(y_0 + z_1) ⊆ A_11(A_00 + A_10 + A_01)A_11 = 0 and {x_2, y_0, z_1} ⊆ A_11 A_00 A + A A_00 A_11 = 0 [but NOT {J_2, J_0, J_0} = 0 directly]. For the non-orthogonal products in 8.5 we get U_{x_1} y_2 ∈ (A_10 + A_01)A_11(A_10 + A_01) ⊆ A_00 = J_0, but we DO NOT get U_{x_k} y_k ∈ J_k directly, since two factors come from the same Peirce space (however, we do get U_{x_k} x_k ∈ J_k, so by linearizing, U_{x_k} y_k ≡ {x_k², y_k} ≡ 0 mod J_k from our bilinear knowledge); for trilinear products we can use Triple Switch to reduce to U_{x_k} y_j if one index is repeated, otherwise to {x_2, y_0, z_1} = 0 if all 3 indices are distinct.
We will see an example in 13.11 where the restriction to distinct Peirce spaces is important (though it seems hard to give an example with only a single idempotent).
5. Problems for Chapter 8

Problem 1. Find the Peirce decompositions of M_2(D)⁺ and H_2(D) determined by the hermitian idempotent e = ½(1 d; d̄ 1), where d ∈ D is unitary, dd̄ = d̄d = 1.

Problem 2. While we have hyped the peircer U-operators as a tool for Peirce decomposition, we will confess here in this out-of-the-way corner that we can get along perfectly well in our linear situation with the L- or V-operators. (Part of our bias is due to the fact that peircers work well over arbitrary scalars, so in quadratic Jordan algebras, but also in Jordan triples, where there is no V_e.) Derive the Peirce U-Product and Triple Rules 8.5 (from which the Brace Rules follow) for the Peirce spaces J_i(e) as the i-eigenspaces of the operator V_e: use the Fundamental Lie Formula (FV) and the 5-Linear (FVe)' to show U_{J_i}(J_j) ⊆ J_{2i−j}, {J_i, J_j, J_k} ⊆ J_{i−j+k}.
CHAPTER 9
Off-Diagonal Rules

In this chapter we fix a Peirce decomposition J = J_2 ⊕ J_1 ⊕ J_0 with respect to an idempotent e, and gather detailed information about the product on the Peirce space J_1. From this point on the behavior of the spaces J_2, J_0 begins to diverge from that of the Peirce space J_1. Motivated by the archetypal example of e = E_11 in hermitian 2×2 matrices, where by the Hermitian Example 8.8 J_2 = H(D)[11], J_0 = H(D)[22] are represented by diagonal matrices and J_1 = D[12] by off-diagonal matrices, we will call J_i (i = 2, 0) the diagonal Peirce spaces, and J_1 the off-diagonal Peirce space. The diagonal Peirce spaces are more "scalar-like" subalgebras, and act nimbly on the more lumbering, vector-like off-diagonal space. To make the distinction between the two species visually clearer, we will henceforth use letters a_i, b_i, c_i and subscripts i = 2, 0, j = 2 − i to denote diagonal elements, while we use letters x_1, y_1, z_1 and subscript 1 to denote off-diagonal elements.
1. Peirce Specializations

In general a specialization of a Jordan algebra J is a homomorphism σ: J → A⁺ for some associative algebra A; this represents J (perhaps not faithfully) as a special algebra. In the important case A = End_Φ(M) of the algebra of all linear transformations on a module M, we speak of a specialization of J on M; this represents J as a Φ-submodule of linear transformations on M, with the bullet product on J represented by the Jordan product ½(T_1T_2 + T_2T_1) of operators. A Peirce decomposition provides an important example of a specialization.

Proposition 9.1 (Peirce Specialization). If J = J_2 ⊕ J_1 ⊕ J_0 is the Peirce decomposition of a Jordan algebra with respect to an idempotent e, then the V-operators provide Peirce Specializations σ_i of the diagonal Peirce subalgebras J_i (i = 2, 0) on the off-diagonal space J_1 via

σ_i(a_i) := V_{a_i}|_{J_1}   (a_i ∈ J_i).
These satisfy the Peirce Specialization Rules
σ_i(a_i²) = σ_i(a_i)²,  σ_i(U_{a_i} b_i) = σ_i(a_i)σ_i(b_i)σ_i(a_i),  σ_i(a_i)σ_i(b_i) = V_{a_i,b_i}|_{J_1},  σ_2(e) = 1_{J_1(e)}.

proof. The Peirce Multiplication Rules show that the diagonal spaces act on the off-diagonal space: J_1 is closed under brace multiplication by diagonal elements, {J_i, J_1} ⊆ J_1 for i = 2, 0, by the Peirce Brace Rule 8.5. Thus we may define linear maps σ_i of J_i into End_Φ(J_1) by restricting the action of brace multiplication. These satisfy the specialization rules since the differences

V_{a_i}² − V_{a_i²} = 2U_{a_i},  V_{a_i}V_{b_i}V_{a_i} − V_{U_{a_i}b_i} = V_{a_i}U_{a_i,b_i} + V_{b_i}U_{a_i},  V_{a_i}V_{b_i} − V_{a_i,b_i} = U_{a_i,b_i}

[using 5.7 Specialization (FIII)' and Triple Switch (FIV)] all vanish on J_1 by the Peirce Orthogonality 8.5, U_{J_i}J_1 = 0. The subalgebra J_2 is unital with unit e, and the Peirce specialization σ_2 is unital since V_e = 1_{J_1} on J_1 by the Eigenspace Laws 8.4.
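The specialization rules are easy to watch numerically in the full algebra M_3(R)⁺ with e = diag(1,1,0) — our own test bed, not the text's — where V_a x = ax + xa and the bullet square a • a is just the matrix square a²:

```python
import numpy as np

np.random.seed(3)
n = 3
e = np.diag([1.0, 1.0, 0.0]); ec = np.eye(n) - e

def V(a, x): return a @ x + x @ a                 # V_a x = {a, x}

a = e @ np.random.rand(n, n) @ e                  # a in J_2 = eJe
b = e @ np.random.rand(n, n) @ e                  # b in J_2
x = e @ np.random.rand(n, n) @ ec + ec @ np.random.rand(n, n) @ e   # x in J_1

a_sq = a @ a                                      # a o a (= a^2 in A^+)
Uab = a @ b @ a                                   # U_a b
assert np.allclose(V(a_sq, x), V(a, V(a, x)))       # sigma(a^2) = sigma(a)^2 on J_1
assert np.allclose(V(Uab, x), V(a, V(b, V(a, x))))  # sigma(U_a b) = sigma(a)sigma(b)sigma(a)
assert np.allclose(V(e, x), x)                      # sigma_2(e) = 1 on J_1
```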
Notice that it is the V-operators that provide the correct action, not the L-operators. When the element a_i clearly indicates its affiliation with J_i, we will omit the redundant second index and simply write σ(a_i). We can consider σ to be a linear map from J_2 + J_0 into End_Φ(J_1), but note that σ is no longer a homomorphism when considered on the direct sum J_2 ⊕ J_0: we will see in 9.3 that the images σ(J_2), σ(J_0) commute rather than annihilate each other. In certain important situations the Peirce specializations will be faithful (i.e. injective), so that the diagonal subalgebras J_i will be special Jordan algebras.

Lemma 9.2 (Peirce Injectivity). (1) We have a Peirce Injectivity Principle: both Peirce specializations will be injective if there exists an injective element in the off-diagonal Peirce space: if U_{y_1} is injective on J for some y_1 ∈ J_1(e), then already

σ(a_i)y_1 = 0 ⟹ a_i = 0    (i = 2, 0).
In particular, this will hold if there exists an invertible element y1 :
(2) We have a Simple Injectivity Principle: the Peirce specializations will be injective if J is simple: Ker(σ_i) ◁ J is an ideal in J, so if J is simple and e ≠ 1, 0 is proper, then the Peirce specializations σ_2, σ_0 are injective. (3) We have a Peirce Speciality Principle: in a simple Jordan algebra all proper idempotents are special, in the sense that the Peirce subalgebras J_i(e) (i = 2, 0; e ≠ 1, 0) are special.
proof. (1) If {a_i, y_1} = 0 then 2U_{y_1}(a_i) = {y_1, {y_1, a_i}} − {y_1², a_i} = −{y_1², a_i} ∈ {J_2 + J_0, a_i} ⊆ J_i by the Peirce Bullet Rules 8.5, yet it also falls in J_{2−i} by the Peirce U-Products 8.5, so 2U_{y_1}(a_i) is zero, hence a_i = 0 by our always-assumed injectivity of 2 (½ ∈ Φ) and the hypothesized injectivity of U_{y_1}. (2) Since σ_i is a homomorphism, its kernel K_i := Ker(σ_i) is an ideal of J_i, {K_i, J_i} ⊆ K_i; this ideal has {K_i, J_1} = 0 by definition, and {K_i, J_j} = 0 by the Peirce Orthogonality Rules 8.5, so {K_i, J} ⊆ K_i and K_i is an ideal in J. If J were simple but K_i ≠ 0 then K_i = J would imply J_i = J; but J = J_2 would imply e = 1, and J = J_0 would imply e = 0, contrary to our properness hypothesis. (3) The J_i are special since they have an injective specialization σ_i.
At this point it is worth stopping to note a philosophical consequence of this basic fact about Peirce specializations: we have a Peirce limit to exceptionality: if J is exceptional with unit 1, then simplicity cannot continue past 1: if J ⊆ J̃ for simple algebras with units 1 < 1̃, then J must be special, since 1̃ > 1 implies e = 1 ≠ 0, 1̃ is proper, so J ⊆ J̃_2(e) is special. This reminds us of the Hurwitz limit to nonassociativity: if C is a nonassociative composition algebra, then composition cannot continue past C. Recall that the original investigation by Jordan, von Neumann, and Wigner sought a sequence of simple exceptional finite-dimensional (hence unital) algebras J^(1) ⊂ J^(2) ⊂ ⋯ ⊂ J^(n) ⊂ ⋯ such that the "limit" might provide a simple exceptional setting for quantum mechanics. We now see why such a sequence was never found: once one of the terms becomes exceptional, the rest of the sequence comes to a grinding halt as far as growth of the unit goes. (An exceptional simple J can be imbedded in an egregiously non-simple algebra J ⊕ J′ with larger unit, and it can be imbedded in larger simple algebras with the same unit, such as scalar extensions, but it turns out these are the only unital imbeddings which are possible.) Another important fact about Peirce products is that the Peirce specializations of the diagonal subspaces on J_1 commute with each other; in terms of bullet or brace products this can be expressed as associativity of the relevant products.
Proposition 9.3 (Peirce Associativity). The Peirce specializations σ_i commute: for all elements a_i ∈ J_i we have the operator relations

σ(a_2)σ(a_0) = σ(a_0)σ(a_2) = U_{a_2,a_0}|_{J_1};

in other words, we have the elemental relations

[a_2, x_1, a_0] = 0,  (a_2 • x_1) • a_0 = a_2 • (x_1 • a_0) = ¼{a_2, x_1, a_0}.

proof. This follows from the Peirce Principle 8.13 since it involves only e, x_1, a_2, a_0, and is straightforward to verify in an associative algebra A: for x_1 = a_10 + a_01, a_2 = a_11, a_0 = a_00 we have, by associative Peirce orthogonality, {a_11, {a_00, a_10 + a_01}} = {a_11, a_00a_01 + a_10a_00} = a_11a_10a_00 + a_00a_01a_11 = {a_11, a_10 + a_01, a_00}, and dually. [We can also argue directly: {a_2, {a_0, x_1}} = {a_2, a_0, x_1} + {a_2, x_1, a_0} [using Triple Switch (FIVe)] = {a_2, x_1, a_0} by Peirce Orthogonality 8.5, and dually by interchanging 2 and 0.]
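The elemental form of Peirce Associativity can be confirmed in the same illustrative special model (hermitian 4×4 matrices split into 2×2 blocks by e = diag(I_2, 0); the block layout is our assumption, not the text's):

```python
import numpy as np

rng = np.random.default_rng(1)
Z = np.zeros((2, 2))

def herm(n):
    m = rng.standard_normal((n, n))
    return (m + m.T) / 2

def circ(x, y):    # Jordan product x . y = (xy + yx)/2
    return (x @ y + y @ x) / 2

a2 = np.block([[herm(2), Z], [Z, Z]])      # diagonal element in J2
a0 = np.block([[Z, Z], [Z, herm(2)]])      # diagonal element in J0
X = rng.standard_normal((2, 2))
x1 = np.block([[Z, X], [X.T, Z]])          # off-diagonal element in J1

triple = a2 @ x1 @ a0 + a0 @ x1 @ a2       # {a2, x1, a0} in a special algebra

# (a2 . x1) . a0 = a2 . (x1 . a0) = (1/4){a2, x1, a0}
assert np.allclose(circ(circ(a2, x1), a0), circ(a2, circ(x1, a0)))
assert np.allclose(circ(circ(a2, x1), a0), triple / 4)
```

Both sides reduce to the single block product A·X·B spread over the two off-diagonal corners, which is why the associator [a_2, x_1, a_0] vanishes.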
2. Peirce Quadratic Forms

The remaining facts we need about Peirce products all concern the components of the square of an off-diagonal element.

Definition 9.4 (Peirce Quadratic Forms). An off-diagonal element x_1 squares to a diagonal element x_1² = U_{x_1}1 = U_{x_1}e_0 + U_{x_1}e_2 ∈ J_2 ⊕ J_0. We define quadratic maps q_i: J_1 → J_i (i = 2, 0) by projecting the square of an off-diagonal element onto one of the diagonal spaces:

q_i(x_1) := E_i(x_1²),  q_i(x_1, y_1) := E_i({x_1, y_1}).

Though these maps are not usually scalar-valued, the Peirce subalgebras J_i (i = 2, 0) are so much more scalar-like than J itself that, by abuse of language, we will refer to the q_i as the Peirce quadratic forms (instead of quadratic maps). BEWARE (cf. the warning after Spin Factors Example 3.11): these maps, built out of squares, correspond to the forms q(v) of Reduced Spin Factors 3.12, and thus to the NEGATIVE of the global form Q in Jord(Q, c): if J = RedSpin(q) = Ωe_2 ⊕ W ⊕ Ωe_0 then q_i(w) = q(w)e_i, where q(w) ∈ Ω is a true scalar.

Proposition 9.5 (q-Properties). Let x_1, y_1 ∈ J_1(e) relative to an idempotent e ∈ J, and let i = 2, 0, j = 2 − i, e_2 = e, e_0 = 1 − e ∈ Ĵ. We have an Alternate Expression for the quadratic forms:

q_i(x_1) = U_{x_1}e_j.

(1) Cubes and fourth powers can be recovered from the quadratic maps by Cube Recovery:

x_1³ = {q_i(x_1), x_1},  q_i(x_1)² = U_{x_1}(q_j(x_1)).
(2) The operator U_{x_1} can be expressed in terms of q's by the U1q-Rules:

U_{x_1}(y_1) = {q_i(x_1, y_1), x_1} − {q_j(x_1), y_1},  U_{x_1}(a_i) = q_j(x_1, a_i • x_1),
{x_1, a_i, z_1} = q_j({x_1, a_i}, z_1) = q_j(x_1, {a_i, z_1}).

(3) The q's permit composition with braces by the q-Composition Laws:

q_i({a_i, x_1}) = U_{a_i}(q_i(x_1)),  q_i({a_i, x_1}, x_1) = V_{a_i}(q_i(x_1)),  q_i({a_j, x_1}) = U_{x_1}(a_j²).

proof. All except the formula for U_{x_1}(y_1) and the symmetry of the linearized product in the U1q-Rules follow directly from the Peirce Principle 8.13, since they involve only e, x_1, a_i and are straightforward to verify in an associative algebra A from the associative Peirce relations. In detail, we may assume A is unital to have complete symmetry in the
indices i, j, so we need only verify the formulas for i = 2, j = 0. For x_1 = a_10 + a_01 we have q_2(x_1) = a_10a_01, q_0(x_1) = a_01a_10, so we obtain the Alternate Expression U_{x_1}e_0 = x_1e_0x_1 = a_10a_01 = q_2(x_1); Cube Recovery x_1³ = (a_10 + a_01)³ = a_10a_01a_10 + a_01a_10a_01 = a_10a_01(a_10 + a_01) + (a_10 + a_01)a_10a_01 = q_2(x_1)x_1 + x_1q_2(x_1); fourth-power Recovery q_2(x_1)² = (a_10a_01)(a_10a_01) = (a_10 + a_01)(a_01a_10)(a_10 + a_01) = x_1q_0(x_1)x_1; the second U1q-Rule q_0(x_1, {a_11, x_1}) = q_0(a_10 + a_01, a_11a_10 + a_01a_11) = a_01(a_11a_10) + (a_01a_11)a_10 = 2U_{x_1}a_11; the first q-Composition Law q_2({a_11, x_1}) = q_2(a_11a_10 + a_01a_11) = (a_11a_10)(a_01a_11) = a_11(a_10a_01)a_11 = U_{a_11}q_2(x_1); the second by linearizing x_1 → x_1, z_1; and the third q_2({a_00, x_1}) = q_2(a_00a_01 + a_10a_00) = (a_10a_00)(a_00a_01) = a_10(a_00a_00)a_01 = U_{x_1}(a_00²).
For the first U1q-Rule, linearize x_1 → x_1, y_1 in Cube Recovery to see U_{x_1}(y_1) + {x_1, x_1, y_1} = {q_2(x_1), y_1} + {q_2(x_1, y_1), x_1}, and hence U_{x_1}(y_1) = {q_2(x_1, y_1), x_1} + {q_2(x_1) − x_1², y_1} [from Triple Shift with y = 1] = {q_2(x_1, y_1), x_1} − {q_0(x_1), y_1}. Symmetry in the third U1q-Rule follows by taking Peirce 0-components of {x_1, a_2, z_1} = {{x_1, a_2}, z_1} − {a_2, x_1, z_1}, by Triple Switch 5.7(FIV).
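The associative computations in this proof translate directly into block arithmetic: if x_1 = a_10 + a_01 is modeled by the off-diagonal part of a hermitian 4×4 matrix (our illustrative layout, with X in the role of a_10), then q_2, q_0 are the two diagonal blocks of x_1². A minimal numerical check:

```python
import numpy as np

rng = np.random.default_rng(2)
Z = np.zeros((2, 2))
X = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
x1 = np.block([[Z, X], [X.conj().T, Z]])        # off-diagonal element

x1sq = x1 @ x1                                   # = diag(XX*, X*X), a diagonal element
q2 = np.block([[X @ X.conj().T, Z], [Z, Z]])     # q2(x1) = E2(x1^2)
q0 = np.block([[Z, Z], [Z, X.conj().T @ X]])     # q0(x1) = E0(x1^2)

assert np.allclose(x1sq, q2 + q0)                # x1^2 = q2(x1) + q0(x1)
# Cube Recovery: x1^3 = {q2(x1), x1}
assert np.allclose(np.linalg.matrix_power(x1, 3), q2 @ x1 + x1 @ q2)
# Fourth-power rule: q2(x1)^2 = U_{x1} q0(x1)
assert np.allclose(q2 @ q2, x1 @ q0 @ x1)
```

Each identity reduces to an alternating word a_10a_01a_10⋯ in the two blocks, which is exactly the pattern the proof exploits.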
Exercise 9.5A* Spurn new-fangled principles and use good-old explicit calculation to establish the q-Properties. Show: (1) E_i(x_1²) = E_i(U_{x_1}(1)) = E_i(U_{x_1}(e_i) + U_{x_1}(e_j)) = U_{x_1}(e_j). (2) {q_i(x_1), x_1} = V_{x_1}U_{x_1}(e_j) = U_{x_1}V_{x_1}(e_j) = U_{x_1}(x_1). (3) q_i(x_1)² = (U_{x_1}(e_j))² = U_{x_1}U_{e_j}(x_1²) = U_{x_1}q_j(x_1). (4) {x_1, a_i, z_1} = E_j(−{a_i, x_1, z_1} + {{x_1, a_i}, z_1}) = q_j({x_1, a_i}, z_1), and then 2U_{x_1}a_i = {x_1, a_i, x_1} = q_j(x_1, {a_i, x_1}). (5) q_i(x_1, {x_1, a_i}) = V_{a_i}q_i(x_1), then 2U_{a_i}q_i(x_1) = 2q_i({a_i, x_1}). (6) q_i({a_j, x_1}) = E_i({a_j, x_1}²) = E_i(U_{a_j}x_1² + U_{x_1}a_j² + {x_1, U_{a_j}x_1}) = U_{x_1}(a_j²).
Exercise 9.5B (1) Show the linearization of the quadratic map q_i satisfies q_i(x_1, y_1) = E_i({x_1, y_1}) = {x_1, e_j, y_1} = {x_1, y_1, e_i} = {e_i, x_1, y_1}. (2) Give an alternate direct proof of U1q: linearize the Triple Shift Formula to show V_{e_i, U_{x_1}(y_1)} = V_{q_i(x_1,y_1), x_1} − V_{y_1, q_j(x_1)} as operators, then have this act on e_j. (3) Generalize the Fourth Power Rule to show (U_{x_1}a_j)² = U_{x_1}U_{a_j}q_j(x_1). (4) Give an alternate direct proof of q-Composition using U_{{a,x}} + U_{U_a(x), x} = U_aU_x + U_xU_a + V_aU_xV_a (linearizing y → a, 1 in the Fundamental Formula) and Peirce Orthogonality to show U_{{a_i,x_1}}(e_j) = U_{a_i}U_{x_1}(e_j). (5) When there exists an invertible element v_1 ∈ J_1, show the quadratic forms q_i take on all values: q_i(J_1) = q_i(v_1, {J_j, v_1}) = J_i [show J_i = U_{v_1}J_j].

Lemma 9.6 (q-Nondegeneracy). (1) In any Jordan algebra, an element z_1 ∈ J_1 belonging to both radicals Rad(q_i) := {z_1 ∈ J_1 | q_i(z_1, J_1) = 0} is trivial:

q_2(z_1, J_1) = q_0(z_1, J_1) = 0 ⟹ z_1 is trivial.

(2) If the algebra J is nondegenerate, then the Peirce quadratic forms are both nondegenerate,

Rad(q_i) = 0    (i = 2, 0; J nondegenerate).

proof. (1) follows immediately from the U1q-Rules 9.5(2): U_z(J_1) = U_z(Ĵ_i) = 0 for any z = z_1 with q_i(z, J_1) = 0 for both i = 2 and i = 0. We must work a bit harder to show (2), but some deft footwork and heavy reliance on nondegeneracy will get us through. The first easy step is to note U_z(Ĵ_j) = q_i(z, Ĵ_j • z) ⊆ q_i(z, J_1) = 0 for z ∈ Rad(q_i). The next step is to use this plus nondegeneracy to get U_z(Ĵ_i) = 0: each b_j = U_z(â_i) ∈ J_j vanishes because it is trivial in the nondegenerate J: U_{b_j}(Ĵ) = U_{b_j}(Ĵ_j) [by Peirce U-Products 8.5] = U_zU_{â_i}U_z(Ĵ_j) [by the Fundamental Formula] = U_zU_{â_i}(0) [our first step] = 0. The third step is to use the U1q-Rule to conclude U_z(J_1) = 0 from q_i(z, J_1) = 0 [our hypothesis] and q_j(z) = U_z(ê_i) = 0 [our second step].
Thus U_z kills all three parts Ĵ_j, Ĵ_i, J_1 of Ĵ, so z itself is trivial, and our fourth and final step is to use nondegeneracy of J yet again to conclude z = 0.
Exercise 9.6* Let e be an idempotent in a Jordan algebra with the property that J_2(e), J_0(e) contain no nonzero nilpotent elements. (1) Show that the quadratic forms q_2, q_0 on J_1 have the same radical Rad(q_i) = {z_1 ∈ J_1 | q_i(z_1, J_1) = 0}. (2) Show that z_1 ∈ J_1 is trivial in such a J iff z_1 ∈ Rad(q_2) = Rad(q_0). (3) Show that if z = z_2 + z_1 + z_0 is trivial in any Jordan algebra with idempotent e, so are z_2 and
z_0; conclude z_2 = z_0 = 0 when J_2(e), J_0(e) contain no nonzero nilpotent elements. (4) Show that such a J is nondegenerate iff the quadratic maps q_i are nondegenerate (in the sense that Rad(q_i) = 0).
3. Problems for Chapter 9

Question 1. Work out in detail the Peirce specialization (investigating Peirce injectivity and associativity) and the Peirce quadratic forms (including U1q-Rules, q-Composition Laws, and radicals) for the case of a hermitian matrix algebra H_2(D) for an associative algebra D with involution.

Question 2. Repeat the above question for the case of a reduced spin factor RedSpin(q).
CHAPTER 10
Peirce Consequences

1. Diagonal Consequences

We now gather a few immediate consequences of the Peirce multiplication rules which will be important for our classification theorems. For showing that nondegenerate algebras with minimum condition on inner ideals have a capacity, it will be important to know that the diagonal Peirce spaces inherit nondegeneracy and the minimum condition. Once more we fix throughout the chapter an idempotent e and its Peirce decomposition into spaces J_k = J_k(e), and refer to J_i (i = 0, 2) as a diagonal Peirce space, with opposite diagonal space J_j (j = 2 − i). Throughout this section we use the crucial Diagonal Orthogonality property that the U-operator of a diagonal element kills all elements from the other Peirce spaces:
a_i ∈ J_i ⟹ U_{a_i}Ĵ_j = U_{a_i}Ĵ_1 = 0,  U_{a_i}Ĵ = U_{a_i}Ĵ_i,

directly from the Peirce Orthogonality Rules 8.5 in the unital hull.

Lemma 10.1 (Diagonal Inheritance). We have the following local-global properties of the diagonal Peirce spaces J_i (i = 0, 2):

(1) An element is trivial locally iff it is trivial globally:
z_i ∈ J_i is trivial in J_i iff it is trivial in J; in particular, if J is nondegenerate then so is J_i.
(2) A submodule is inner locally iff it is inner globally: a Φ-submodule B_i ⊆ J_i is an inner ideal of J_i iff it is an inner ideal of J.
(3) An inner ideal is minimal locally iff it is minimal globally: an inner ideal B_i ⊆ J_i is minimal in J_i iff it is minimal in J; in particular, if J has minimum condition on inner ideals, so does J_i.
Thus the diagonal Peirce spaces inherit nondegeneracy and minimum condition on inner ideals from J.
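The Diagonal Orthogonality property driving this lemma is easy to confirm in a special model; the 2×2-block layout of hermitian 4×4 matrices below is our illustrative choice (e = diag(I_2, 0)), not the text's:

```python
import numpy as np

rng = np.random.default_rng(3)
Z = np.zeros((2, 2))

def herm(n):
    m = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (m + m.conj().T) / 2

ai = np.block([[herm(2), Z], [Z, Z]])      # diagonal element of J2
a0 = np.block([[Z, Z], [Z, herm(2)]])      # element of the opposite diagonal space J0
X = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
x1 = np.block([[Z, X], [X.conj().T, Z]])   # off-diagonal element of J1
y = herm(4)                                # arbitrary element of J

# U_{ai} kills the other Peirce spaces ...
assert np.allclose(ai @ a0 @ ai, 0)
assert np.allclose(ai @ x1 @ ai, 0)
# ... so U_{ai} J = U_{ai} J2: only the J2-component of y survives
y2 = y.copy(); y2[:, 2:] = 0; y2[2:, :] = 0
assert np.allclose(ai @ y @ ai, ai @ y2 @ ai)
```

Here U_a x = axa, and the block pattern [[A,0],[0,0]]·x·[[A,0],[0,0]] visibly retains only the top-left block of x, which is the whole content of Diagonal Orthogonality.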
proof. (1), (2) follow immediately from Diagonal Orthogonality: U_{z_i}(Ĵ_i) = 0 ⟺ U_{z_i}(Ĵ) = 0, and U_{B_i}(Ĵ_i) ⊆ B_i ⟺ U_{B_i}(Ĵ) ⊆ B_i. Then the lattice of inner ideals of B_i is just that part of the lattice of inner ideals of J falling in B_i, so (3) follows.

It will also be important to know that when we get down to as small a Peirce 0-space as possible (J_0 = 0), we must have the biggest idempotent possible (e = 1):

Lemma 10.2 (Idempotent Unit). If e is an idempotent in a nondegenerate Jordan algebra with Peirce space J_0 = 0, then e = 1 is the unit of J.

proof. We will prove that the Peirce decomposition 8.2(1) reduces to J = J_2, where e is the unit by the Eigenspace Laws 8.4. By hypothesis J_0 = 0, so we only need to make J_1 vanish. This we accomplish by showing that every z_1 ∈ J_1 is weakly trivial, U_{z_1}(J) = 0, and therefore (in view of the Triviality Proposition 5.10(4)) vanishes by nondegeneracy. When J_0 = 0 we trivially have U_{z_1}(J_0) = 0, and U_{z_1}(J_2) ⊆ J_0 = 0 [by Peirce U-Products 8.5], so all that remains is to prove U_{z_1}(J_1) = 0 too. By the U1q-Rules 9.5(2) it suffices to show q_2(J_1) = q_0(J_1) = 0; the second vanishes by hypothesis, and for the first¹ we have z_2 := q_2(z_1) = z_1² weakly trivial, since U_{z_2}(J) = U_{z_2}(J_2) [by Diagonal Orthogonality] = U_{z_1}²(J_2) [by the Fundamental Formula] ⊆ U_{z_1}(J_0) [by Peirce U-Products 8.5] = 0, so by nondegeneracy all z_2 = q_2(z_1) vanish.²

¹Note we can't just quote the Alternate Expression 9.5 here because e_0 is alive and well in Ĵ, even if his relatives J_0 are all dead in J.
²Of course, once all z_1² = 0 we can avoid the q-Properties and see directly that all z_1 are weakly trivial: all z_1 • y_1 = 0 by linearization, so by definition of the U-operator all U_{z_1}(y_1) = 2z_1 • (z_1 • y_1) − z_1² • y_1 = 0.

In our strong coordinatization theorems a crucial technical tool is the fact that "both diagonal Peirce spaces are created equal", thanks to symmetries which interchange the two diagonal Peirce spaces. These symmetries arise as involutions determined by strong connection. We say e and e′ are connected if there is an invertible element v_1 in the off-diagonal Peirce space J_1 (v_1² = v_2 + v_0 for v_i invertible in J_i), and are strongly connected if there is an off-diagonal involution v_1 (v_1² = 1).

Proposition 10.3 (Connection Involution). (1) Any off-diagonal involution v_1 ∈ J_1, v_1² = 1, determines a connection involution x ↦ x̄ := U_{v_1}(x) on J which fixes v_1 and interchanges e and its complement e′:

ē = e′,  ē′ = e,  v̄_1 = v_1.
(2) The connection action on the Peirce spaces can be expressed in terms of the trace functions t_i(x_1) := q_i(v_1, x_1) (i = 2, 0; j = 2 − i) determined by the involution v_1:

ā_2 = t_0(a_2 • v_1) on J_2,  ā_0 = t_2(a_0 • v_1) on J_0,  x̄_1 = {t_i(x_1), v_1} − x_1 on J_1,  q_i(x̄_1) = U_{v_1}(q_j(x_1)).

(3) The off-diagonal connection fixed points are just the diagonal multiples of v_1: the fixed set of x_1 ↦ x̄_1 on J_1 is {J_2, v_1} = {J_0, v_1}:

U_{v_1}({a_i, v_1}) = {ā_i, v_1} = {a_i, v_1}.

proof. All of these except the action on J_1 follow from the Peirce Principle by a straightforward calculation in special algebras (where t_j(a_ii • v_1) = a_ji a_ii a_ij if v_1 = a_ij + a_ji, a_i = a_ii). We can also calculate them directly, as follows. For convenience of notation set v = v_1, e_2 = e, e_0 = 1 − e = e′. For (1), U_v(e_2) = E_0(v²) [by the Alternate Expression 9.5] = e_0 by involutority, and U_v v = v³ = v • v² = v • 1 = v. For the action (2) on the individual spaces: on J_1 we have by the U1q-Rules 9.5(2) that U_v(x_1) = {q_i(v, x_1), v} − {q_j(v), x_1} = {t_i(x_1), v} − {e_j, x_1} (by definition of t_i and involutority) = {t_i(x_1), v} − x_1; the actions on J_i follow from the U1q-Rules by definition of t_i; and the action on the q's follows from q_i(x̄) = U_{x̄}(e_j) [by the Alternate Expression 9.5] = U_{x̄}(ē_i) = U_{v_1}(U_x(e_i)) [bar is a homomorphism by the Involution Lemma 6.9, using (1)] = U_{v_1}(q_j(x)). (3) If x_1 is fixed then 2x_1 = x_1 + x̄_1 = {t_i(x_1), v} [by (2)] ∈ {J_i, v}; conversely {J_i, v} is fixed since U_v({a_i, v}) = {ā_i, v̄} [bar is still a homomorphism] = {ā_i, v} [by (1)] = V_vU_v(a_i) = U_{v²,v}(a_i) [by the Commuting Formula 5.7(FII)] = V_v(a_i) = {a_i, v}.
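In the special block model (our illustrative choice) the simplest involutory off-diagonal element is v = [[0, I], [I, 0]], and the connection involution x ↦ vxv can be checked in a few lines:

```python
import numpy as np

I2, Z = np.eye(2), np.zeros((2, 2))
e2 = np.block([[I2, Z], [Z, Z]])
e0 = np.block([[Z, Z], [Z, I2]])
v = np.block([[Z, I2], [I2, Z]])           # off-diagonal involution: v^2 = 1

def bar(x):                                 # connection involution x -> U_v x = vxv
    return v @ x @ v

assert np.allclose(v @ v, np.eye(4))
assert np.allclose(bar(e2), e0) and np.allclose(bar(e0), e2)
assert np.allclose(bar(v), v)

# off-diagonal fixed points are exactly the diagonal multiples {J2, v}:
A = np.array([[2.0, 1.0], [1.0, 3.0]])
a2 = np.block([[A, Z], [Z, Z]])
fixed = a2 @ v + v @ a2                     # {a2, v}
assert np.allclose(bar(fixed), fixed)
```

Conjugation by v swaps the two diagonal blocks (hence ē = e′) and transposes the off-diagonal blocks, whose symmetric part is exactly {J_2, v}.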
2. Diagonal Isotopes

We will reduce our non-strong coordinatization theorems to our strong ones by passing to a diagonal isotope where the connection becomes strong. First we have a general result.

Lemma 10.4 (Diagonal Isotope). Let J be a unital Jordan algebra with Peirce decomposition J = J_2 ⊕ J_1 ⊕ J_0 relative to an idempotent e ∈ J. If u = u_2 + u_0 is a diagonal element with u_i invertible in J_i, then the diagonal isotope J^(u) has unit 1^(u) = e_2^(u) + e_0^(u) for idempotents e_2^(u) := u_2⁻¹, e_0^(u) := u_0⁻¹ (inverses taken in J_i), and the corresponding Peirce decompositions in J^(u) and J coincide:

J_k^(u)(e_2^(u)) = J_k(e)    (k = 2, 1, 0).
The new quadratic forms and Peirce specializations on the off-diagonal space J_1 are given by

q_i^(u)(x_1) = q_i(x_1, u_j • x_1),  σ_i^(u)(a_i) = σ_i(a_i)σ_i(u_i).

proof. In J̃ := J^(u) we denote the unit, square, and U-operator by 1̃, x^(2,u), Ũ. By the Jordan Isotope Proposition 7.3(2) and Invertible Products 6.7(3) we have 1̃ := 1^(u) = u⁻¹ = u_2⁻¹ + u_0⁻¹ [remember that these denote the inverses in the subalgebras J_i, not in J itself!], where ẽ := e_2^(u) := u_2⁻¹ satisfies ẽ^(2,u) = U_{u_2⁻¹}(u) = U_{u_2⁻¹}(u_2) = u_2⁻¹ = ẽ, hence the complement ẽ′ := 1̃ − ẽ = u_0⁻¹ = e_0^(u) is an idempotent orthogonal to ẽ in J̃. The Peirce decomposition in J̃ has J̃_2 = Ũ_ẽ(J̃) ⊆ U_{J_2}(J) ⊆ J_2 by Diagonal Orthogonality, similarly J̃_0 = Ũ_{ẽ′}(J̃) ⊆ J_0 and J̃_1 = Ũ_{ẽ,ẽ′}(J̃) ⊆ U_{J_2,J_0}(J) ⊆ J_1, so one decomposition sits inside the other, J̃ = ⊕ J̃_i(ẽ) ⊆ ⊕ J_i(e) = J, which implies they must coincide: J̃_i(ẽ) = J_i(e) for each i. The Peirce Quadratic Forms 9.4 take the indicated form since x^(2,u) = U_x(u) [by Jordan Isotope 7.3(2)] = U_x(u_2) + U_x(u_0) = q_0(x, u_2 • x) + q_2(x, u_0 • x) [by the U1q-Rules 9.5(2)]. The Peirce specializations have Ṽ_{a_i} = V_{a_i,u_i} on J_1 [by Jordan Isotope 7.3(2)] = V_{a_i}V_{u_i} [by the Peirce Specialization Rules 9.1].
The isotopes we want are normalized so that e_2^(u) = e, e_0^(u) = q_0(v_1) for an invertible element v_1. Amazingly, this turns v into an involution (talk about frogs and princes!).

Proposition 10.5 (Creating Involutions). (1) Any invertible off-diagonal element can be turned into an involution by shifting to a suitable diagonal isotope with the same Peirce spaces: if v_1 ∈ J_1(e) is invertible in J with v_1² = v_2 + v_0 (v_i = q_i(v_1)), and we set u := u_2 + u_0 for u_2 := e, u_0 := v_0⁻¹, then v_1 ∈ J_1 becomes involutory in J^(u),

v_1^(2,u) = 1^(u) = e_2^(u) + e_0^(u)    (e_2^(u) = e, e_0^(u) = v_0, J_k^(u) = J_k),

and the corresponding shifted connection involution x ↦ x̄^(u) := Ũ_{v_1}(x) in the isotope J^(u) is given by

ā_2^(u) = U_{v_1}(a_2),  ā_0^(u) = U_{v_1}⁻¹(a_0),  x̄_1^(u) = U_{v_1}{v_0⁻¹, x_1}.

(2) The shifted Peirce specializations in the isotope take the form

σ_2^(u)(a_2) = σ_2(a_2),  σ_0^(u)(a_0) = σ_0(a_0)σ_0(v_0)⁻¹,  σ_2^(u)(U_{v_1}⁻¹(v_0^k)) = σ_2(v_2^(k−1)) = σ_2(v_2)^(k−1).
(3) The shifted Peirce quadratic forms take the form

q_0^(u)(x_1) = q_0(x_1),  q_2^(u)(x_1) = q_2(x_1, v_0⁻¹ • x_1).

proof. From v² = v_2 + v_0 we get v^(2k) = v_2^k + v_0^k for all k by orthogonality of J_2, J_0 [even for negative powers, by Invertible Products 6.7(3)]; in particular we have the alternate description u_0 = v_0⁻¹ = E_0(v⁻²). Since U_v switches J_i, J_j we have a flipping rule

U_v^(±1)(v_i^k) = E_j(U_v^(±1) v^(2k)) = E_j(v^(2(k±1))) = v_j^(k±1).

(1) When u_2 = e, u_0 = v_0⁻¹ the diagonal isotope J^(u) has 1^(u) = e_2^(u) + e_0^(u) for e_2^(u) = e, e_0^(u) = u_0⁻¹ = v_0, with J_k^(u) = J_k for k = 2, 1, 0 by Diagonal Isotope 10.4. The emperor's new square is v^(2,u) = U_v(u) [by Jordan Isotope 7.3(2)] = U_v(e) + U_v(v_0⁻¹) = v_0 + e [by the Alternate Expression 9.5 and flipping for i = 0, k = −1] = u_0⁻¹ + e = (u_0 + e)⁻¹ [by Invertible Products 6.7(3)] = u⁻¹ = 1^(u) [by Jordan Isotope 7.3(2) again]. Once v becomes involutory it is granted an involution x̄ := Ũ_v(x) = U_vU_u(x). On J_2 we have ā_2 = U_vU_u(a_2) = U_vU_{u_2}(a_2) = U_vU_e(a_2) = U_v(a_2). On J_0 we have ā_0 = U_vU_u(a_0) = U_vU_{u_0}(a_0) = U_vU_{E_0(v⁻²)}(a_0) (as we noted above) = U_vU_{v⁻²}(a_0) [since E_2(v⁻²) can be ignored on J_0 by Peirce Orthogonality 8.5] = U_vU_v⁻²(a_0) [by the Fundamental and U-Inverse Formulas 6.2(2)] = U_v⁻¹(a_0). Finally, on J_1 we have x̄_1 = U_vU_u(x_1) = U_vU_{e,u_0}(x_1) = U_v{v_0⁻¹, x_1}. This establishes (1).

In (2) the first part follows immediately from Diagonal Isotope 10.4: σ_2^(u)(a_2) = σ_2(a_2)σ_2(e_2) = σ_2(a_2) [by unitality in Peirce Specialization 9.1] and σ_0^(u)(a_0) = σ_0(a_0)σ_0(v_0⁻¹) = σ_0(a_0)σ_0(v_0)⁻¹ [since σ_0 is a homomorphism by Peirce Specialization 9.1]. The second part follows from flipping (i = 0): σ_2^(u)(U_v⁻¹(v_0^k)) = σ_2(v_2^(k−1)) [since σ_2^(u) = σ_2] = σ_2(v_2)^(k−1) [since σ_2 is also a homomorphism].

(3) also follows immediately from Diagonal Isotope 10.4: q_0^(u)(x_1) = q_0(x_1, e_2 • x_1) = ½ q_0(x_1, x_1) [by the Eigenspace Law 8.4] = q_0(x_1), and q_2^(u)(x_1) = q_2(x_1, v_0⁻¹ • x_1).
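The frog-to-prince step can be watched concretely. In our block model, an invertible off-diagonal v = [[0, X], [X^T, 0]] has v^(2,u) = U_v(u) = vuv, and choosing u = e + v_0⁻¹ makes this equal to the isotope unit u⁻¹ (the block layout and choice of X are our illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
Z = np.zeros((2, 2))
X = rng.standard_normal((2, 2))            # almost surely an invertible block
v = np.block([[Z, X], [X.T, Z]])           # invertible off-diagonal element
v0 = X.T @ X                               # q0(v) = E0(v^2)

# diagonal isotope element u = e + v0^{-1}
u = np.block([[np.eye(2), Z], [Z, np.linalg.inv(v0)]])

# In J^(u) the square of v is v^(2,u) = U_v(u) = vuv, and the unit is u^{-1};
# so v becomes involutory in the isotope:
assert np.allclose(v @ u @ v, np.linalg.inv(u))
```

Block by block, vuv = diag(X(X^T X)⁻¹X^T, X^T X) = diag(I, v_0) = u⁻¹, which is exactly the "emperor's new square" computation in the proof.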
This result will be a life-saver when we come to our coordinatization theorems: once the connecting element v becomes involutory, the bar involution can be used to simplify many calculations.
3. Problems for Chapter 10

Question 1. Work out in detail the connection involution (including a condition for x ∈ J_1 to be involutory, the action of the connection involution, connection fixed points, shifted quadratic forms and specializations, as in Proposition 10.3) relative to the idempotent e = 1[11] in a hermitian matrix algebra H_2(D) for an associative algebra D with involution.

Question 2. Repeat the above question for the case of a reduced spin factor RedSpin(q).
CHAPTER 11
Spin Coordinatization

In this chapter we give a general coordinatization theorem for reduced spin Jordan algebras, without assuming any simplicity. This is an example of a whole family of coordinatization theorems, asserting that if an algebra has a family of elements of a certain kind then it is a certain kind of algebra. The grand-daddy of all coordinatization theorems is the Wedderburn Coordinatization Theorem: if an associative algebra A has a supplementary family of n×n associative matrix units {e_ij} (1 ≤ i, j ≤ n), then it is a matrix algebra, A ≅ M_n(D) with coordinate ring D = e_11Ae_11, under an isomorphism sending e_ij → E_ij. We will establish Jordan versions of this for n×n hermitian matrices (n = 2 in Chapter 12, n ≥ 3 in Chapter 17).

Throughout the chapter we deal with a Peirce decomposition J = J_2 ⊕ J_1 ⊕ J_0 of a unital Jordan algebra J with respect to an idempotent e. Recall from Connection Involution 10.3 that e and 1 − e are connected if there is an invertible element in J_1, and strongly connected if there is an involutory element v_1² = 1 in J_1 (in which case we denote the connection involution simply by a bar, x̄ := U_{v_1}x). We will consistently use the notation e_2 := e, e_0 := 1 − e, so our Peirce decomposition is always with respect to e_2. Elements of the diagonal Peirce spaces J_i (i = 2, 0) will be denoted by letters a_i, b_i, while off-diagonal elements will be denoted by letters x_1, y_1, and we have the Peirce Specializations 9.1 σ_i(a_i) = V_{a_i}|_{J_1} of the diagonal spaces on the off-diagonal space, and the Peirce Quadratic Forms 9.4 q_i(x_1) = E_i(x_1²) ∈ J_i.
1. Spin Frames

For both the spin and hermitian algebras of capacity 2, a family of 2×2 hermitian matrix units is just a 2-frame. Here frame suggests a frame for weaving, for hanging a pattern around, and indeed coordinatization will consist in draping coordinates around a frame.

Definition 11.1 (2-Frame). A 2-frame {e_2, v_1, e_0} for a unital Jordan algebra J consists of two supplementary orthogonal idempotents e_2, e_0 connected by an invertible element v_1 ∈ J_1. The frame is strong if v_1 strongly connects the idempotents, v_1² = e_2 + e_0 = 1; in this case the
connection involution x ↦ x̄ := U_{v_1}(x) is, by the Connection Involution Proposition 10.3, an automorphism interchanging e_2 and e_0.

We begin by investigating the Peirce relation that sets spin factors off from all others.

Definition 11.2 (Spin Peirce Relation). A Jordan algebra J satisfies the Spin Peirce Relation with respect to a pair of supplementary idempotents e_2, e_0 if for all x_1, y_1 in J_1

q_2(x_1) • y_1 = q_0(x_1) • y_1.

A Spin frame {e_2, v_1, e_0} is a 2-frame where J satisfies the Spin Peirce Relation with respect to e_2, e_0.

Exercise 11.2 Show that the Spin Peirce Relation is equivalent to either of the following identities for all a_i ∈ J_i, x, y ∈ J_1 (for i = 2 or 0): (1) {U_x(a_i), y} = {q_i(x), a_i, y}, (2) {U_x(a_i), y} = {a_i, q_i(x), y}.
The idea behind the Spin Peirce Relation is that the Peirce quadratic forms q_2, q_0 agree in their action on J_1, and represent the action of scalars (from some larger ring of scalars Ω) determined by an Ω-valued quadratic form q on J_1. The primary examples of Spin frames come from reduced spin factors; hermitian algebras produce Spin frames only if they are really reduced spin factors in disguise.

Example 11.3 (Spin Frame). Every reduced spin factor RedSpin(q) = Ωe_2 ⊕ W ⊕ Ωe_0, built as in the Reduced Spin Example 3.12 from an Ω-module W with Ω-quadratic form q over a scalar extension Ω of Φ via

(α, w, β)² = (α² + q(w), (α + β)w, β² + q(w)),

satisfies the Spin Peirce Relation with respect to the supplementary idempotents e_2 = (1, 0, 0), e_0 = (0, 0, 1), since q_i(w_1) = q(w)e_i ∈ Ωe_i is essentially a scalar. For any invertible v_1 = (0, v, 0) (i.e. q(v) invertible in Ω), we have a standard Spin frame {e_2, v_1, e_0}; such a frame is strong iff q(v) = 1, in which case the quadratic form is unit-valued and the connection involution is

(α, w, β)¯ = (β, q(w, v)v − w, α).

proof. We saw in the Reduced Spin Decomposition 8.10 that the Peirce decomposition relative to e = e_2 is J_i = Ωe_i (i = 2, 0), J_1 = W. The explicit multiplication formula shows q_i(w_1) = E_i(w_1²) = q(w)e_i, so q_i(w_1) • v_1 = ½ q(w)v_1 is truly a scalar action, so {e_2, v_1, e_0} is a Spin frame, which is strong (v_1² = 1) iff q(v) = 1, in which case (α, w, β)¯ = U_{v_1}(αe_2 + w_1 + βe_0) = αe_0 + {q_2(v_1, w_1), v_1} − {q_0(v_1), w_1} + βe_2 [by
the U1q-Rule 9.5(2)] = αe_0 + q(v, w)v_1 − q(v)w_1 + βe_2 = (β, q(w, v)v − w, α) [since q(v) = 1 by strongness].

Example 11.4 (Jordan Matrix Frame). H_2(D, −) for an associative ∗-algebra (D, −) as in 3.7 satisfies the Spin Peirce Relation with respect to e_2 = 1[11], e_0 = 1[22] iff the involution is central, in which case J may be considered as a reduced spin factor over its ∗-center Ω, with strong Spin frame {1[11], 1[12], 1[22]}.

proof. By the Hermitian Brace Products 3.7(2) we have q_2(x[12]) = xx̄[11], q_0(x[12]) = x̄x[22], and

2(q_2(x[12]) − q_0(x[12])) • y[12] = ((xx̄)y − y(x̄x))[12];

therefore the Spin Peirce Relation 11.2 holds iff all norms xx̄ = x̄x are central, hence (linearizing x → x, 1) all traces x + x̄ are central, therefore every symmetric element a = ā = ½(a + ā) lies in the ∗-center Ω. Then

H_2(D) = H(D)[11] ⊕ D[12] ⊕ H(D)[22] = Ωe_2 ⊕ W ⊕ Ωe_0 = RedSpin(q)    for q(w) = ww̄.

Example 11.5 (Twisted Matrix Frame). J = H_2(D, Γ) for Γ = diag(γ_1, γ_2) = diag(1, γ) [normalized so that γ_1 = 1] satisfies the Spin Peirce Relation with respect to e_2 = 1[11]_Γ = E_11, e_0 = γ⁻¹[22]_Γ = E_22 iff it coincides with the untwisted case H_2(D, −): the involution is again central, and J is a reduced spin factor over its ∗-center.

proof. By the Twisted Matrix Brace Products 7.8(2)(ii) we have q_2(x[12]_Γ) = xγx̄[11]_Γ, q_0(x[12]_Γ) = γx̄x[22]_Γ, so by 7.8(2)(iv)

2(q_2(x[12]_Γ) − q_0(x[12]_Γ)) • y[12]_Γ = ((xγx̄)y − y(γx̄x))[12]_Γ,

and the Spin Peirce Relation 11.2 holds iff xγx̄ = γx̄x is central for all x; here x = 1 guarantees γ is invertible in the center, so the Spin Relation again reduces to centrality of all norms xx̄ = x̄x, hence traces x + x̄, hence the involution is central, in which case Γ is central, H_2(D, Γ) ≅ H_2(D, −), and we are back to a reduced spin factor as in 11.3.
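The reduced-spin model in Example 11.3 is easy to implement directly; the sketch below takes q(w) = w·w on R³ (our illustrative choice of module and form) and checks both the Spin Peirce Relation and the closed form of the connection involution:

```python
import numpy as np

rng = np.random.default_rng(5)

# Elements of RedSpin(q) are triples (alpha, w, beta); here q(w) = w.w on R^3,
# with bilinearization q(w, u) = 2 w.u.
def jprod(x, y):
    # Jordan product obtained by linearizing the defining square
    # (a, w, b)^2 = (a^2 + q(w), (a + b)w, b^2 + q(w)).
    (a, w, b), (c, u, d) = x, y
    return (a * c + w @ u, 0.5 * ((c + d) * w + (a + b) * u), b * d + w @ u)

def comb(s, x, t, y):                       # s*x + t*y componentwise
    return (s * x[0] + t * y[0], s * x[1] + t * y[1], s * x[2] + t * y[2])

def U(x, y):                                # U_x y = 2 x.(x.y) - x^2 . y
    return comb(2.0, jprod(x, jprod(x, y)), -1.0, jprod(jprod(x, x), y))

w, u = rng.standard_normal(3), rng.standard_normal(3)
x1, y1 = (0.0, w, 0.0), (0.0, u, 0.0)

# Spin Peirce Relation: x1^2 = q(w)e2 + q(w)e0, so q2(x1).y1 = q0(x1).y1
qw = w @ w
assert np.isclose(jprod(x1, x1)[0], qw) and np.isclose(jprod(x1, x1)[2], qw)
lhs = jprod((qw, np.zeros(3), 0.0), y1)
rhs = jprod((0.0, np.zeros(3), qw), y1)
assert all(np.allclose(l, r) for l, r in zip(lhs, rhs))

# Strong frame: v1 = (0, v, 0) with q(v) = 1; connection involution
# U_{v1}(alpha, w, beta) = (beta, q(w,v)v - w, alpha)
v = np.array([1.0, 0.0, 0.0])
xbar = U((0.0, v, 0.0), (0.7, w, -1.3))
assert np.isclose(xbar[0], -1.3) and np.isclose(xbar[2], 0.7)
assert np.allclose(xbar[1], 2 * (w @ v) * v - w)
```

Both diagonal components of x_1² carry the same scalar q(w), which is the whole content of the Spin Peirce Relation here.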
Exercise 11.5 Go ahead, be a masochist: compute the Spin Peirce Relation directly in twisted matrix algebras H_2(D, Γ) of 7.8 for general (un-normalized) Γ = diag{γ_1, γ_2}. (1) Show q_2(x[12]) = xγ_2x̄γ_1⁻¹[11], q_0(x[12]) = γ_2x̄γ_1⁻¹x[22]; {q_2(x[12]) − q_0(x[12]), y[12]} = ((xγ_2x̄γ_1⁻¹)y − y(γ_2x̄γ_1⁻¹x))[12] vanishes iff xγ_2x̄γ_1⁻¹ = γ_2x̄γ_1⁻¹x is central for all x. (2) Linearize x → x, 1 in this to show γ_2 = λγ_1 for some central λ, and the condition becomes that all xγ_1x̄γ_1⁻¹ = γ_1x̄γ_1⁻¹x be central. (3) Conclude that − is an isotope (x^∗ = γ_1x̄γ_1⁻¹) of a central involution ∗.
Now we gather up in three bite-sized lemmas the technical tools we need to carry out our strong spin coordinatization.
Lemma 11.6 (Diagonal Commutativity). (1) If J satisfies the Spin Peirce Relation with respect to a pair of supplementary orthogonal idempotents e_2, e_0 then

σ_i(q_i(x_1, a_i • x_1)) = σ_i(a_i)σ_i(q_i(x_1)) = σ_i(q_i(x_1))σ_i(a_i)

for all a_i ∈ J_i, x_1 ∈ J_1. (2) When {e_2, v_1, e_0} is a Spin frame then we have diagonal commutativity:

Σ := σ_2(J_2) = σ_0(J_0)

is a commutative associative subalgebra of End_Φ(J_1) which is isomorphic to J_i via σ_i.
proof. (1) The Spin Peirce Relation 11.2 just means V_{q_2(x_1)} = V_{q_0(x_1)} as operators on J_1, which therefore commute with all V_{a_i} on J_1 by Peirce Associativity 9.3, so V_{a_i}V_{q_i(x)} = V_{q_i(x)}V_{a_i} = V_{a_i • q_i(x)} [by linearizing the Peirce Specialization Rules 9.1] = V_{q_i(x, a_i • x)} [by the q-Composition Laws 9.5(3)].

(2) Peirce Associativity 9.3 shows that Σ_i := σ_i(J_i) commutes with Σ_j := σ_j(J_j) ⊇ σ_j(q_j(J_1)) = σ_i(q_i(J_1)) [by the Spin Relation] ⊇ σ_i(q_i(v, {J_j, v})) = σ_i(U_v(J_j)) [by the q-Composition Laws 9.5(3)] = σ_i(J_i) [by invertibility of v] = Σ_i, so Σ_2 = Σ_0 =: Σ is a commuting module of Φ-linear transformations; it is a unital Jordan subalgebra closed under squares by the Peirce Specialization Rules 9.1, hence, since ½ lies in Φ, it is an associative Φ-subalgebra of End_Φ(J_1): σ(a_i)σ(b_i) = σ(a_i) • σ(b_i) = σ(a_i • b_i). Moreover, the surjection σ_i: J_i → Σ_i is also injective (hence J_i ≅ Σ_i⁺ = Σ⁺ as algebras) by the Peirce Injectivity Criterion 9.2 and the invertibility of v. This establishes (2).

Lemma 11.7 (Diagonal Spin Isotopes). If J satisfies the Spin Peirce Relation with respect to e_2, e_0 then so does any diagonal isotope J^(u) (u = u_2 + u_0) with respect to e_2^(u) := u_2⁻¹, e_0^(u) := u_0⁻¹.

proof. In J^(u) we know by the Diagonal Isotope Lemma 10.4 that e_2^(u) is an idempotent with the same Peirce decomposition as e_2, and on J_1 we have V^(u)_{q_j^(u)(x)} = V_{q_j(x, u_i • x)}V_{u_j} [by Diagonal Isotope 10.4] = V_{q_i(x, u_i • x)}V_{u_j} [by the Spin Peirce Relation 11.2] = V_{q_i(x)}V_{u_i}V_{u_j} [by Diagonal Commutativity 11.6] = V_{q_i(x)}U_{u_i,u_j} [by Peirce Associativity 9.3], which by the Spin Relation is symmetric in the indices i and j, so the Spin Peirce Relation holds for e_2^(u), e_0^(u).
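Diagonal commutativity is transparent in the smallest hermitian example: in H_2(C) with the (central) conjugation involution, both diagonal spaces act on J_1 = C[12] by the same scalar multiplications. A minimal check (the numeric values are our illustrative choices):

```python
import numpy as np

# H2(C) with conjugation: J2 = R e2, J0 = R e0, J1 = C[12].
def brace(a, x):
    return a @ x + x @ a

a2 = np.diag([1.7, 0.0]).astype(complex)    # alpha e2 in J2
a0 = np.diag([0.0, 1.7]).astype(complex)    # alpha e0 in J0
z = np.array([[0, 2 + 3j], [2 - 3j, 0]])    # off-diagonal element of J1

# V_{alpha e2} and V_{alpha e0} are the same scalar operator on J1,
# illustrating Sigma = sigma_2(J2) = sigma_0(J0):
assert np.allclose(brace(a2, z), 1.7 * z)
assert np.allclose(brace(a2, z), brace(a0, z))
```

Since every σ(a_i) here is multiplication by a real scalar, Σ is visibly a commutative associative subalgebra of End(J_1), as Lemma 11.6(2) asserts.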
Lemma 11.8 (Spin Bar Relation). (1) A strong 2-frame {e_2, v_1, e_0} satisfies the Spin Peirce Relation iff the connection involution satisfies
the Spin Bar Relation:
ā_2 • y_1 = a_2 • y_1.
(2) The Spin Bar Relation implies that the Peirce bilinear form q_i is J_i-linear, that the involution on J_1 is isometric, and that the involution exchanges q_2, q_0:
a_i • q_i(x_1, y_1) = q_i({a_i, x_1}, y_1),   q_i(x̄_1) = q_i(x_1),   \overline{q_i(x_1)} = q_j(x_1).
proof. By the Connection Involution Lemma 10.3(1) we know that bar is an involution of the Jordan algebra which interchanges e_2, e_0, hence their diagonal Peirce spaces. The Spin Peirce Relation 11.2 implies the Spin Bar Relation, since ā_2 • y = q_0(a_2 • v, v) • y [by the Connection Action 10.3(2)] = q_2(a_2 • v, v) • y [by the Spin Relation] = (a_2 • q_2(v)) • y [by the q-Composition Laws 9.5(3)] = a_2 • y [by strongness, q_2(v) = e_2]. To see that the weaker Spin Bar implies Spin Peirce will require the three intermediate steps of (2), which are of independent interest (in other words, we'll need them in the next theorem!). For the linearity step (which is a strengthening of a q-Composition Law 9.5(3)) we compute a_i • q_i(x, y) − q_i({a_i, x}, y) = q_i(a_i • x, y) + q_i(a_i • y, x) − 2q_i(a_i • x, y) [by linearized q-Composition] = q_i(a_i • y, x) − q_i(a_i • x, y) = q_i(ā_i • y, x) − q_i(a_i • x, y) [by Spin Bar] = 0 [by the U−1q Rules 9.5(2)]. For the isometry step we compute q_i(x̄) = q_i({t_i(x), v} − x) [by Connection Action] = U_{t_i(x)}q_i(v) − q_i({t_i(x), v}, x) + q_i(x) [by q-Composition] = t_i(x)² − t_i(x) • q_i(v, x) + q_i(x) [by q_i(v) = e_i and the first linearity step] = q_i(x) [by definition of t_i]. The second and third steps are equivalent by Connection Action. From this it is immediate that Spin Bar implies Spin Peirce: the bar relation implies q_2(x) • y = \overline{q_2(x)} • y [by Spin Bar] = q_0(x) • y [by the third step].
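The three intermediate identities of part (2) are worth displaying; in LaTeX form (with bars denoting the connection involution and j = 2 − i) they read:

```latex
\begin{align*}
a_i \bullet q_i(x_1, y_1) &= q_i(\{a_i, x_1\}, y_1) && \text{($J_i$-linearity of $q_i$)}\\
q_i(\bar{x}_1) &= q_i(x_1) && \text{(the involution is isometric)}\\
\overline{q_i(x_1)} &= q_j(x_1) && \text{(the involution exchanges $q_2, q_0$)}
\end{align*}
```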
2. Strong Spin Coordinatization
Now we are ready to establish the main result of this section: Jordan algebras with a strong Spin frame can be coordinatized as strongly reduced spin factors.
Theorem 11.9 (Strong Spin Coordinatization). A Jordan algebra has a strong Spin frame iff it is isomorphic to a reduced spin factor RedSpin(q) over a scalar extension Ω of Φ with standard Spin frame and a unit-valued quadratic form q. Indeed, if {e_2, v_1, e_0} is a strong Spin frame,
v_1² = e_2 + e_0 = 1,   ā_2 • y_1 = a_2 • y_1,
then in terms of the coordinate map σ, connection involution ¯, quadratic form q, and scalar ring Ω defined by
x̄ := U_{v_1}x,   σ(a_i) = σ(ā_i) := V_{a_i}|_{J_1},   q(x_1) := σ(q_2(x_1)) = σ(q_0(x_1)),
Ω := σ(J_2) = σ(J_0) ⊆ End_Φ(J_1),
the map
φ : a_2 ⊕ y_1 ⊕ b_0 → (σ(a_2), y_1, σ(b_0)) = (σ(ā_2), y_1, σ(b̄_0))
is an isomorphism J → RedSpin(q).
proof. By Diagonal Commutativity 11.6(2), Ω is symmetric in the indices 2, 0 and forms a commutative associative Φ-subalgebra of End_Φ(J_1); therefore J_1 is a left Ω-module under ω · x_1 = ω(x_1). By 11.8(1) the Spin Bar Relation is equivalent to the Spin Peirce Relation σ(q_2(x_1)) = σ(q_0(x_1)), so the (un-subscripted) quadratic form q is also symmetric in the indices 2, 0, and is indeed quadratic over Ω since q(x, y) is Ω-bilinear: q(ωx, y) = ωq(x, y) for all x, y ∈ J_1, ω = σ(a_2) ∈ Ω,
by Spin Bar 11.8(2)(i). To show that φ is a homomorphism, it suffices to prove φ(x²) = φ(x)² for all elements x = a_2 ⊕ y ⊕ b_0. But by the Peirce relations
x² = [a_2² + q_2(y)] ⊕ [{a_2 + b_0, y}] ⊕ [b_0² + q_0(y)] = [a_2² + q_2(y)] ⊕ [(σ(a_2) + σ(b_0))y] ⊕ [b_0² + q_0(y)],
so
φ(x²) = (σ(a_2² + q_2(y)), (σ(a_2) + σ(b_0))y, σ(b_0² + q_0(y))).
On the other hand, by the multiplication rule 11.3 in RedSpin(q),
φ(x)² = (σ(a_2), y, σ(b_0))² = (σ(a_2)² + q(y), (σ(a_2) + σ(b_0))y, σ(b_0)² + q(y)),
so φ(x²) = φ(x)² follows from σ(c_i²) = σ(c_i)² [by Peirce Specialization 9.1] and σ(q_i(y)) = q(y) [by definition (1) of q]. Finally, to show φ is an isomorphism, it suffices to prove it is a linear bijection on each Peirce space: by Diagonal Commutativity 11.6(2), φ is a bijection σ_i of J_i onto Ω for i = 2, 0, and trivially φ is the bijection 1_{J_1} on J_1. We have seen, conversely, by the Spin Frame Example 11.3, that every RedSpin(q) with unit-valued q has a standard strong Spin frame, so
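The reduced spin factor squaring rule used in this proof can be checked numerically. The sketch below is my own illustration (the names q, square, jordan and the sample choices Φ = ℚ, M = ℚ², q(y) = y·y are not from the text): it implements x² = (a² + q(y), (a+b)y, b² + q(y)) and verifies that the standard Spin frame behaves as claimed, namely e_2 = (1,0,0) and e_0 = (0,0,1) are orthogonal idempotents and v = (0,v,0) with q(v) = 1 squares to the unit.

```python
from fractions import Fraction as F

def q(y):                      # sample quadratic form q(y) = y . y on M = Q^2
    return sum(c * c for c in y)

def square(x):                 # squaring rule in a reduced spin factor RedSpin(q)
    a, y, b = x
    return (a * a + q(y), tuple((a + b) * c for c in y), b * b + q(y))

def jordan(x, z):              # bilinear product recovered by linearizing squares
    s = (x[0] + z[0], tuple(p + r for p, r in zip(x[1], z[1])), x[2] + z[2])
    sq, sx, sz = square(s), square(x), square(z)
    d = (sq[0] - sx[0] - sz[0],
         tuple(a - b - c for a, b, c in zip(sq[1], sx[1], sz[1])),
         sq[2] - sx[2] - sz[2])
    return (d[0] / 2, tuple(c / 2 for c in d[1]), d[2] / 2)  # needs 1/2 in the scalars

e2  = (F(1), (F(0), F(0)), F(0))
e0  = (F(0), (F(0), F(0)), F(1))
v   = (F(0), (F(1), F(0)), F(0))       # connecting element with q-value 1
one = (F(1), (F(0), F(0)), F(1))

assert square(e2) == e2 and square(e0) == e0          # idempotents
assert jordan(e2, e0) == (F(0), (F(0), F(0)), F(0))   # orthogonal
assert square(v) == one                               # v^2 = e2 + e0 = 1
```

The halving in `jordan` mirrors the text's standing assumption that ½ lies in the ring of scalars.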
(up to isomorphism) these are precisely all Jordan algebras with such a frame.
3. Spin Coordinatization
Coordinatization is most clearly described for strong Peirce frames. When the frames are not strong, we follow the trail blazed by Jacobson to a nearby isotope which is strongly connected. The strong coordinatization proceeds smoothly because the bar involution is an algebra automorphism and interacts nicely with products, and we have complete symmetry between the indices 2 and 0. For the non-strong case the map U_v is merely a structural transformation, transforming J-products into products in an isotope, and we do not have symmetry in J_2, J_0. Rather than carry out the calculations in the general case, we utter the magic word "diagonal isotope" to convert the general case into a strong case. Our isotope preserves the idempotent e_2 but shifts e_0, so we base our coordinatization on the space J_2. Notice that the main difference between the present case and the previous strong case is that, since our frame is not strong, we do not have a connection involution, and we do not know that the quadratic form q is unit-valued (takes on the value 1); all we know is that it is invertible-valued (takes on an invertible value at the connecting element v_1).
Theorem 11.10 (Spin Coordinatization Theorem). A Jordan algebra has a Spin frame iff it is isomorphic to a reduced spin factor RedSpin(q) over a scalar extension Ω of Φ with standard Spin frame and an invertible-valued quadratic form q. Indeed, if {e_2, v_1, e_0} is a Spin frame,
v_1² = v_2 + v_0 for v_i invertible in J_i,   q_2(x_1) • x_1 = q_0(x_1) • x_1,
then in terms of the coordinate map σ, quadratic form q, scalar μ, and scalar ring Ω defined by
σ(a_2) := V_{a_2}|_{J_1},   q(x_1) := μσ(U_{v}^{-1}(q_0(x_1))),   μ := σ(v_2) ∈ Ω := σ(J_2) ⊆ End_Φ(J_1),
the map
φ : a_2 ⊕ y_1 ⊕ b_0 → (σ(a_2), y_1, μσ(U_{v}^{-1}(b_0)))
is an isomorphism J → RedSpin(q) sending the given Spin frame to a standard Spin frame,
{e_2, v_1, e_0} → {(1, 0, 0), (0, v_1, 0), (0, 0, 1)}.
proof. By the Creating Involutions Proposition 10.5, the diagonal isotope J̃ := J^(u) for u := e_2 + v_0^{-1} has 1̃ = ẽ_2 + ẽ_0 (ẽ_2 = e_2, ẽ_0 = v_0^{-1}), has the same Peirce decomposition as J, and still satisfies the Spin Peirce Relation (since all diagonal isotopes do, by the Diagonal Spin Isotope Lemma 11.7), but now has a strong Spin frame {ẽ_2, v_1, ẽ_0}. Thus by the Strong Spin Coordinatization Theorem J̃ ≅ RedSpin(q̃), and therefore J = (J̃)^{(u^{-2})} for diagonal u^{-2} = e_2 + v_0² [by Invertible Products 6.7(3)] is isomorphic to RedSpin(q̃)^{(ũ)} for ũ = (1, 0, μ), which in turn is (by 7.4(2)) a reduced spin Jordan algebra RedSpin(q) (where q = μq̃ still takes on an invertible value on Ω). This completes the proof that J is reduced spin.
If we want more detail about q and the form of the isomorphism, we must argue at greater length. J̃ = J^(u) has by Creating Involutions 10.5 supplementary orthogonal idempotents ẽ_2 = e_2, ẽ_0 = v_0^{-1} strongly connected by the same old v_1, and with the same old Peirce decomposition; by the Strong Coordinatization Theorem 11.9 we have an explicit isomorphism J̃ → RedSpin(q̃) given by φ̃(a_2 ⊕ y ⊕ b_0) = (σ̃(a_2), y, σ̃(b̄_0)). Here the coordinate map, scalar ring, and quadratic form in J̃ can be expressed in J as
σ̃(a_2) = σ(a_2),   σ̃(b̄_0) = σ(U_{v}^{-1}(b_0)),   Ω̃ = Ω,   q̃(y) = σ(U_{v}^{-1}(q_0(y))).
Indeed, by Creating Involutions 10.5(1) the connection involution on J̃ has b̄_0 = U_{v}^{-1}(b_0), and σ̃_2(a_2) = σ_2(a_2) [by Shifted Peirce Specialization 10.5(2)] = σ(a_2) [by definition], σ̃_0(b_0) = σ̃_2(b̄_0) = σ(U_{v}^{-1}(b_0)). From this, by 11.9 the scalar ring is Ω̃ = σ̃_2(J̃_2) = σ_2(J_2) = Ω as we defined it. By Creating Involutions 10.5(3) we have q̃_0 = q_0, and by Strong Coordinatization the quadratic form q̃ is given by q̃(y) = σ̃_0(q̃_0(y)) = σ̃_0(q_0(y)) = σ(U_{v}^{-1}(q_0(y))) [by the above]. By Creating Involutions 10.5(1) the map φ̃ reduces to
φ̃ : a_2 ⊕ y_1 ⊕ b_0 → (σ(a_2), y_1, σ(U_{v}^{-1}(b_0))).
Under this mapping φ̃ the original (weak) Spin frame is sent as follows, in terms of μ := σ(v_2):
e_2 → (1, 0, 0),   v → (0, v, 0),   e_0 → (0, 0, μ^{-1}),   u^{-2} → (1, 0, μ),
since σ(e_2) = 1, and taking μ := σ(v_2) as in the definition we have σ̃_0(v_0^k) = σ(U_{v}^{-1}(v_0^k)) = μ^{k-1} [by Creating Involutions 10.5(2)], so for k = 0 we have σ̃_0(e_0) = μ^{-1}, and for k = 2 we have σ̃_0(v_0²) = μ, hence the diagonal element which recovers J from J̃ is sent
to φ̃(u^{-2}) = φ̃(e_2 + v_0²) = (1, 0, μ). The isomorphism φ̃ is at the same time an isomorphism
φ̃ : J = J̃^{(u^{-2})} → RedSpin(q̃)^{(ũ)}   (ũ = (1, 0, μ))
[using Jordan Isotope 7.3(4) and our definitions]. By Quadratic Factor Isotopes 7.4(2) the map (a, y, b) → (a, y, μb) is an isomorphism RedSpin(q̃)^{(ũ)} → RedSpin(μq̃) = RedSpin(q) of Jordan algebras (recall our definition of q as μq̃); combining this with the isomorphism φ̃ above gives an isomorphism J → RedSpin(q), given explicitly as in (2) by a_2 ⊕ y_1 ⊕ b_0 → (σ(a_2), y_1, σ(U_{v}^{-1}(b_0))) → (σ(a_2), y_1, μσ(U_{v}^{-1}(b_0))), sending e_2, v, e_0 to (1, 0, 0), (0, v, 0), (0, 0, 1). We have seen, conversely, by the Spin Frame Example 11.3, that every RedSpin(q) with invertible-valued q has a Spin frame, so these are again precisely all Jordan algebras with such a frame.
Exercise 11.10* (1) Show V_{x²}U_{x}^{-1} = U_{x, x^{-1}} whenever x is invertible in a unital Jordan algebra. (2) Show σ_2(U_{v}^{-1}(b_0)) = μ^{-1}σ_2(U(b_0)) for U = ½U_{v, v^{-1}}, μ = σ(E_2(v²)). (3) Show the isomorphism and quadratic form in 11.10 can be expressed as φ(a_2 ⊕ y_1 ⊕ b_0) = (σ_2(a_2), y_1, σ_2(U(b_0))), q(y_1) = σ(U(q_0(y_1))).
4. Problems for Chapter 11
Problem 1*. (1) Among the 2 choices for coordinatizing J̃_0 in 11.9 we used the second, moving J_0 over to J_2 (where Ω̃ = Ω) via U_{v}^{-1} before coordinatizing it. This was our first example of asymmetry in the indices 2, 0. Show that we could also use the direct form σ̃(b̄_0) = σ(b_0)σ(v_0)^{-1}. (2) Among 3 choices for coordinatizing q̃ stemming from 11.9, we used the expression relating the new q̃ to the old q_0 through U_{v}^{-1}. Show that we could also use the direct expression q̃(y) = q_2(y, v_0^{-1} • y) in terms of the index 2, as well as a direct expression q̃(y) = σ̃_0(q_0(y))σ(v_0)^{-1} in terms of the index 0.
CHAPTER 12
Hermitian Coordinatization
In this chapter we give another general coordinatization theorem, this time for those 2×2 hermitian matrix algebras H_2(D, ∗) where the associative coordinate algebra D is symmetrically-generated in the sense that it is generated by its symmetric elements. These hermitian algebras will be separated off, not by an identity, but by the property that their off-diagonal Peirce space J_1 is a cyclic J_2-module generated by an invertible (connecting) element.
General hermitian algebras H_2(D, ∗) need not have D symmetrically generated. For example, the quaternions ℍ under the standard involution are not generated by H(ℍ) = ℝ1. However, for the simple Jordan algebras we are interested in, the only non-symmetrically generated H_2(D, ∗)'s (where H(D) is a division algebra NOT generating D) are those where D is a composition algebra, and these algebras are simultaneously spin factors. Our view is that it is reasonable to exclude them from the ranks of algebras of "Truly Hermitian Type" and consign them to Quadratic Form Type.
1. Cyclic Frames
We begin by exploring the structural consequences that flow from the condition that there exist a cyclic 2-frame. As usual, throughout the chapter we fix a Peirce decomposition with respect to an idempotent e, and set e_2 := e, e_0 := 1 − e; i represents a "diagonal" index i = 2, 0 and j = 2 − i its complementary index; the Peirce specializations 9.1 of J_i on J_1 are denoted by a subscriptless σ.
Definition 12.1 (Cyclic Peirce Condition). A Jordan algebra J is said to satisfy the cyclic Peirce condition with respect to a pair of supplementary idempotents e_2, e_0 if the off-diagonal Peirce space J_1 has a cyclic Peirce generator y_1 as left J_2-module,
J_1 = D(y_1)
for D the subalgebra of End_Φ(J_1) generated by σ(J_2). A cyclic 2-frame {e_2, v_1, e_0} for a unital Jordan algebra J consists of a pair of supplementary orthogonal idempotents e_2, e_0 and an invertible cyclic generator v_1 ∈ J_1. As always, the frame is strong if v_1 strongly connects e_2 and e_0:
v_1² = e_2 + e_0 = 1.
Exercise 12.1* Let D_i denote the subalgebra of End_Φ(J_1) generated by σ(J_i), and let v be any invertible element of J_1. (1) Show D_2(v) = D_0(v); conclude {e_2, v, e_0} is a cyclic frame iff D_0(v) = J_1, so cyclicity is symmetric in the indices 2, 0. (2) Show D_2(v) = D_2(v^{-1}); conclude {e_2, v^{-1}, e_0} is a cyclic frame iff {e_2, v, e_0} is cyclic.
Example 12.2 (Twisted Hermitian Cyclicity). Let J be the twisted 2×2 hermitian algebra H_2(A, Γ) for a unital associative ∗-algebra A and diagonal matrix Γ = diag{1, γ_2} (normalized to have its first entry 1). Then e_2 = 1[11] = E_11, e_0 = γ_2^{-1}[22] are supplementary orthogonal idempotents, with J_1 = A[12], J_2 = H(A, ∗)[11]. Under the identification of J_1 with A and J_2 with H(A, ∗), the endomorphism σ_2(b[11]) on J_1 corresponds to left multiplication L_b by b on A [warning: σ_0(b[22]) corresponds to right multiplication R_{γ_2 b̄}, not to R_b], so D ⊆ End_Φ(J_1) corresponds to all left multiplications L_B ⊆ End_Φ(A) by elements of the subalgebra B = ⟨H(A, ∗)⟩ ⊆ A generated by the symmetric elements: D(a[12]) = (Ba)[12]. In this situation the element a[12] is a cyclic generator for the space J_1 iff (1) a ∈ A is left-invertible: ba = 1 for some b ∈ A, and (2) A is symmetrically-generated: H(A, ∗) generates A as associative algebra, B = A. When A is symmetrically generated we always have a standard cyclic 2-frame
{1[11], 1[12], γ_2^{-1}[22]} ⊆ H_2(A, Γ).
Indeed, whenever a is invertible we get a cyclic 2-frame {1[11], a[12], γ_2^{-1}[22]}. This frame is strong iff γ_2^{-1} is the norm of an invertible element of A:
(3)   γ_2^{-1} = aā for some invertible a.
In particular, when A is symmetrically-generated and γ_2 = 1 we have a standard strong cyclic 2-frame {1[11], 1[12], 1[22]} ⊆ H_2(A, ∗).
Even though A = ℝ is (trivially) symmetrically-generated under the (trivial) identity involution, and J_1 is cyclically generated by any nonzero a[12], when we take γ_2 = −1 the twisted algebra H_2(A, Γ) has no strong frame {1[11], a[12], γ_2^{-1}[22]}, because γ_2^{-1} = −1 ≠ aā = a².
proof. (1)-(2) Since by Brace Products 7.8(2)(iii) we have σ(b[11])(a[12]) = (ba)[12] = (L_b(a))[12], the generators σ(b[11]) of D correspond to the left multiplications L_b, and D corresponds to L_B. Then a[12] is a cyclic generator for J_1 iff (Ba)[12] = A[12], i.e. Ba = A. This certainly happens if a is left-invertible and A = B, since ba = 1 as in (1) implies Ba = Aa ⊇ Aba = A1 = A; conversely, Ba = A implies left-invertibility ba = 1 for some b ∈ B, so A = Ba = Ba·1 = B(aā)b̄ ⊆ BH(A)B ⊆ B and B = A.
(3) By Brace Products 7.8(2)(ii), a[12]² = (aγ_2ā)[11] + (āa)[22] equals 1 iff aγ_2ā = 1 = γ_2āa; this forces a to have left and right inverses, therefore to be invertible (as is γ_2), in which case (aγ_2)ā = 1 ⟺ 1 = ā(aγ_2) ⟺ aā = γ_2^{-1}.
The next several lemmas gather technical information about the coordinate algebras D of cyclic 2-frames. Throughout, let J be a unital Jordan algebra with a pair of supplementary orthogonal idempotents e_2, e_0.
Lemma 12.3 (Cyclic Diagonal Isotopes). Diagonal isotopes always inherit the cyclic Peirce condition: if {e_2, y_1, e_0} is a cyclic frame for J then any diagonal isotope J^(u) (u = u_2 + u_0) has cyclic frame {e_2^(u) = u_2^{-1}, y_1^(u) = y_1, e_0^(u) = u_0^{-1}}.
proof. In any diagonal isotope J^(u) we know by the Diagonal Isotope Lemma 10.4 that e_2^(u), e_0^(u) are supplementary idempotents with the same Peirce decomposition as e_2, e_0, and we have D^(u) = D as subalgebras of End_Φ(J_1), because by 10.4 σ^(u)(a_2) = σ(a_2)σ(u_2) ∈ D implies D^(u) ⊆ D, and conversely σ(a_2) = (σ(a_2)σ(u_2))(σ(u_2^{-2})σ(u_2)) ∈ D^(u) implies D ⊆ D^(u). Thus J_1 = D(y) is cyclic in J iff J_1^(u) = D^(u)(y) is cyclic in J^(u).
Exercise 12.3A* Let v be any invertible element of J_1. Show for i = 2, 0, j = 2 − i that (1) U_v V_{a_i} U_{v}^{-1} = V_{U_v(a_i)} V_{u_j} on J_1 (u_j := U_{v}^{-1}(e_i)) (use the linearized Fundamental Formula), (2) U_v D_i U_{v}^{-1} = U_{v}^{-1} D_i U_v = D_j. (3) Conclude anew (as in Ex. 12.1) that J_1 = D_2(v) = D_0(v) when {e_2, v_1, e_0} is cyclic.
Exercise 12.3B* Alternately, establish the previous exercise by showing (1) {d_j ∈ D_j | d_j(v) ∈ D_i(v)} is a subalgebra which contains σ(J_j) = σ(U_v(J_i)), (2) D_j(v) ⊆ D_i(v), hence (3) D_j(v) = D_i(v).
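Example 12.2's symmetric-generation condition is easy to test in a concrete case. The sketch below is my own illustration (the matrices and the `mat_mul` helper are not from the text): it takes A = M_2(ℚ) with the transpose involution, whose symmetric elements do generate all of A, since the products E_11·S and S·E_11 of the symmetric generators E_11 and S = E_12 + E_21 already produce the non-symmetric matrix units.

```python
def mat_mul(X, Y):   # 2x2 matrix product over the rationals (plain nested lists)
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

E11 = [[1, 0], [0, 0]]        # symmetric
E22 = [[0, 0], [0, 1]]        # symmetric
S   = [[0, 1], [1, 0]]        # symmetric: S equals its transpose
E12 = [[0, 1], [0, 0]]        # NOT symmetric
E21 = [[0, 0], [1, 0]]        # NOT symmetric

# products of symmetric matrices reach the non-symmetric matrix units:
assert mat_mul(E11, S) == E12
assert mat_mul(S, E11) == E21
# E11, E22, E12, E21 span M_2(Q), so <H(M_2)> = M_2: symmetrically generated
```

By contrast, for the quaternions under the standard involution H(ℍ) = ℝ1, so no products of symmetric elements ever escape the scalars, as the chapter introduction notes.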
Lemma 12.4 (Coordinate Map δ). When {e_2, v_1, e_0} is a strong cyclic 2-frame then for all elements a_k ∈ J_2, σ(a) = V_a|_{J_1}, we have
(i) σ(a_1)σ(a_2)⋯σ(a_n)(v_1) = σ(ā_n)⋯σ(ā_1)(v_1);
(ii) J_1 = D_2(v_1) = D_0(v_1) is cyclic as left J_2- or J_0-module, for D_i the subalgebra of End_Φ(J_1) generated by σ(J_i);
(iii) d ∈ D is determined by its value on v_1: d(v_1) = 0 ⟹ d = 0;
(iv) δ(d) := d(v_1) defines a Φ-linear bijection δ : D → J_1.
proof. By the Connection Involution Lemma 10.3(1) we know that bar is an involution of the Jordan algebra which interchanges e_2, e_0, hence their diagonal Peirce spaces. For (i) we induct on n, n = 0 being vacuous; for n + 1 elements a_i ∈ J_2,
σ(a_1)σ(a_2)⋯σ(a_n)σ(a_{n+1})(v) = σ(a_1)σ(a_2)⋯σ(a_n)(ā_{n+1} • v) [by Connection Fixed Point 10.3(3)] = σ(ā_{n+1})σ(a_1)σ(a_2)⋯σ(a_n)(v) [by Peirce Associativity 9.3, for ā_{n+1} ∈ J_0] = σ(ā_{n+1})σ(ā_n)⋯σ(ā_2)σ(ā_1)(v) [by the induction step].
This shows D_2(v) = D_0(v) as in (ii). For (iii), d(v) = 0 ⟹ d(J_1) = d(D_0(v)) (by (ii)) = D_0(d(v)) [by Peirce Associativity] = 0, hence d = 0 in End_Φ(J_1). For (iv), δ is injective by (iii) and surjective by (ii) for D = D_2.
Lemma 12.5 (Reversal Involution ∗). When {e_2, v_1, e_0} is a strong cyclic 2-frame, the coordinate algebra D carries a reversal involution ∗ (written as a superscript):
(σ(a_1)σ(a_2)⋯σ(a_n))∗ := σ(a_n)⋯σ(a_2)σ(a_1),
which interacts with the map δ and the trace t_2 and norm q_2 on J_1 by
(i) d∗(v_1) = \overline{d(v_1)}, δ(d∗) = \overline{δ(d)};
(ii) q_2(d(v_1), c(v_1)) = t_2(cd∗(v_1)) = t_2(δ(cd∗));
(iii) d + d∗ = σ(t_2(d(v_1))) = σ(t_2(δ(d)));
(iv) dd∗ = σ(q_2(d(v_1))) = σ(q_2(δ(d)));
(v) H(D, ∗) = σ(J_2).
proof. The invertibility of the coordinate map δ of 12.4(iv) allows us to pull back the connection involution on J_1 to an involutory linear transformation ∗ := δ^{-1} ∘ ¯ ∘ δ on D. Thus (i) holds by definition. We claim ∗ is just reversal on D, since
δ((σ(a_1)σ(a_2)⋯σ(a_n))∗) = \overline{δ(σ(a_1)σ(a_2)⋯σ(a_n))} [by definition of ∗]
= \overline{σ(a_1)σ(a_2)⋯σ(a_n)(v)} [by definition 12.4(iv) of δ]
= \overline{σ(ā_n)⋯σ(ā_2)σ(ā_1)(v)} [by 12.4(i)]
= σ(a_n)⋯σ(a_2)σ(a_1)(v) = δ(σ(a_n)⋯σ(a_2)σ(a_1)) [since ¯ is a Jordan algebra involution fixing v].
Since the reversal mapping is always an algebra involution as soon as it is well-defined, we have a reversal involution ∗ of the algebra D. The salient facts about the hermitian elements with respect to ∗ now follow. (iii) follows easily: applying the bijective δ, it suffices if the two sides agree on v, and d(v) + d∗(v) = d(v) + \overline{d(v)} (by (i)) = {t_2(d(v)), v} [by Connection Involution 10.3(2) for x_1 = d(v)] = σ(t_2(d(v)))(v).
(iv) follows for monomials d = σ(a_1)σ(a_2)⋯σ(a_n) from
σ(q_2(d(v))) = σ(q_2(σ(a_1)σ(a_2)⋯σ(a_n)(v))) = σ(U_{a_1}U_{a_2}⋯U_{a_n}(q_2(v))) [by repeated q-Composition 9.5(3)] = σ(a_1)σ(a_2)⋯σ(a_n)·1·σ(a_n)⋯σ(a_2)σ(a_1) [by repeated Peirce Specialization 9.1 and σ(q_2(v)) = σ(e) = 1] = dd∗.
The quadratic relation (iv) for sums of monomials will require (ii); it suffices to establish the bilinear relation (ii) for monomials d = σ(a_1)σ(a_2)⋯σ(a_n), where
q_2(d(v), c(v)) = q_2(σ(a_1)σ(a_2)⋯σ(a_n)(v), c(v))
= q_2(σ(ā_n)⋯σ(ā_2)σ(ā_1)(v), c(v)) [by 12.4(i)]
= q_2(v, σ(ā_1)σ(ā_2)⋯σ(ā_n)c(v)) [by repeated U−1q 9.5(2) for j = 2]
= q_2(v, cσ(ā_1)σ(ā_2)⋯σ(ā_n)(v)) [by Peirce Associativity 9.3]
= q_2(v, cσ(a_n)⋯σ(a_2)σ(a_1)(v)) [by 12.4(i)]
= q_2(v, cd∗(v)) [by (i)]
= t_2(cd∗(v)) [by definition of the trace].
Once we have (ii) we can finish (iv): for mixed terms we have σ(q_2(d(v), c(v))) = σ(t_2(cd∗(v))) [by (ii)] = cd∗ + (cd∗)∗ [by (iii)] = cd∗ + dc∗ [since ∗ is an involution]. Finally, for (v): d = d∗ ⟹ d = ½[d + d∗] = σ(½t_2(d(v))) ∈ σ(J_2) by (iii), and the converse is clear by the definition of reversal.
2. Strong Hermitian Coordinatization
Now we are ready to establish the main result of this section, that Jordan algebras with strong hermitian frame can be coordinatized as hermitian algebras.
Theorem 12.6 (Strong 2×2 Hermitian Coordinatization). A Jordan algebra J has a strong cyclic 2-frame {e_2, v_1, e_0},
v_1² = e_2 + e_0 = 1,   J_1 = D(v_1),
iff it is isomorphic to a hermitian 2×2 matrix algebra with standard frame and symmetrically-generated coordinate algebra. Indeed, if J has a strong cyclic 2-frame then, in terms of the coordinate map σ, the coordinate involution ∗, and the coordinate algebra D defined by
σ(a_2) := V_{a_2}|_{J_1},   d∗(v_1) := U_{v_1}(d(v_1)),   D := the subalgebra of End_Φ(J_1) generated by σ(J_2),
the map
φ : a_2 ⊕ d(v_1) ⊕ b_0 → σ(a_2)[11] + d[12] + σ(b̄_0)[22]
is an isomorphism J → H_2(D, ∗) sending the given cyclic 2-frame {e_2, v_1, e_0} to the standard cyclic 2-frame {1[11], 1[12], 1[22]}.
proof. By bijectivity 12.4(iv), every x ∈ J can be written uniquely as x = a_2 ⊕ d(v) ⊕ b_0. To show that our map φ(x) = σ(a_2)[11] + d[12] + σ(b̄_0)[22] is a homomorphism, it suffices to prove φ(x²) = φ(x)². But by the Peirce Brace and Orthogonality Rules 8.5,
Quadratic Forms 9.4, Associativity 9.3, and Coordinate Map 12.4(i),
x² = [a_2² + q_2(d(v))] ⊕ [{a_2 + b_0, d(v)}] ⊕ [b_0² + q_0(d(v))]
= [a_2² + q_2(d(v))] ⊕ [σ(a_2)d(v) + dσ(b̄_0)(v)] ⊕ [b_0² + q_0(d(v))]
= [a_2² + q_2(d(v))] ⊕ [(σ(a_2)d + dσ(b̄_0))(v)] ⊕ [b_0² + q_0(d(v))],
so
φ(x²) = σ(a_2² + q_2(d(v)))[11] + (σ(a_2)d + dσ(b̄_0))[12] + σ(\overline{b_0² + q_0(d(v))})[22].
On the other hand, by the Basic Brace Product Rules 3.7(2) for H_2(D),
φ(x)² = (σ(a_2)[11] + d[12] + σ(b̄_0)[22])² = (σ(a_2)² + dd∗)[11] + (σ(a_2)d + dσ(b̄_0))[12] + (σ(b̄_0)² + d∗d)[22],
so φ(x²) = φ(x)² follows from σ(a_2²) = σ(a_2)² and σ(\overline{b_0²}) = σ((b̄_0)²) = σ(b̄_0)² [by the fact that bar is an automorphism and by the Peirce Specialization Rules 9.1], and we see σ(q_2(d(v))) = dd∗ [by the Reversal Involution Lemma 12.5(iv)] and σ(\overline{q_0(d(v))}) = σ(q_2(\overline{d(v)})) [by Connection Action 10.3(2)] = σ(q_2(d∗(v))) [by 12.5(i)] = d∗d [by 12.5(iv) again].
To show φ is an isomorphism it suffices to prove it is a linear bijection on each Peirce space: by Reversal Involution 12.5(v) and Peirce Injectivity 9.2(1), φ is a bijection σ of J_2 onto H(D, ∗)[11] and a bijection σ ∘ ¯ of J_0 onto H(D, ∗)[22], and by Coordinate Map 12.4(iv) φ is a bijection δ^{-1} : J_1 → D ≅ D[12]. Finally, φ clearly sends e_2 → 1[11] (σ(e_2) = 1), e_0 → 1[22] (σ(ē_0) = σ(e_2) = 1), v = 1(v) → 1[12]. We have seen, conversely, by the Twisted Hermitian Cyclicity Example 12.2, that every H_2(A, ∗) for symmetrically-generated A has a standard strong cyclic 2-frame, so these are precisely all Jordan algebras with such a frame.
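The block form of the square used in this proof, x² = (a² + dd∗)[11] + (ad + db)[12] + (b² + d∗d)[22], can be checked numerically by realizing H_2(D, ∗) inside an associative algebra. The sketch below is my own check (not from the text): it takes D = M_2(ℚ) with the transpose involution, so a hermitian element becomes a symmetric 4×4 matrix, and the Jordan square agrees with the associative square.

```python
from fractions import Fraction as F

def mul(X, Y):                 # square matrix product over the rationals
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def add(X, Y):
    return [[x + y for x, y in zip(r, s)] for r, s in zip(X, Y)]

def T(X):                      # transpose, playing the role of the involution *
    return [list(r) for r in zip(*X)]

# hermitian element x = a[11] + d[12] + b[22] with a, b symmetric in D = M_2(Q)
a = [[F(1), F(2)], [F(2), F(3)]]
b = [[F(0), F(1)], [F(1), F(4)]]
d = [[F(1), F(0)], [F(2), F(1)]]

def block(a11, a12, a21, a22):  # assemble the 4x4 matrix [[a11, a12], [a21, a22]]
    return [a11[0] + a12[0], a11[1] + a12[1], a21[0] + a22[0], a21[1] + a22[1]]

x  = block(a, d, T(d), b)
x2 = mul(x, x)                  # square in the ambient associative algebra

expected = block(add(mul(a, a), mul(d, T(d))),      # a^2 + d d*
                 add(mul(a, d), mul(d, b)),         # a d + d b
                 T(add(mul(a, d), mul(d, b))),      # (a d + d b)*
                 add(mul(b, b), mul(T(d), d)))      # b^2 + d* d
assert x2 == expected
```

Since the algebra is special, the Jordan square x•x coincides with the associative x², so this verifies exactly the three block coefficients appearing in the proof.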
3. Hermitian Coordinatization
Coordinatization is most clearly described for strong frames. When the frames are not strong, we again follow the trail blazed by Jacobson to a nearby strongly connected isotope, muttering the magic word "diagonal isotope" to convert the general case into a strong case.
Theorem 12.7 (2×2 Hermitian Coordinatization). A Jordan algebra J has a cyclic 2-frame {e_2, v_1, e_0},
v_1² = v_2 + v_0 for v_i invertible in J_i,   J_1 = D(v_1),
iff it is isomorphic to a twisted hermitian 2×2 matrix algebra with standard frame and symmetrically-generated coordinate algebra. Indeed, if J has a cyclic 2-frame then, in terms of the coordinate map σ, coordinate involution ∗, and coordinate algebra D defined by
σ(a_2) := V_{a_2}|_{J_1},   d∗(v_1) := U_{v_1}(d(v_1^{-1})),   D := the subalgebra of End_Φ(J_1) generated by σ(J_2),
the map
φ : a_2 ⊕ d(v_1) ⊕ b_0 → σ(a_2)[11] + d[12] + σ(U_{v}^{-1}(b_0))[22]   (Γ := diag{1, γ_2}, γ_2 := σ(v_2))
is an isomorphism J → H_2(D, Γ) sending the given cyclic 2-frame {e_2, v_1, e_0} to the standard cyclic frame {1[11], 1[12], γ_2^{-1}[22]}.
proof. The argument follows the same path as that of Spin Coordinatization 11.10. By the Diagonal Isotope Lemma 10.4, the diagonal isotope J̃ := J^(u) for u = e_2 + v_0^{-1} has the same Peirce decomposition as J and still satisfies the cyclic Peirce condition (since all diagonal isotopes do, by Cyclic Diagonal Isotopes 12.3), but is now strongly connected, so by the Strong Hermitian Coordinatization Theorem J̃ ≅ H_2(D̃, ∗̃); therefore J = J̃^{(u^{-2})} [by Jordan Isotopes 7.3(4), for diagonal u^{-2} = e_2 + v_0²] ≅ H_2(D̃, ∗̃)^{(Γ)} = H_2(D, Γ) [by Twisted Matrix 7.8(3)] is a twisted hermitian matrix algebra. This completes the proof that J is twisted hermitian.
If we want more detail about the form of the isomorphism, as in Spin Coordinatization we must argue at greater length. J̃ = J^(u) has, by Creating Involutions 10.5, supplementary orthogonal idempotents ẽ_2 = e_2, ẽ_0 = v_0^{-1} strongly connected by the same old v_1, and with the same old Peirce decomposition; by the Strong Coordinatization Theorem 12.6 we have an explicit isomorphism J̃ → H_2(D̃, ∗̃) by φ̃(a_2 ⊕ d̃(v) ⊕ b_0) = σ̃(a_2)[11] + d̃[12] + σ̃(b̄_0)[22].
Here the coordinate map, coordinate ring, connection involution, and coordinate involution in J̃ can be expressed in J as
σ̃ = σ,   D̃ = D,   b̄_2 = U_v(b_2),   b̄_0 = U_{v}^{-1}(b_0),   ∗̃ = ∗.
Indeed, by Strong Coordinatization 12.6 the coordinate map is σ̃(a_2) = σ_2(a_2) [by Diagonal Isotope 10.4] = σ(a_2) [by (1)], so by Strong Coordinatization the coordinate ring is D̃ = ⟨σ̃(J̃_2)⟩ = ⟨σ(J_2)⟩ = D as in (1). By Creating Involutions 10.5(1) the connection involution has
b̄_2 = U_v(b_2), b̄_0 = U_{v}^{-1}(b_0), x̄_1 = U_v({v_0^{-1}, x_1}). Then by Strong Coordinatization d∗̃(v) = \overline{d(v)} = U_v({v_0^{-1}, d(v)}) = U_v(d({v_0^{-1}, v})) [by Peirce Associativity 9.3] = U_v(d(v^{-1})) = d∗(v), and ∗̃ = ∗ as asserted, because of the general relation {q_0(v_1)^{-1}, v_1} = v_1^{-1}, resulting from cancelling U_v from the equation U_v({v_0^{-1}, v}) = U_v V_v(v_0^{-1}) = U_{v², v}(v_0^{-1}) = {v_2 + v_0, v_0^{-1}, v} = {v_0, v_0^{-1}, v} [by Peirce Orthogonality Rules 8.5] = v [by Peirce Specialization Rules 9.1] = U_v(v^{-1}). By Strong Coordinatization 12.6 the map φ̃ reduces to
φ̃ : a_2 ⊕ d(v) ⊕ b_0 → σ(a_2)[11] + d[12] + σ(U_{v}^{-1}(b_0))[22].
Under the mapping φ̃ the frame is sent as follows:
e_2 → 1[11],   v → 1[12],   e_0 → γ_2^{-1}[22],   u^{-2} → diag{1, γ_2} = Γ   (γ_2 := σ(v_2)),
since σ(e_2) = 1, and by Creating Involutions 10.5(3) we have σ(U_{v}^{-1}(v_0^k)) = γ_2^{k-1}, so for k = 0, 2 we have σ(U_{v}^{-1}(e_0)) = γ_2^{-1}, σ(U_{v}^{-1}(v_0²)) = γ_2, hence φ̃(u^{-2}) = φ̃(e_2 + v_0²) = 1[11] + γ_2[22] = diag{1, γ_2}. The isomorphism φ̃ is at the same time an isomorphism
φ̃ : J = J̃^{(u^{-2})} → H_2(D̃, ∗̃)^{(Γ)} = H_2(D, ∗)^{(Γ)}
[using Jordan Isotopes 7.3(4)] of J with a diagonal isotope of H_2(D, ∗). By Twisted Matrix Isotope 7.8(3) with γ̃_2 = γ_2, the map L_Γ : a[11] + d[12] + b[22] → a[11] + d[12] + b[22] (the boxes [ij] on the right denoting the Γ-twisted matrix units) is an isomorphism H_2(D, ∗)^{(Γ)} → H_2(D, Γ) of Jordan algebras; combining this with the isomorphism φ̃ gives an isomorphism J → H_2(D, Γ), given explicitly as in (2) by a_2 ⊕ d(v) ⊕ b_0 → σ(a_2)[11] + d[12] + σ(U_{v}^{-1}(b_0))[22], sending e_2, v, e_0 to the standard 1[11], 1[12], γ_2^{-1}[22]. We have seen, conversely, by Twisted Hermitian Cyclicity 12.2, that every H_2(A, Γ) for which A is symmetrically-generated has a standard cyclic 2-frame, so these are again precisely all Jordan algebras with such a frame.
Third Phase: Three's a Crowd
Ashes to ashes, dust to dust Peirce decompositions are simply a must
-Old Jordan folksong, c. 40 B.Z.E.
In this phase we will investigate the structure of Jordan algebras having n ≥ 3 connected supplementary orthogonal idempotents. These come in only one basic flavor: the hermitian algebras H_n(D, ∗) of n×n hermitian matrices with entries from a ∗-algebra (D, ∗). We develop the necessary machinery for general Peirce decompositions with respect to a finite family of idempotents, and obtain a general coordinatization theorem, the Jacobson Coordinatization Theorem, asserting that Jordan algebras with hermitian Peirce n-frames for n ≥ 3 are hermitian matrix algebras. The 5 chapters in this Phase are the heart of the classical approach: Peirce decompositions and coordinatization via hermitian matrix units. It is a very nuts-and-bolts approach: we reach deep inside the Jordan algebra, take out and examine the small Peirce parts of which it is composed, then see how they fit together to create the living organism, the Jordan algebra.
CHAPTER 13
Multiple Peirce Decompositions
1. Decomposition
The time has come, the walrus said, to talk of Peirce decompositions with respect to more than one idempotent. By now the reader has gained some familiarity with Peirce decompositions with respect to a single idempotent, and will not be surprised or overwhelmed when digesting the general case. This time we will immediately apply Peircers to obtain the basic facts of Peirce decompositions; as the World War I song says, "How are you going to keep them down on the farm, after they've seen Peircers?" Not only do Peircers show that Peirce decomposition works, they show why it works, as an instance of the toral actions which play important roles in many areas of mathematics.
Throughout this chapter we will fix a set E = {e_1, …, e_n} of pairwise orthogonal idempotents in a Jordan algebra J: e_i² = e_i, e_i • e_j = 0 (i ≠ j). We supplement these to form a supplementary set of 1 + n orthogonal idempotents in the unital hull Ĵ by adjoining
e_0 := 1̂ − Σ_{i=1}^{n} e_i ∈ Ĵ.
Definition 13.1 (Peirce Projections). The Peirce projections E_ij = E_ji in J determined by an orthogonal family of idempotents E are the linear transformations on J (or Ĵ) given for i, j = 0, 1, …, n by
E_ii := U_{e_i},   E_ij := U_{e_i, e_j} = E_ji (i ≠ j).
Notice that for indices i, j ≠ 0 the Peirce projection E_ij is intrinsically determined on J by e_i and e_j themselves; it is only for index i = 0 that E_{0j} depends on the full collection E of idempotents, in order to form e_0 = 1̂ − Σ_E e_i.
Definition 13.2 (Multiple Peircer). For any 1 + n-tuple of scalars ~t = (t_0, t_1, …, t_n) ∈ Φ^{1+n} we define the Peircer E(~t) to be the linear transformation on J (or Ĵ) given by the U-operator of the element e(~t) =
Σ_{i=0}^{n} t_i e_i ∈ Ĵ:
E(~t) := U_{e(~t)}|_J = Σ_{0≤i≤j≤n} t_i t_j E_ij.
It is very important that the Peircer is the U-operator determined by an element, since this guarantees (by the Fundamental Formula) that it is a structural transformation on J.
Proposition 13.3 (Peircer Torus). The Peircer torus of operators on J determined by E is the strict homomorphism ~t → E(~t) of multiplicative monoids Φ^{1+n} → End_Φ(J): the Toral Property
E(~1) = Σ_{0≤i≤j≤n} E_ij = 1_J,   E(~s ~t) = E(~s)E(~t),
holds strictly in the sense that it continues to hold over all scalar extensions Ω ⊇ Φ, i.e. holds as operators on J_Ω for any 1 + n-tuples ~s = (s_0, s_1, …, s_n), ~t = (t_0, t_1, …, t_n) ∈ Ω^{1+n}.
proof. The most elementary route to show that e(~s), e(~t) lead an associative life together uses only properties of Peirce decompositions relative to a single idempotent. If the multiplication operators E(~t) are toral on the unital hull, their restrictions will be toral on the original algebra. Because of this, it suffices to work in the unital hull, so we again assume from the start that J = Ĵ is unital, and we can treat the idempotent e_0 and the index 0 like all the others. Trivially the E_ij are supplementary operators, since the e_i are supplementary elements: Σ_i E_ii + Σ_{i<j} E_ij = Σ_i U_{e_i} + Σ_{i<j} U_{e_i, e_j} = U_{Σ_i e_i} = U_{e(~1)} = U_1 = 1_J by the basic property of the unit element. We have
E(~s)E(~t) − E(~s ~t) = Σ_{ij, kl} (s_i s_j)(t_k t_l)(E_ij E_kl − δ_ik δ_jl E_ij),
When $\{i,j\} = \{k,\ell\}$ we have $E_{ij}E_{k\ell} = E_{ij}E_{ij} = E_{ij}\sum_{r \le s} E_{rs}$ (by the above orthogonality of distinct projections) $= E_{ij}1_J = E_{ij}$.

Recall that one virtue of the Peircers is that they clearly reveal why the Peirce projections are a supplementary family of orthogonal projections; unfortunately, our elementary proof required us to prove this directly instead of harvesting it as a consequence. For a single idempotent, $U_{e(\vec s\,)}U_{e(\vec t\,)} = U_{e(\vec s\,\vec t\,)}$ followed immediately from Operator Power-Associativity 5.6(2) because $e(\vec s\,), e(\vec t\,), e(\vec s\,\vec t\,)$ all live inside a subalgebra $\Phi[x]$. For more than one idempotent this cohabitation still holds, but to establish it we would have to digress through standard, though lengthy, arguments about scalar extensions and localizations. (Rather than cast this proof into the darkness of the Appendices, we will instead sketch it sotto voce in Section 6.)

Exercise 13.3A* In any associative algebra where $\frac12 \in \Phi$, show that $E^2 = E$, $EA + AE = 0$ imply $EA = AE = 0$. If $\frac12 \notin \Phi$, show that $E^2 = E$, $EA + AE = 0$, $EAE = 0$ imply $EA = AE = 0$. Use this to derive the orthogonality of the Peirce projections directly from the Jordan identity $U_{x^2} = U_x^2$ and its linearizations (you can also save time by using orthogonality $\{J_2, J_0, J\} = 0$): if $e, f, g, h$ are orthogonal idempotents show (1) $U_e^2 = U_e$, (2) $U_{e,f}^2 = U_{e,f}$, (3) $U_e U_f = 0$, (4) $U_e U_{e,f} = U_{e,f}U_e = 0$, (5) $U_e U_{f,g} = U_{f,g}U_e = 0$, (6) $U_{e,f}U_{e,g} = 0$, (7) $U_{e,f}U_{g,h} = 0$. Give a different direct proof of (2) using the general identity $U_{x,y}^2 = U_{x^2,y^2} + V_{x,y}V_{y,x} - V_{U_x y^2}$.
Exercise 13.3B* A slightly more elegant method for establishing the orthogonality of Peirce projections uses a non-controversial application of Macdonald's Principle, $U_x U_{x^3} = U_{x^4}$ for $x = e(\vec t\,) \in J[T]$, but to recover the individual $E_{ij}$ requires a delicate combinatorial identification of coefficients of various powers of the $t_i$: (1) Verify that no two pairs of pairs $\{i,j\}, \{k,\ell\}$ can give rise to the same product $t_i t_j t_k^3 t_\ell^3$. (2) Use this to show that identifying coefficients of $t_i^4 t_j^4$ in $\bigl(\sum_{ij} t_i t_j E_{ij}\bigr)\bigl(\sum_{k\ell} t_k^3 t_\ell^3 E_{k\ell}\bigr) = \sum_{pq} t_p^4 t_q^4 E_{pq}$ yields $E_{ij}^2 = E_{ij}$, and that identifying coefficients of $t_i t_j t_k^3 t_\ell^3$ yields $E_{ij}E_{k\ell} = 0$ if $\{i,j\} \ne \{k,\ell\}$.
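The projection property can be checked concretely in the special Jordan algebra $M_n(\Phi)^+$, where the diagonal matrix units are orthogonal idempotents and $E_{ii} = U_{e_i}$, $E_{ij} = U_{e_i,e_j}$. A minimal numerical sketch (our own illustration, not from the text; the helper names are ours):

```python
import numpy as np

# Jordan U-operators in the special algebra A^+ of n x n matrices:
# U_x(y) = x y x, and its linearization U_{x,z}(y) = x y z + z y x.
def U(x, y):
    return x @ y @ x

def U2(x, z, y):
    return x @ y @ z + z @ y @ x

n = 4
e = [np.diag([1.0 if r == i else 0.0 for r in range(n)]) for i in range(n)]

def E(i, j, a):
    # Peirce projections: E_ii = U_{e_i}, E_ij = U_{e_i,e_j} for i != j
    return U(e[i], a) if i == j else U2(e[i], e[j], a)

a = np.random.default_rng(0).standard_normal((n, n))

# supplementary: the E_ij (i <= j) sum to the identity operator
assert np.allclose(sum(E(i, j, a) for i in range(n) for j in range(i, n)), a)
# projections: E_ij^2 = E_ij
assert np.allclose(E(1, 2, E(1, 2, a)), E(1, 2, a))
# orthogonality: E_ij E_kl = 0 for {i,j} != {k,l}
assert np.allclose(E(0, 1, E(2, 3, a)), 0)
assert np.allclose(E(0, 0, E(0, 1, a)), 0)
```

Here $E_{ij}(a)$ simply extracts the $(i,j)$ and $(j,i)$ entries of the matrix $a$, which is why the projections are visibly supplementary and orthogonal.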
Either way, elegant or not, we have our crucial decomposition of the identity operator and hence the algebra.

Theorem 13.4 (Peirce Decomposition). (1) The Peirce projections with respect to an orthogonal family of idempotents form a supplementary family of projection operators on J:
$$1_J = \sum_{0 \le i \le j \le n} E_{ij}, \qquad E_{ij}E_{k\ell} = \delta_{ik}\delta_{j\ell}E_{ij},$$
and therefore the space J breaks up as the direct sum of the ranges: we have the Peirce Decomposition of J into Peirce subspaces,
$$J = \bigoplus_{i \le j} J_{ij} \quad \text{for} \quad J_{ij} = J_{ji} := E_{ij}(J).$$
(2) Peirce decompositions are inherited by ideals or by subalgebras containing E: we have Peirce Inheritance
$$K = \bigoplus_{ij} K_{ij} \quad \text{for} \quad K_{ij} = E_{ij}(K) = K \cap J_{ij} \qquad (K \trianglelefteq J \text{ or } E \subseteq K \subseteq J).$$
proof. (1) We have already decomposed the identity operator into supplementary orthogonal projections, and this always leads to a decomposition of the underlying $\Phi$-module into the direct sum of the Peirce subspaces $J_{ij} = E_{ij}(J)$. (2) For Peirce Inheritance, just as in 8.2(2) the Peirce projections $E_{ij}$ are by definition multiplications by 1 and the $e_i$, so they map into itself any ideal or any subalgebra containing E, and therefore induce by restriction a decomposition $K = \bigoplus_{ij} K_{ij}$ for $K_{ij} = E_{ij}(K) = K \cap J_{ij}$ (again noting that if $x \in K \cap J_{ij}$ then $x = E_{ij}(x) \in E_{ij}(K)$).
We will usually denote the Peirce projections by $E_{ij}$; if there were any danger of confusion (which there won't be), we would use the more explicit notation $E_{ij}(E)$ to indicate which family of idempotents gives rise to the Peirce decomposition.

Theorem 13.5 (Multiple Eigenspace Laws). (1) The Peirce subspace $J_{ij}$ is the intersection of J with the eigenspace with eigenvalue $t_i t_j$ of the indeterminate Peircer $E(\vec t\,)$ ($\vec t = (t_0, t_1, \ldots, t_n)$, acting on $J[T]$ for independent indeterminates $t_i \in \Phi[T]$), and also the common eigenspace of the V-operators $V_{e_i}, V_{e_j}$ on J with eigenvalue 1 (or 2 if $i = j$): we have the Peircer- and V-Eigenspace Laws
$$J_{ij} = \{x \in J \mid E(\vec t\,)x = t_i t_j x\},$$
$$J_{ii} = J_2(e_i) = \{x \in J \mid V_{e_i}(x) = 2x\},$$
$$J_{ij} = J_1(e_i) \cap J_1(e_j) = \{x \in J \mid V_{e_i}(x) = V_{e_j}(x) = x\} \quad (i \ne j),$$
and all other $V_{e_k}$ ($k \ne i, j$) have eigenvalue 0 on $J_{ii}, J_{ij}$:
$$J_{ii} = J_2(e_i) \cap \Bigl(\bigcap_{k \ne i} J_0(e_k)\Bigr) = \{x \in J \mid V_{e_i}(x) = 2x,\ V_{e_k}(x) = 0\ (k \ne i)\},$$
$$J_{ij} = J_1(e_i) \cap J_1(e_j) \cap \bigcap_{k \ne i,j} J_0(e_k) = \{x \in J \mid V_{e_i}(x) = V_{e_j}(x) = x,\ V_{e_k}(x) = 0\ (k \ne i,j)\}.$$
Thus on a Peirce space either some idempotent's V-operator has eigenvalue 2 (and all the rest have eigenvalue 0), or else two separate idempotents' V-operators have eigenvalue 1 (and all the rest have eigenvalue 0). We have exactly the same result for the L's, scaled by a factor $\frac12$.

proof. As in the one-idempotent case 8.4, the Peircer Eigenspace Law follows from the definition of the Peircer $E(\vec t\,) = \sum t_i t_j E_{ij}$, since the "eigenvalues" $t_i t_j$ are by construction independent. Similarly, the V-Eigenspace Law follows as in 8.4, since $V_{e_i} = U_{e_i,1} = U_{e_i,\sum_j e_j} = U_{e_i,e_i} + \sum_{j \ne i} U_{e_i,e_j} = 2E_{ii} + \sum_{j \ne i} E_{ij}$, noting that the eigenvalues 2, 1, 0 are distinct when $\frac12 \in \Phi$.
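The V-eigenvalue laws are easy to see in the archetype $M_n(\Phi)^+$, where $V_x(y) = xy + yx$. A small numerical sketch (our own illustration, not the text's):

```python
import numpy as np

def V(x, y):
    # V-operator in A^+: V_x(y) = {x, y} = xy + yx
    return x @ y + y @ x

n, i, j = 4, 1, 2
e = [np.diag([1.0 if r == k else 0.0 for r in range(n)]) for k in range(n)]

# a basis element of the off-diagonal Peirce space J_ij (i != j)
x_ij = np.zeros((n, n)); x_ij[i, j] = 5.0; x_ij[j, i] = -2.0
# a basis element of the diagonal Peirce space J_ii
x_ii = np.zeros((n, n)); x_ii[i, i] = 3.0

assert np.allclose(V(e[i], x_ii), 2 * x_ii)   # eigenvalue 2 on J_ii
assert np.allclose(V(e[i], x_ij), x_ij)       # eigenvalue 1 on J_ij
assert np.allclose(V(e[j], x_ij), x_ij)       # eigenvalue 1 on J_ij
assert np.allclose(V(e[0], x_ij), 0 * x_ij)   # eigenvalue 0 for k != i, j
```

The eigenvalues 2, 1, 0 simply count how many of the two indices of the Peirce space equal the index of the idempotent.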
2. Recovery

We will frequently need to focus on a single idempotent $e_i$ out of the family E, and it will be useful to recognize the Peirce decomposition relative to $e_i$ inside the general Peirce decomposition.

Theorem 13.6 (Peirce Recovery). (1) The Peirce decomposition of J relative to a single idempotent $e_i$ in the family can be recovered as
$$J_2(e_i) = J_{ii}, \qquad J_1(e_i) = \bigoplus_{j \ne i} J_{ij}, \qquad J_0(e_i) = \bigoplus_{j,k \ne i} J_{jk},$$
so the Peirce 2, 1, or 0 space relative to $e_i$ is the sum of the Peirce spaces where 2, 1, or 0 of the indices equal $i$. (2) More generally, the Peirce decomposition relative to any subsum $e_I = \sum_{i \in I} e_i$ for a subset $I \subseteq \{0, 1, \ldots, n\}$ is given by
$$J_2(e_I) = \sum_{i \le j \in I} J_{ij}, \qquad J_1(e_I) = \sum_{i \in I,\ k \notin I} J_{ik}, \qquad J_0(e_I) = \sum_{k \le \ell \notin I} J_{k\ell},$$
so again the Peirce 2, 1, or 0 space relative to $e_I$ is the sum of all Peirce spaces $J_{ij}$ where 2, 1, or 0 of the indices fall in I.

proof. It suffices to establish the general case (2), and here by the Peirce projection definitions 8.2, 13.1 we have
$$E_2(e_I) = U_{e_I} = U_{\sum_{i \in I} e_i} = \sum_{i \in I} U_{e_i} + \sum_{i < j \in I} U_{e_i,e_j} = \sum_{i \le j \in I} E_{ij},$$
$$E_1(e_I) = U_{e_I,\,1-e_I} = U_{\sum_{i \in I} e_i,\ \sum_{k \notin I} e_k} = \sum_{i \in I,\ k \notin I} U_{e_i,e_k} = \sum_{i \in I,\ k \notin I} E_{ik},$$
$$E_0(e_I) = U_{1-e_I} = U_{\sum_{k \notin I} e_k} = \sum_{k \le \ell \notin I} E_{k\ell}.$$
3. Multiplication

The whole rationale for Peirce decomposition is that it breaks an indigestible algebra down into bite-sized Peirce pieces, which have simpler multiplication rules. As in the one-idempotent case, the underlying philosophy is that Peirce spaces behave like the $\Phi$-submodules
$DE_{ij} \subseteq M_n(D)$ and multiply like the matrix units $E_{ij}$ themselves. We must again keep in mind that in the Jordan case the off-diagonal spaces $DE_{ij}, DE_{ji}$ cannot be separated; they are lumped into a single space $J_{ij} = D[ij] = J_{ji}$. This symmetry in indices is important to keep in mind when considering "linked" or "connected" indices. We have the following crucial rules for multiplying multiple Peirce spaces, generalizing Peirce Multiplication 8.5 for Peirce spaces relative to a single idempotent e (which is now subsumed under the general case by taking $e_1 = e$, $e_0 = 1 - e$). Again the Peircer is effective because it satisfies the Fundamental Formula.

Theorem 13.7 (Peirce Multiplication). The Peirce spaces multiply according to the following rules. (1) For bilinear products we have 4 Peirce Brace Rules for distinct indices $i, j, k$:
$$J_{ii}^2 \subseteq J_{ii}; \qquad J_{ij}^2 \subseteq J_{ii} + J_{jj}; \qquad \{J_{ii}, J_{ij}\} \subseteq J_{ij}; \qquad \{J_{ij}, J_{jk}\} \subseteq J_{ik}.$$
(2) For distinct indices $i, j$ we have 3 Peirce U-Product Rules
$$U_{J_{ii}}(J_{ii}) \subseteq J_{ii}; \qquad U_{J_{ij}}(J_{ii}) \subseteq J_{jj}; \qquad U_{J_{ij}}(J_{ij}) \subseteq J_{ij},$$
and for arbitrary $i, j, k, \ell$ we have the Peirce Triple Product Rule
$$\{J_{ij}, J_{jk}, J_{k\ell}\} \subseteq J_{i\ell}.$$
In particular, the diagonal spaces $J_{ii}$ are inner ideals. (3) We have the Peirce Orthogonality Rules
$$\{J_{ij}, J_{k\ell}\} = 0 \text{ if the indices don't link, } \{i,j\} \cap \{k,\ell\} = \emptyset;$$
$$U_{J_{ij}}(J_{k\ell}) = 0 \text{ unless } \{k,\ell\} \subseteq \{i,j\};$$
$$\{J_{ij}, J_{k\ell}, J_{rs}\} = 0 \text{ if the indices cannot be linked}$$
(for possible linkages keep in mind that $J_{rs} = J_{sr}$).

proof. Just as in the single-idempotent case 8.5, for the multiplication rules it is crucial that we have scalars $t_i^{-1}$ to work with, so we pass to the formal Laurent polynomials $J[T, T^{-1}]$ in $T = \{t_0, t_1, \ldots, t_n\}$ (consisting of all finite sums $\sum_N^M x_{k_0 k_1 \ldots k_n} t_0^{k_0} t_1^{k_1} \cdots t_n^{k_n}$ with coefficients $x_{\vec k}$ from J), with the natural operations. This is an algebra over the Laurent polynomials $\Lambda := \Phi[T, T^{-1}]$ (consisting of all finite sums $\sum_N^M \alpha_{k_0 k_1 \ldots k_n} t_0^{k_0} t_1^{k_1} \cdots t_n^{k_n}$ with coefficients $\alpha_{\vec k}$ from $\Phi$), with the natural operations.
In particular, $\Lambda$ is a free $\Phi$-module with basis of all monomials $t_0^{k_0} t_1^{k_1} \cdots t_n^{k_n}$ ($k_i \in \mathbb{Z}$), so J is imbedded in $J_\Lambda = J[T, T^{-1}]$. Since by definition 13.2 the Peircer is a U-operator $E(\vec t\,) = U_{e(\vec t\,)}$ restricted to J, and $E(\vec t\,)$ now has an inverse $E(\vec t^{\,-1})$ since the Toral
Property 13.3 continues [by strictness] to hold in $J_\Lambda$, we have for the U-rules
$$E(\vec t\,)U_{x_{ij}}(y_{k\ell}) = E(\vec t\,)U_{x_{ij}}E(\vec t\,)E(\vec t^{\,-1})(y_{k\ell}) \quad \text{[by Toral 13.3]}$$
$$= U_{E(\vec t\,)(x_{ij})}E(\vec t^{\,-1})(y_{k\ell}) \quad \text{[by the Fundamental Formula]}$$
$$= U_{t_i t_j x_{ij}}\bigl(t_k^{-1}t_\ell^{-1}y_{k\ell}\bigr) \quad \text{[by the Multiple Eigenspace Law 13.5]}$$
$$= t_i^2 t_j^2 t_k^{-1} t_\ell^{-1}\, U_{x_{ij}}(y_{k\ell}),$$
so the product is zero as in (3) unless $t_i^2 t_j^2 t_k^{-1} t_\ell^{-1} = t_r t_s$ for some $r, s$, i.e. $\{k,\ell\} \subseteq \{i,j\}$.
If $k = \ell$ coincide, by symmetry we may assume $k = \ell = i$, in which case $t_r t_s = t_j^2$ and $U_{x_{ij}}y_{ii}$ lies in the eigenspace $J_{jj}$ as in the first (if $i = j$) or second (if $i \ne j$) part of (2); if $k \ne \ell$ are distinct then so are $i, j$, and by symmetry we may assume $k = i$, $\ell = j$, $t_r t_s = t_i t_j$, so $U_{x_{ij}}y_{ij}$ lies in the eigenspace $J_{ij}$ as in the third part of (2). Similarly for the trilinear products:
$$E(\vec t\,)\{x_{ij}, y_{k\ell}, z_{rs}\} = E(\vec t\,)U_{x_{ij},z_{rs}}E(\vec t\,)E(\vec t^{\,-1})(y_{k\ell}) = U_{E(\vec t\,)(x_{ij}),\,E(\vec t\,)(z_{rs})}E(\vec t^{\,-1})(y_{k\ell})$$
[by Toral and the Fundamental Formula again]
$$= \{t_i t_j x_{ij},\ t_k^{-1}t_\ell^{-1}y_{k\ell},\ t_r t_s z_{rs}\} \quad \text{[by the Peircer Eigenspace Law 13.5 again]}$$
$$= t_i t_j t_k^{-1} t_\ell^{-1} t_r t_s\, \{x_{ij}, y_{k\ell}, z_{rs}\},$$
which is zero as in (3) unless the indices can be linked, $j = k$, $\ell = r$, so $t_i t_j t_k^{-1} t_\ell^{-1} t_r t_s = t_i t_s$, in which case the product falls in the eigenspace $J_{is}$ as in (2). [Note that the middle indices $k, \ell$ can't both be linked only to the same side, for example the left, since if $k = j$, $\ell = i$ but $k, \ell$ are not linked to the right ($k, \ell \ne r, s$), then $x_{ij}, y_{k\ell} = y_{ji} \in J_2(e_I)$ ($I = \{i,j\}$) but $z_{rs} \in J_0(e_I)$, so $\{x_{ij}, y_{k\ell}, z_{rs}\} \in \{J_2, J_2, J_0\} = 0$ by Peirce Orthogonality 8.5.]

The triple products come uniformly because the Peircer is a U-operator and satisfies the Fundamental Formula. There is no corresponding uniform derivation for the brace products (1); instead we derive them individually from the triple products: from (2), (3) we see $x_{ii}^2 = U_{x_{ii}}1 = U_{x_{ii}}(\sum e_k) = U_{x_{ii}}e_i \in J_{ii}$; $x_{ij}^2 = U_{x_{ij}}1 = U_{x_{ij}}(\sum e_k) = U_{x_{ij}}(e_i + e_j) \in J_{jj} + J_{ii}$; $\{x_{ij}, y_{k\ell}\} = \{x_{ij}, 1, y_{k\ell}\} = \sum_p \{x_{ij}, e_p, y_{k\ell}\}$ is 0 as in (3) unless $\{i,j\} \cap \{k,\ell\} \ne \emptyset$ contains a linking index p, and if (by symmetry) $p = j = k$ then $\{x_{ij}, y_{k\ell}\} = \{x_{ij}, e_j, y_{j\ell}\} \in \{J_{ij}, J_{jj}, J_{j\ell}\} \subseteq J_{i\ell}$ as in the third [if $i = j \ne \ell$ or $i \ne j = \ell$] or fourth [if $i, j, \ell$ distinct] part of (1).
As in so many instances in mathematics, the picture is clearest if we step back and take a broad view: to understand radii of convergence of real power series we need to step back to the complex domain, so to understand the Peirce decomposition in J it is best to step back to $J[T, T^{-1}]$, where the $J_{ij}$ appear as eigenspaces of one particular operator relative to distinct eigenvalues $t_i t_j$, and their triple products are governed by the condition that the Peircer is a U-operator satisfying the Fundamental Formula.
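In the special algebra $M_n(\Phi)^+$ the brace rules of Theorem 13.7 reduce to ordinary matrix bookkeeping, which can be spot-checked numerically (a sketch of ours, with hypothetical helper names):

```python
import numpy as np

def brace(x, y):
    # {x, y} = xy + yx in the special Jordan algebra M_n(Phi)^+
    return x @ y + y @ x

def peirce(a, i, j):
    # component of a in J_ij = D E_ij + D E_ji (or J_ii = D E_ii if i == j)
    out = np.zeros_like(a)
    out[i, j], out[j, i] = a[i, j], a[j, i]
    return out

rng = np.random.default_rng(1)
a = rng.standard_normal((4, 4))
x01, y12, z23 = peirce(a, 0, 1), peirce(a, 1, 2), peirce(a, 2, 3)

# linked rule: {J_01, J_12} lands in J_02
p = brace(x01, y12)
assert np.allclose(p, peirce(p, 0, 2))
# orthogonality rule: {J_01, J_23} = 0 since {0,1} and {2,3} don't link
assert np.allclose(brace(x01, z23), 0)
```

The shared index 1 is what allows the two off-diagonal pieces to "multiply like matrix units" into $J_{02}$.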
4. The Matrix Archetype

We now give examples of idempotents and their Peirce decompositions. The guiding light for Peirce decompositions is the example set by the associative matrix algebras, generalizing Associative, Full, and Hermitian Peirce Decomposition 8.6, 8.7, 8.8.

Example 13.8 (Full and Hermitian). (1) If $E = \{e_1, \ldots, e_n\}$ is a family of orthogonal idempotents in an associative algebra A, we have the associative Peirce decomposition
$$A = \bigoplus_{i,j=0}^n A_{ij} \qquad (A_{ij} := e_i A e_j)$$
relative to E. These satisfy the associative Peirce multiplication rules
$$A_{ij}A_{k\ell} \subseteq \delta_{jk}A_{i\ell}.$$
The associated Jordan algebra $J = A^+$ has Jordan Peirce decomposition
$$A^+ = \bigoplus_{i \le j}^n J_{ij} \quad \text{for} \quad J_{ii} := A_{ii}, \qquad J_{ij} := A_{ij} \oplus A_{ji} \ (i \ne j).$$
If A has an involution $*$, and the $e_i = e_i^*$ are symmetric idempotents, then the associative Peirce spaces satisfy $A_{ij}^* = A_{ji}$, the Jordan algebra $H(A, *)$ contains the family E, and the Peirce decomposition is precisely that induced from that of A:
$$H(A, *) = \bigoplus_{i \le j}^n H_{ij} \quad \text{for} \quad H_{ii} = H(A_{ii}, *), \qquad H_{ij} = H(A_{ij} \oplus A_{ji}, *) = \{a_{ij} + a_{ij}^* \mid a_{ij} \in A_{ij}\}.$$
(2) If $A = M_n(D)$ is the algebra of $n \times n$ matrices over an associative algebra D and $e_i = E_{ii}$ are the diagonal matrix units, the Peirce spaces $A_{ij}$ relative to $E = \{E_{11}, \ldots, E_{nn}\}$ are just the matrices having all entries 0 except for the $ij$-entry: $A_{ij} = DE_{ij}$. When $J = A^+$ we have the Jordan Peirce decomposition
$$J_{ii} = DE_{ii}, \qquad J_{ij} = DE_{ij} + DE_{ji}.$$
If D carries an involution $\bar{\ }$ then $A = M_n(D)$ carries the conjugate-transpose involution $X^* = \bar X^{tr}$, and the Jordan algebra of hermitian elements $J = H_n(D, \bar{\ }\,) := H(M_n(D), *)$ has Peirce decomposition
$$J_{ii} = H(D, \bar{\ }\,)[ii] = \{aE_{ii} \mid a \in H(D, \bar{\ }\,)\}, \qquad J_{ij} = D[ij] = \{dE_{ij} + \bar d E_{ji} \mid d \in D\}.$$
The rules 13.7 are precisely the rules for multiplying matrices, and one should always think of Peirce decompositions and rules as matrix decompositions and multiplications. As with the case of a single idempotent, we are led by the archetypal example of hermitian matrices to call the Peirce spaces $J_{ii} = J_2(e_i)$ the diagonal Peirce spaces, and the $J_{ij} = J_1(e_i) \cap J_1(e_j)$ ($i \ne j$) the off-diagonal Peirce spaces.

We cannot give an example of a multiple Peirce decomposition for a respectable quadratic factor extending Quadratic Factor and Reduced Spin Decomposition 8.9, 8.10, since by law these algebras are limited to two orthogonal idempotents to a customer. The respectable cubic factors are allowed three orthogonal idempotents (but no more), so we can give an example extending $H_3(\mathrm{C})$ Decomposition 8.12, analogous to that for any associative matrix algebra 13.8(2).

Example 13.9 ($H_3(D)$ Multiple Peirce Decomposition). In a cubic factor $J = H_3(D)$ for any alternative $*$-algebra D with nuclear involution by the Freudenthal Construction 4.5, the three diagonal idempotents $E_{11}, E_{22}, E_{33}$ form a supplementary orthogonal family, with Peirce decomposition
$$J = \bigoplus_{1 \le i \le j \le 3} J_{ij}, \qquad J_{ii} = H(D)[ii] = \{aE_{ii} \mid a \in H(D)\}, \qquad J_{ij} = D[ij] = \{dE_{ij} + \bar d E_{ji} \mid d \in D\}.$$
In the important case of an Albert algebra $H_3(\mathrm{O})$, we have $H(\mathrm{O}) = \Phi$, so the three diagonal Peirce spaces $J_{ii} = \Phi E_{ii}$ are one-dimensional, and the three off-diagonal $J_{ij} = \mathrm{O}[ij]$ are eight-dimensional, leading to a 27-dimensional algebra.
5. The Peirce Principle

Macdonald's Principle provides us with a powerful method for verifying facts about multiple Peirce decompositions in Jordan algebras, extending the Peirce Principle 8.13 of a single Peirce decomposition.

Theorem 13.10 (Multiple Peirce Principle). Any set of Peirce elements $\{x_{ij}\}$, from distinct Peirce spaces $J_{ij}$ relative to a family E of orthogonal idempotents, lies in a special subalgebra B of J containing E.
Therefore, all Jordan behavior of distinct Peirce spaces in associative algebras persists in all Jordan algebras: if $f, \tilde f$ are Jordan polynomials, a set of relations
$$f(e_1, \ldots, e_n;\, x_{i_1 j_1}, \ldots, x_{i_r j_r}) = 0$$
among Peirce elements $x_{ij}$ with distinct index pairs will imply a relation $\tilde f(e_1, \ldots, e_n;\, x_{i_1 j_1}, \ldots, x_{i_r j_r}) = 0$ in all Jordan algebras if it does in all special Jordan algebras, i.e. if the relations $f(e_1, \ldots, e_n;\, a_{i_1 j_1} + a_{j_1 i_1}, \ldots, a_{i_r j_r} + a_{j_r i_r}) = 0$ imply $\tilde f(e_1, \ldots, e_n;\, a_{i_1 j_1} + a_{j_1 i_1}, \ldots, a_{i_r j_r} + a_{j_r i_r}) = 0$ for Peirce elements $a_{ij} \in A_{ij}$ relative to idempotents E in all unital associative algebras A. In particular, we have the Peirce Identity Principle: any Peirce identity $f(e_1, \ldots, e_n;\, x_{i_1 j_1}, \ldots, x_{i_r j_r}) = 0$ for a Jordan polynomial f will hold for Peirce elements $x_{ij}$ with distinct index pairs in all Jordan algebras J if $f(e_1, \ldots, e_n;\, a_{i_1 j_1} + a_{j_1 i_1}, \ldots, a_{i_r j_r} + a_{j_r i_r}) = 0$ holds for all Peirce elements $a_{ij} \in A_{ij}$ in all associative algebras A.

proof. Everything takes place in the subalgebra
$$B := \Phi[e_0, e_1, \ldots, e_n;\, x_{i_1 j_1}, \ldots, x_{i_r j_r}] = \Phi[e_0, e_1, \ldots, e_n, x] \subseteq \hat J \qquad \bigl(x := \textstyle\sum_{i \le j} x_{ij}\bigr),$$
since by distinctness we can recover the individual components $x_{ij}$ from the sum x by the Peirce projections, $x_{ij} = E_{ij}(x)$. Our goal is to show that this subalgebra B is special. A standard localization procedure (which we will explain in detail in Section 6) produces a faithful cyclifying scalar extension $\Omega = \Phi[T]_S \supseteq \Phi[T] := \Phi[t_0, t_1, \ldots, t_n]$ in the sense that
(i) $\Omega$ is faithful: $\hat J$ is faithfully imbedded in $\hat J_\Omega$;
(ii) $\Omega$ cyclifies the $e_i$: $\Omega[e_0, e_1, \ldots, e_n] = \Omega[e(\vec t\,)]$ ($e(\vec t\,) := \sum_{i=0}^n t_i e_i$).
Then B will be special since it imbeds in the special algebra $B_\Omega$: $B \subseteq B_\Omega$ [because of faithfulness (i)] $= \Omega[e_0, e_1, \ldots, e_n, x]$ [by the above expression for B] $= \Omega[e(\vec t\,), x]$ [by cyclification], where the latter is generated over $\Omega$ by two elements and hence is special by the Shirshov–Cohn Theorem 5.2.
The Peirce decomposition of the special algebra $B_\Omega \subseteq A^+$ is related to that of A by the Full Example 13.8(1), so we can write $x_{ij} = a_{ij} + a_{ji}$ for $i \le j$ (if $i = j$ we can artificially write $x_{ii} = a_{ii} = \frac12 a_{ii} + \frac12 a_{ii}$), therefore $f(e_1, \ldots, e_n;\, a_{i_1 j_1} + a_{j_1 i_1}, \ldots, a_{i_r j_r} + a_{j_r i_r}) = 0$ in A implies $f(e_1, \ldots, e_n;\, x_{i_1 j_1}, \ldots, x_{i_r j_r}) = 0$ in J.
Just as in 8.13, this Principle gives yet another method for establishing the Peirce Toral Property 13.3, Decomposition 13.4, Eigenvalue Laws 13.5, and Multiplication Rules 13.7. The toral property, decomposition, and eigenvalues involve only the $e_i$ and a general $x = \sum_{i,j} x_{ij}$. The associative Peirce decomposition 13.8 has $A_{ij}A_{k\ell} \subseteq \delta_{jk}A_{i\ell}$ with orthogonal Peirce projections $C_{ij}(a) = e_i a e_j$, so the Jordan Peirce projections $E_{ii} = C_{ii}$, $E_{ij} = C_{ij} + C_{ji}$ are also supplementary orthogonal projections as in the Peirce Decomposition 13.4(1). We can recover all Peirce Brace Rules 13.7(1): we have $x_{ii}^2 \in A_{ii}A_{ii} \subseteq A_{ii}$; $x_{ij}^2 \in (A_{ij} + A_{ji})^2 = A_{ij}A_{ji} + A_{ji}A_{ij} \subseteq A_{ii} + A_{jj}$; $\{x_{ii}, x_{ij}\} \in A_{ii}(A_{ij} + A_{ji}) + (A_{ij} + A_{ji})A_{ii} \subseteq A_{ij} + A_{ji}$; $\{x_{ij}, x_{k\ell}\} = 0$ since $(A_{ij} + A_{ji})(A_{k\ell} + A_{\ell k}) = (A_{k\ell} + A_{\ell k})(A_{ij} + A_{ji}) = 0$. Similarly we obtain the Peirce Orthogonality Rules 13.7(3) for distinct index pairs; for repeated index pairs in the triple product we use Triple Switching to reduce (up to brace products) to $\{x_{ij}, y_{k\ell}, z_{ij}\} = 0$, which follows as a linearization of $U_{x_{ij}}y_{k\ell} = 0$. For the Peirce U-Product Rules in 13.7(2) we get $U_{x_{ij}}y_{ii} \in (A_{ij} + A_{ji})A_{ii}(A_{ij} + A_{ji}) \subseteq A_{jj}$, but we DO NOT GET $U_{x_{ii}}y_{ii} \in J_{ii}$ or $U_{x_{ij}}y_{ij} \in J_{ij}$ DIRECTLY, since they involve terms $x_{ij}, y_{ij}$ from the same Peirce space (instead, we must get them by linearizing $U_{x_{ij}}x_{ij} \in J_{ij}$, or by building the U-operator out of bullets and using (1)). We obtain the Peirce Triple Product Rules in 13.7(2) for distinct index pairs by $\{x_{ij}, y_{jk}, z_{k\ell}\} \in \{A_{ij} + A_{ji}, A_{jk} + A_{kj}, A_{k\ell} + A_{\ell k}\} \subseteq A_{i\ell} + A_{\ell i}$, while for repeated pairs we use Triple Switching to reduce to linearizations of quadratic relations 13.7(2); or we could again build the triple products out of bullets and use (1).

Example 13.11 (Indistinct). The requirement of distinct index pairs in the Peirce Principle is crucial. We have a Jordan Peirce relation
$$f(x_{12}, y_{12}, x_{13}, y_{13}, z_{23}) = [V_{x_{12},y_{12}}, V_{x_{13},y_{13}}](z_{23}) = 0$$
for Peirce elements (but with repeated index pairs) which holds in all associative algebras with 3 orthogonal idempotents, but does NOT hold in the Albert algebra $H_3(\mathrm{O})$.

Indeed, writing $x_{ij} = a_{ij} + a_{ji}$, $y_{ij} = b_{ij} + b_{ji}$, $z_{23} = c_{23} + c_{32}$ as in 13.9, we have $V_{x_{12},y_{12}}V_{x_{13},y_{13}}(z_{23}) = a_{21}b_{12}c_{23}b_{31}a_{13} + a_{31}b_{13}c_{32}b_{21}a_{12} = V_{x_{13},y_{13}}V_{x_{12},y_{12}}(z_{23})$. Yet this is NOT a Peirce relation for ALL Jordan algebras, since if in the Albert algebra $H_3(\mathrm{O})$ we take $x_{12} = a[21]$, $y_{12} = 1[12]$, $x_{13} = c[13]$, $y_{13} = 1[31]$, $z_{23} = b[23]$ for arbitrary Cayley elements
$a, b, c \in \mathrm{O}$, we have
$$V_{x_{12},y_{12}}V_{x_{13},y_{13}}(z_{23}) = \{a[21], 1[12], \{b[23], 1[31], c[13]\}\} = a(1((b1)c))[23] = a(bc)[23],$$
and
$$V_{x_{13},y_{13}}V_{x_{12},y_{12}}(z_{23}) = \{\{a[21], 1[12], b[23]\}, 1[31], c[13]\} = ((a(1b))1)c[23] = (ab)c[23],$$
so $f(a[21], 1[12], b[23], 1[31], c[13]) = -[a, b, c][23]$, where the associator $[a, b, c]$ does NOT vanish on the non-associative algebra $\mathrm{O}$ (recall that $a = i$, $b = j$, $c = \ell$ have $[a, b, c] = 2k\ell \ne 0$ in characteristic $\ne 2$). Such an identity distinguishing special algebras from all Jordan algebras is called a Peirce s-identity; this shows in a very painless way the exceptionality of reduced Albert algebras.

Corollary 13.12 (Exceptional Albert). A reduced Albert algebra $H_3(\mathrm{O})$ is exceptional.
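The nonvanishing associator used above can be verified numerically by building the real octonions via the Cayley–Dickson doubling formula — a sketch of ours, assuming the standard doubling $(a,b)(c,d) = (ac - \bar d b, da + b\bar c)$ and numbering the new unit $\ell$ as the basis vector $e_4$:

```python
import numpy as np

def conj(x):
    # Cayley-Dickson conjugation: negate every imaginary coordinate
    y = -x.copy(); y[0] = x[0]; return y

def mul(x, y):
    # doubling formula on halves: (a,b)(c,d) = (ac - conj(d)b, da + b conj(c))
    n = len(x)
    if n == 1:
        return x * y
    a, b, c, d = x[:n//2], x[n//2:], y[:n//2], y[n//2:]
    return np.concatenate([mul(a, c) - mul(conj(d), b),
                           mul(d, a) + mul(b, conj(c))])

def basis(k):
    v = np.zeros(8); v[k] = 1.0; return v

def associator(a, b, c):
    return mul(mul(a, b), c) - mul(a, mul(b, c))

i, j, k, ell = basis(1), basis(2), basis(3), basis(4)
# the quaternion subalgebra associates: [i, j, k] = 0
assert np.allclose(associator(i, j, k), 0)
# but [i, j, ell] != 0, witnessing the nonassociativity of the octonions
assert not np.allclose(associator(i, j, ell), 0)
```

With this doubling, $[i, j, \ell]$ comes out as twice a basis unit, matching the text's remark that the associator equals $2k\ell$.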
6. Modular Digression

The key step in the Multiple Peirce Principle is to show that there is a faithful cyclifying extension $\Omega = \Phi[T]_S \supseteq \Phi[T] := \Phi[t_0, t_1, \ldots, t_n]$ where all $e_i$ lie in the "cyclic" subalgebra $\Omega[e(\vec t\,)]$ generated by a single element (a Peircer, no less). [Note that this digression was unnecessary in the case 8.3 of a single idempotent: $\Phi[e_0, e_1] = \Phi[1 - e, e] = \Phi[e]$ is already cyclified in a unital algebra.] This is a purely module-theoretic result, having nothing to do with Jordan algebras. It is not hard to create the localization $\Phi[T]_S$ and see why it achieves cyclification; the only delicate point is to see that it is faithful. The argument that the imbedding is indeed faithful uses standard facts about localizing, forming rings and modules of "fractions". We will go through this in some detail in case the reader is rusty on this material, but its length should not obscure the fact that the Jordan proof has quickly been reduced to standard results about modules.
Cyclification

We have seen that $J = J \otimes 1$ is faithfully imbedded in $J[T] = J \otimes_\Phi \Phi[T]$ for the free $\Phi$-module $\Phi[T]$ of polynomials. For cyclification, there is a standard interpolation argument allowing us to recover the individual idempotents from a distinct linear combination: if $(\lambda_i - \lambda_k)^{-1} \in \Omega$ for all $i \ne k$, then the interpolating polynomials
$$p_i(\lambda) := \prod_{k \ne i} \frac{\lambda - \lambda_k}{\lambda_i - \lambda_k} \in \Omega[\lambda]$$
satisfy $p_i(\lambda_j) = 0$ ($j \ne i$), $p_i(\lambda_i) = 1$, and hence recover $e_i = p_i(x) \in \Omega[x]$ from $x = \sum_{i=1}^n \lambda_i e_i$. Indeed, by power-associativity and the definition of orthogonal idempotents, for any polynomial p we have $p(x) = p(\sum_k \lambda_k e_k) = \sum_k p(\lambda_k)e_k$; in particular the interpolating polynomials produce $p_i(x) = \sum_k p_i(\lambda_k)e_k = e_i$. To create these interpolating
polynomials we only need an $\Omega$ containing n scalars $\lambda_i$ with invertible differences. For example, if we were working over a field with at least n elements this would be automatic. To do this "generically" for independent indeterminates $t_i$, we define
$$\Omega := \Phi[T][\Delta^{-1}] = \Phi[T]|_S \qquad \Bigl(\Delta := \prod_{i<j}(t_i - t_j),\ S := \{\Delta^n \mid n \ge 0\}\Bigr)$$
(usually written just $\Phi[T]_S$), obtained by localizing $\Phi[T]$ at the multiplicative submonoid S consisting of all nonnegative powers of $\Delta$. The resulting set of "fractions" $f(T)\Delta^{-n}$, with "numerator" $f(T) \in \Phi[T]$, forms a commutative $\Phi[T]$-algebra, and the fact that $\Delta$ is invertible guarantees that each of its factors $t_i - t_j$ is invertible there, $(t_i - t_j)^{-1} \in \Omega$ for all $i \ne j$, as required to construct the interpolating polynomials. Note that $\Phi,\ \Phi[T],\ \Omega = \Phi[T]_S$ need not themselves be integral domains in order to make $\Delta$ invertible.

Faithfulness

When localizing non-integral domains, some of the original scalars are generally collapsed in the process of making $\Delta$ invertible. We need to know that in our particular case $\hat J$ remains faithfully imbedded in the larger $\hat J_\Omega := \hat J \otimes_\Phi \Omega$
$= \hat J \otimes_\Phi \Phi[T] \otimes_{\Phi[T]} \Omega$ (by a basic transitivity property of tensor products) $= \hat J[T] \otimes_{\Phi[T]} \Phi[T]_S = \hat J[T]_S$, by the basic property of localization of modules that $M \otimes_R R_S$ is isomorphic to the module localization $M_S$ consisting of all "fractions" $ms^{-1}$ with "numerator" $m \in M$ and "denominator" $s \in S$. It is another standard fact about localizations that the kernel of the natural map $\hat J[T] \to \hat J[T]_S$ is precisely the set of elements of $\hat J[T]$ killed by some element of S (such elements must die in order for the s to become invertible, just as in a pride of lions some innocent pride members may be killed when new rulers depose the old). We now prove that in our case the takeover is pacific, and the map injective, since all elements of S are injective (non-zero-divisors) on $\hat J[T]$. In our case, we must show that the generator $\Delta$ of S acts injectively: $\Delta j(T) = 0 \implies j(T) = 0$ in $\hat J[T]$. For this we mimic the standard argument used to show that in the polynomial ring in one variable a polynomial $f(t)$ will be a non-zero-divisor in $R[t]$ if its top term $a_n$ is a non-zero-divisor in R: its product with a nonzero polynomial $g(t)$ can't vanish, since the top degree term of the product is the nonzero product $a_n b_m$ of the top degree terms of the two factors. In our several-variable case, to identify the "top" term we use the lexicographic ordering on monomials induced by the ordering $t_0 > t_1 > \cdots > t_n$ of the variables. In "vector notation" we decree that $\vec t^{\,\vec e} := t_0^{e_0}t_1^{e_1}\cdots t_n^{e_n}$ is bigger than $\vec t^{\,\vec f} := t_0^{f_0}t_1^{f_1}\cdots t_n^{f_n}$ if it has the bigger exponent, $\vec e > \vec f$ in the sense that $e_i > f_i$ in the first place they differ. In this lexicographic ordering,
$$\Delta = \prod_{0 < j \le n}(t_0 - t_j)\ \prod_{1 < j \le n}(t_1 - t_j)\ \cdots \prod_{n-1 < j \le n}(t_{n-1} - t_j)$$
has monic lexicographically leading term $t_0^n t_1^{n-1} \cdots t_{n-1}^{1} t_n^{0}$.
From this fact we can see that $\Delta$ is nonsingular, since the lexicographically leading term of the product $\Delta j(T)$ is the product of the two lexicographically leading terms, which doesn't vanish (as long as $j(T) \ne 0$) because the leading term of $\Delta$ is monic.

Exercise 13.12* Prove that $\Delta$ in the Modular Digression is injective on $\hat J[T]$ by induction on n.
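The two computational ingredients of the digression — the interpolating polynomials and the lexicographic leading term of $\Delta$ — can be sketched in sympy (our own illustration, not the text's):

```python
from sympy import symbols, prod, expand, LT, Rational, eye, diag, zeros

# --- interpolation: recover idempotents e_i from x = sum_i lambda_i e_i ---
lam = [2, 5, -1]  # distinct scalars with invertible differences
e = [diag(*[1 if r == i else 0 for r in range(3)]) for i in range(3)]
x = sum((l * ei for l, ei in zip(lam, e)), zeros(3, 3))

def p_mat(i):
    # p_i(x) = prod_{k != i} (x - lam_k)/(lam_i - lam_k), a polynomial in x
    out = eye(3)
    for k, lk in enumerate(lam):
        if k != i:
            out = out * (x - lk * eye(3)) * Rational(1, lam[i] - lk)
    return out

assert all(p_mat(i) == e[i] for i in range(3))  # e_i = p_i(x) is recovered

# --- the localizing element Delta and its lexicographic leading term ---
t0, t1, t2 = symbols('t0 t1 t2')
ts = [t0, t1, t2]
Delta = prod(ts[a] - ts[b] for a in range(3) for b in range(a + 1, 3))
lt = LT(expand(Delta), t0, t1, t2, order='lex')  # should be monic: t0^2 * t1
assert expand(lt - t0**2 * t1) == 0
```

For $n + 1 = 3$ variables the leading term $t_0^2 t_1$ matches the general pattern $t_0^n t_1^{n-1} \cdots t_{n-1}$ noted above.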
Conclusions

The scalar extension $\Omega = \Phi[T]_S$ achieves cyclification due to the invertibility of $\Delta$, and the imbedding $\hat J \subseteq \hat J[T] \subseteq \hat J[T]_S$ is faithful since $\Delta$ acts injectively on $\hat J[T]$. The existence of a faithful cyclifying extension establishes the Multiple Peirce Principle 13.10. The existence of $\Omega$ also allows us to establish the Toral Property in the Peircer Torus 13.3 directly: $E(\vec s\,\vec t\,) = U_{e(\vec s\,\vec t\,)}$ (by definition 13.2) $= U_{e(\vec s\,)e(\vec t\,)}$ (by definition of orthogonal idempotents) $= U_{e(\vec s\,)}U_{e(\vec t\,)}$ (by Macdonald's Principle 5.2, since everything takes place in $\Omega'[e(\vec t\,)]$ for $\Omega' = \Omega[S]$) $= E(\vec s\,)E(\vec t\,)$. As usual, the Peirce decomposition 13.4(1) is an immediate consequence of this: identifying coefficients of $s_p s_q t_p t_q$ in
$$\sum_{pq}(s_p t_p)(s_q t_q)E_{pq} = E(\vec s\,\vec t\,) = E(\vec s\,)E(\vec t\,) = \Bigl(\sum_{ij} s_i s_j E_{ij}\Bigr)\Bigl(\sum_{k\ell} t_k t_\ell E_{k\ell}\Bigr) = \sum_{ij,\,k\ell} s_i s_j t_k t_\ell\, E_{ij}E_{k\ell}$$
gives $E_{ij}E_{k\ell} = 0$ unless $\{i,j\} = \{k,\ell\} = \{p,q\}$, in which case $E_{pq}E_{pq} = E_{pq}$, so the $E_{ij}$ are supplementary orthogonal idempotent operators.
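The Toral Property itself is transparent in the matrix archetype, where $e(\vec t\,)$ is the diagonal matrix $\operatorname{diag}(t_0, \ldots, t_n)$ and $U_{e(\vec s\,)}U_{e(\vec t\,)} = U_{e(\vec s\,\vec t\,)}$ is just $\operatorname{diag}(\vec s\,)\operatorname{diag}(\vec t\,) = \operatorname{diag}(\vec s\,\vec t\,)$. A quick numerical sketch (ours, not the text's):

```python
import numpy as np

def U(x, y):
    # Jordan U-operator in M_n(Phi)^+: U_x(y) = x y x
    return x @ y @ x

n = 3
e = [np.diag([1.0 if r == i else 0.0 for r in range(n)]) for i in range(n)]
s, t = np.array([2.0, 3.0, 5.0]), np.array([7.0, 1.0, 4.0])

def e_of(w):
    # e(w) = sum_i w_i e_i, a diagonal matrix with entries w
    return sum(wi * ei for wi, ei in zip(w, e))

y = np.random.default_rng(2).standard_normal((n, n))
# Toral Property: U_{e(s)} U_{e(t)} = U_{e(st)}  (st = coordinatewise product)
assert np.allclose(U(e_of(s), U(e_of(t), y)), U(e_of(s * t), y))
```

Here the two Peircers commute because diagonal matrices do, which is the "associative life together" the elementary proof had to establish by hand.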
7. Problems for Chapter 13

Problem 1*. Another method of justifying the Toral Property 13.3 is to establish a general result that the U-operators permit composition (not just Jordan composition, as in the Fundamental Formula) for elements x, y which operator-commute ($L_x, L_{x^2}$ commute with $L_y, L_{y^2}$ on J), generalizing the case Power Associativity 5.6(2) of elements p, q which are polynomials in a single element. (1) Prove that if elements x, y of a Jordan algebra J operator-commute then they satisfy the strong version $U_{xy} = U_x U_y$ of the Fundamental Formula. (2) Conclude that if B is an operator-commutative subalgebra of J in the sense that $L_b, L_c$ commute for all elements $b, c \in B$, then B satisfies the Commutative Fundamental Formula $U_{bc} = U_b U_c$ on J for all $b, c \in B$. (3) Show that any two polynomials $x = p(z), y = q(z)$ in a common element z operator-commute. (4) Show that any two orthogonal idempotents e, f operator-commute. (5) Prove the Peirce Commutativity Theorem: for any supplementary orthogonal family of idempotents E, the subalgebra $B := \Phi e_1 \oplus \cdots \oplus \Phi e_n$ is operator-commutative, and satisfies the Commutative Fundamental Formula $U_{pq} = U_p U_q$ for any $p, q \in B$.
Problem 2*. In general, just because x, y operator-commute on J does not guarantee they continue to operator-commute on a larger $J'$. (1) Show that any left and right multiplications in a linear algebra commute on the unit element, $[L_x, R_y]1 = 0$. Conclude that in any commutative algebra $[L_x, L_y]1 = 0$. (2) Show that if x, y operator-commute on J, then they continue to operator-commute on $\hat J$. (3) Show that in an associative algebra A two elements x, y commute on $z \in A^+$, $[V_x, V_y]z = 0$, iff their commutator commutes with z, $[[x, y], z] = 0$. Conclude that x, y operator-commute in $A^+$ iff their commutator is central, $[x, y] \in \operatorname{Cent}(A)$. (4) Give an example of x, y which operator-commute on $A^+$ but not on a larger $(A')^+$. Can you find an example for semiprime $A, A'$?
Problem 3. (1) If R is any commutative ring with unit and S a multiplicative submonoid of R, show that we can create a "ring $R_S$ of S-fractions" $r/s$ under the usual operations on fractions and the (slightly unusual) equivalence relation $r/s \sim r'/s' \iff$ there exists $t \in S$ such that $(rs' - r's)t = 0$. (2) Show this ring has the properties: (a) the images of the "denominators" $s \in S$ all become invertible in $R_S$; (b) there is a universal homomorphism $\pi: R \to R_S$ of rings with the universal property that any homomorphism $\tilde f: R \to \tilde R$, such that the image $\tilde f(S)$ is invertible in $\tilde R$, can be written uniquely as a composite of $\pi$ and a homomorphism $f: R_S \to \tilde R$. (3) Show that the kernel of $\pi$ is precisely the set of elements of R killed by some element of S. Prove that $\pi$ is injective iff all elements of S are injective on R. (4) Show that, similarly, for any right R-module M we can form a "module $M_S$ of S-fractions" $m/s$ with the properties: (a) $M_S$ is an $R_S$-module; (b) there is a natural homomorphism $M \to M_S$ of R-modules whose kernel is the set of $m \in M$ killed by some element of S. (5) Show that the correspondences $M \to M_S$, $\varphi \to \varphi_S$ (defined by $\varphi_S(m/s) = \varphi(m)/s$) give a functor from the category of R-modules to the category of $R_S$-modules.
CHAPTER 14
Multiple Peirce Consequences

1. Jordan Coordinate Conditions

Once more we stop to gather consequences from the multiple Peirce decomposition. As a first step we can use the Peirce decomposition and Peirce Orthogonality to establish the Jordan criterion for the hermitian matrix algebras. Earlier (cf. II.2.10-13) we indicated, by ingenious substitutions into the linearized Jordan identities, why alternativity or associativity of the coordinates is necessary in order for the hermitian matrices to form a Jordan algebra. Now that we are more sophisticated, and in particular know about Peirce decompositions in Jordan algebras, we can see the necessity of these conditions another way.

Theorem 14.1 (Jordan Coordinates). If the hermitian matrix algebra $H_n(D, \bar{\ }\,)$ for $n \ge 3$ satisfies the Jordan identities, then the coordinate $*$-algebra $(D, \bar{\ }\,)$ must be alternative with nuclear involution, $H(D, \bar{\ }\,) \subseteq \operatorname{Nuc}(D)$, and must even be associative if $n \ge 4$.

proof. We show
(1) $[a, b, c] = 0$ ($a, b, c \in D$, $n \ge 4$);
(2) $[\bar a, a, b] = 0$ ($a, b \in D$);
(3) $[\lambda, b, c] = 0$ ($b, c \in D$, $\lambda \in H(D)$).
These are enough to derive the theorem: (1) shows D must be associative for $n \ge 4$; (3) and (2) show left alternativity $[a, a, b] = [(a + \bar a), a, b] - [\bar a, a, b] = 0$ [since $a + \bar a = \lambda \in H(D)$], hence right alternativity by the involution, which is just the condition in Alternative Category Definition 2.1 that D be an alternative algebra. Then (3) shows $H(D)$ lies in the nucleus by Nucleus Definition 2.2. We claim that for $a, b, c \in D$ and $\lambda \in H(D)$ we have
(1′) $0 = 2\{b[13], a[21], c[34]\} = [a, b, c][24]$ ($n \ge 4$);
(2′) $0 = 2U_{a[12]}(b[23]) = -[\bar a, a, b][23]$;
(3′) $0 = 2\{b[21], \lambda[22], c[13]\} = [\lambda, b, c][23]$.
In fact, all these disconnected triple products must vanish by Peirce Orthogonality 13.7(3). On the other hand, using the formula 5.7 $2U_x y = (V_x^2 - V_{x^2})y = \{x, \{x, y\}\} - \{x^2, y\}$ and its linearization, together with the Basic Products 3.7(2) for brace multiplication in hermitian matrix algebras, (1′) becomes
$$\{b[13], \{a[21], c[34]\}\} + \{\{a[21], b[13]\}, c[34]\} - \{a[21], \{b[13], c[34]\}\} = 0 + (ab)c[24] - a(bc)[24] = [a, b, c][24];$$
(2′) becomes
$$\{a[12], \{a[12], b[23]\}\} - \{a[12]^2, b[23]\} = \bar a(ab)[23] - \{a\bar a[11] + \bar a a[22], b[23]\} = -[\bar a, a, b][23];$$
(3′) becomes
$$\{b[21], \{\lambda[22], c[13]\}\} + \{\{\lambda[22], b[21]\}, c[13]\} - \{\lambda[22], \{b[21], c[13]\}\} = 0 + (\lambda b)c[23] - \lambda(bc)[23] = [\lambda, b, c][23].$$
This finishes the proof.

The above conditions are in fact necessary and sufficient for $H_n(D)$ to form a linear Jordan algebra. The converse when $n \ge 4$ is easy: whenever D is associative, $H_n(D)$ is a special Jordan subalgebra of $M_n(D)^+$. The converse when $n = 3$ is considerably messier, so we leave it to Appendix C; in our work we will need only the easier case of a scalar involution.
Exercise 14.1A. Carry out a direct frontal attack on the Jordan identity without the aid of Peirce decompositions. (1) Multiply 1.10(JAx2″) by 8 to turn all associators into brace associators [x, y, z]′ := {{x, y}, z} − {x, {y, z}}. For x = c[21], z = b[32], w = 1[11], y = a[j3] (j = 3 or 4) show {x, z} = bc[31], {z, w} = 0, {w, x} = c[21], and that the brace version of (JAx2″) reduces to [a, b, c][j1] = 0. (2) When n ≥ 4 we can take an arbitrary a ∈ D; conclude 14.1(1) holds, hence D must be associative. (3) When n = 3 we must take j = 3 and y = α[33] ∈ H_n(D, *) for any α ∈ H(D, *), to obtain 14.1(3). (4) Use a different substitution x = a[12], z = b[23], y = 1[33] in the brace version of (JAx2′) [x², y, z]′ + [{x, z}, y, x]′ = 0 to get [a*, a, b][23] = 0 as in 14.1(2). Conclude as in 14.1 that in this case D is alternative with nuclear involution.
2. Peirce Specializations

The basic facts about Peirce specializations and associativity follow directly from the results for a single idempotent. Throughout the rest of this chapter we continue the conventions of the previous chapter: we fix a set E = {e_1, …, e_n} of pairwise orthogonal idempotents in a
Jordan algebra J with supplementation {e_0, e_1, …, e_n} in Ĵ, and denote the Peirce subspaces by J_ij. With this understanding, we will not explicitly mention the family E again.

Proposition 14.2 (Peirce Specialization). The Peirce specializations σ_ii : J_ii → End(J_ik), σ_ij : J_ii + J_ij + J_jj → End(J_ik + J_jk) (i, j, k distinct) are Jordan homomorphisms:
σ_ii(a) := V_a|_{J_ik}, σ_ij(x) := V_x|_{J_ik + J_jk}; σ_ii(e_i) = V_{e_i}|_{J_ik} = 1_{J_ik};
V_{x²} = V_x², V_{x,y} = V_x V_y, V_{U_x y} = V_x V_y V_x on J_ik + J_jk.

proof. By Peirce Recovery 13.6 this follows directly from the single-idempotent case Peirce Specialization 9.1 (taking e = e_i, e = e_i + e_j respectively), noting that by the Peirce Multiplication Rules 8.5, J_ik ⊆ J_1(e) is invariant under V_{J_ii}, is killed by V_{J_jj}, and is mapped into J_jk by V_{J_ij}.

Exercise 14.2. Just as in 9.1, establish 14.2 directly from the Peirce relations and Peirce Orthogonality in 13.7, using the Specialization Formulas 5.7(FIII′) and Triple Switch (FIVe).
Already we can see one benefit of Peirce analysis: the diagonal Peirce subalgebras have a more associative nature, since they act associatively on the off-diagonal spaces by Peirce specialization. In respectable cases, where these specializations are faithful, this makes the diagonal subalgebras special Jordan algebras, so the only possible exceptionality resides in the off-diagonal part of the algebra. Once more, the diagonal Peirce spaces Alphonse (J_ii) and Gaston (J_jj) courteously commute with each other as they take turns feeding on J_ij.

Proposition 14.3 (Peirce Associativity). When {i, j} ∩ {k, l} = ∅ the Peirce specializations of J_ij and J_kl on J_jk commute: we have the operator relations
V_{x_ij} V_{z_kl}(y_jk) = {x_ij, y_jk, z_kl} = V_{z_kl} V_{x_ij}(y_jk)
for elements w_rs ∈ J_rs, equivalently the elemental relations
[x_ij, y_jk, z_kl] = 0, (x_ij • y_jk) • z_kl = x_ij • (y_jk • z_kl).

proof. This follows immediately from the Peirce Identity Principle 13.9 (by a somewhat messy computation), or from the single-idempotent case of Peirce Associativity 9.3 relative to e = e_i (if i = j) or e = e_i + e_j (if i ≠ j) [since then by Peirce Recovery 13.6 x ∈ J_2(e), y ∈ J_1(e), and z ∈ J_0(e)], or best of all from Triple Switching [{x_ij, {y_jk, z_kl}} = {x_ij, y_jk, z_kl} + {x_ij, z_kl, y_jk} = {x_ij, y_jk, z_kl} by Peirce Orthogonality 13.7(3), since {i, j} and {k, l} can't be linked].
3. Peirce Quadratic Forms

Again, the diagonal behavior of off-diagonal spaces J_ij is captured in the Peirce quadratic form q_ii.

Proposition 14.4 (q-Properties). For off-diagonal elements x_ij ∈ J_ij (i ≠ j) we define q_ii : J_ij → J_ii by
q_ii(x_ij) := E_ii(x_ij²) = U_{x_ij} e_j.
(1) We have Cube Recovery and recovery of fourth powers by
x_ij³ = {q_ii(x_ij), x_ij}, E_ii(x_ij⁴) = U_{x_ij} q_jj(x_ij) = q_ii(x_ij)².
(2) The operator U_{x_ij} can be expressed in terms of q's: for y_ij ∈ J_ij, a_ii ∈ J_ii, a_jj ∈ J_jj we have the U_ij^q Rules
U_{x_ij} y_ij = {q_ii(x_ij, y_ij), x_ij} − {q_jj(x_ij), y_ij};
U_{x_ij} a_jj = q_ii(x_ij, a_jj • x_ij);
{x_ij, a_jj, z_ij} = q_ii({x_ij, a_jj}, z_ij) = q_ii(x_ij, {a_jj, z_ij}).
(3) The Peirce quadratic forms permit composition with braces: for distinct i, j, k we have the q-Composition Formulas
q_ii({a_ii, x_ij}) = U_{a_ii} q_ii(x_ij);
q_ii({a_ii, x_ij}, x_ij) = V_{a_ii} q_ii(x_ij);
q_ii({a_jj, x_ij}) = U_{x_ij}(a_jj²);
q_ii({x_ij, y_jk}) = U_{x_ij} q_jj(y_jk).
(4) We have a q-Nondegeneracy condition: if the algebra J is nondegenerate then the Peirce quadratic forms q_ii are nondegenerate (in the sense that Rad(q_ii) = 0).

proof. Everything except (4) and the first and third parts of (2) follows easily from the Peirce Identity Principle 13.9 (noting q_ii(x_ij) = a_ij a_ji if x_ij = a_ij + a_ji in A). Alternately, everything but the fourth part of (3) follows from the q-Properties 9.5 and q-Nondegeneracy 9.6(2) for the single-idempotent case, since it takes place in the Peirce subalgebra J_2(e_i + e_j) of Ĵ where e = e_i, 1 − e = e_j are supplementary orthogonal idempotents with x_ij ∈ J_1(e_i) (note by Diagonal Inheritance 10.1(1) that this Peirce subalgebra inherits nondegeneracy from J). For the fourth part of (3), if we eschew the Peirce Identity Principle we are reduced to calculating:
2U_{x_ij} q_jj(y_jk) = 2q_ii(x_ij, q_jj(y_jk) • x_ij) [by the second part of (2)] = 2q_ii(x_ij, y_jk² • x_ij) [since q_kk(y_jk) acts trivially by Peirce Orthogonality] = q_ii(x_ij, {y_jk², x_ij}) = q_ii(x_ij, {y_jk, {y_jk, x_ij}}) [by Peirce Specialization 14.2] = q_ii({x_ij, y_jk}, {y_jk, x_ij}) [by symmetry in the third part of (2) with e_i, e_j replaced by e_j + e_k, e_i] = 2q_ii({x_ij, y_jk}), and we scale by ½.
As with products in matrix algebras (cf. 3.7), most of these formulas are more natural for brace products than bullet products.

Exercise 14.4. Establish the fourth part of 14.4(3) without invoking ½ by setting x = x_ij, y = y_jk in the identity U_x U_y + U_y U_x + V_x U_y V_x = U_{{x,y}} + U_{U_x(y), y} applied to the element 1.
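The two descriptions of the Peirce quadratic form, q_ii(x_ij) = E_ii(x_ij²) = U_{x_ij} e_j, and the fourth q-Composition Formula can be checked concretely inside H_3(ℝ) ⊆ M_3(ℝ)^+, where U_x y = xyx and E_ii is the (i,i)-corner projection. A small sketch (assumptions for illustration: D = ℝ with identity involution; indices are 0-based in code):

```python
def E(i, j, v=1):
    """3x3 matrix with entry v in position (i, j), 0-based."""
    m = [[0] * 3 for _ in range(3)]
    m[i][j] = v
    return m

def add(a, b): return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]
def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)] for i in range(3)]
def U(x, y): return mul(mul(x, y), x)      # U_x y = xyx in a special Jordan algebra
def corner(m, i): return E(i, i, m[i][i])  # E_ii projection onto J_ii

a, b = 2, 3
x12 = add(E(0, 1, a), E(1, 0, a))          # a[12]
y23 = add(E(1, 2, b), E(2, 1, b))          # b[23]

# q_11(x_12) := E_11(x_12^2) = U_{x_12} e_2
q11_x = corner(mul(x12, x12), 0)
assert q11_x == U(x12, E(1, 1))

# q-Composition: q_11({x_12, y_23}) = U_{x_12} q_22(y_23)
br = add(mul(x12, y23), mul(y23, x12))     # {x,y} = xy + yx = ab[13]
q22_y = corner(mul(y23, y23), 1)
assert corner(mul(br, br), 0) == U(x12, q22_y)
print("q-form identities verified in H_3(R)")
```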
4. Connected Idempotents

Orthogonal idempotents e_i, e_j which belong together in the same "part" of the algebra are "connected" by an off-diagonal element u_ij ∈ J_ij invertible in J_ii + J_ij + J_jj. Invertibility of an off-diagonal element is equivalent to invertibility of its square, which is a diagonal element. We will frequently be concerned with diagonal elements in a Peirce decomposition, and it is important that the only way they can be invertible is for each diagonal entry to be invertible.

Lemma 14.5 (Diagonal Invertibility). If x = Σ_{i=0}^n x_ii for x_ii ∈ J_ii is a diagonal element in a unital Jordan algebra J, then x has an inverse x^{-1} in J iff each component x_ii has an inverse x_ii^{-1} in J_ii, in which case x^{-1} = Σ_i x_ii^{-1} is also diagonal.

proof. It will be easier for us to work with the quadratic conditions 6.1(Q1-2). If each x_ii has inverse x_ii^{-1} in J_ii then it is easy to see y = Σ x_ii^{-1} is the (unique) inverse of x: U_x y^k = x^{2−k} for k = 1, 2. Conversely, if x has inverse y = Σ_ij y_ij then U_x is invertible on J, and since U_x leaves each Peirce subspace invariant, each restriction U_x|_{J_ii} = U_{x_ii} and U_x|_{J_ij} = U_{x_ii, x_jj} (i ≠ j) is invertible. From x = U_x y we see x_ii = U_{x_ii} y_ii and 0 = U_{x_ii, x_jj} y_ij for i ≠ j, which by invertibility on J_ij forces y_ij = 0, y = Σ y_ii; then 1 = U_x y² = U_x(Σ y_ii²) implies each e_i = U_{x_ii} y_ii², so y_ii = x_ii^{-1} and y = Σ x_ii^{-1}.
Exercise 14.5*. Prove Diagonal Invertibility using the Linear Inverse Conditions 6.6(L1-2): x • y = 1, x² • y = x. (1) If each x_ii has inverse y_ii in J_ii, show y = Σ_i y_ii acts as inverse in J. (2) If x has inverse y = Σ_ij y_ij show each y_ii must be the inverse of x_ii; conclude by (1) that y′ = Σ_i y_ii is an inverse of x. (3) Conclude y = y′.
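Diagonal Invertibility can be illustrated in the special algebra M_3(ℝ)^+ with the diagonal frame e_i = E_ii: a diagonal element is invertible exactly when its componentwise inverse satisfies the quadratic conditions (Q1-2). A sketch using exact rational arithmetic (the specific entries are arbitrary choices for illustration):

```python
from fractions import Fraction as F

def mul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def U(x, y):  # U_x y = xyx in the special Jordan algebra M_n^+
    return mul(mul(x, y), x)

def diag(*entries):
    n = len(entries)
    return [[entries[i] if i == j else F(0) for j in range(n)] for i in range(n)]

x = diag(F(2), F(3), F(5))            # x = sum of x_ii, one per diagonal Peirce space
y = diag(F(1, 2), F(1, 3), F(1, 5))   # candidate inverse: componentwise inverses

one = diag(F(1), F(1), F(1))
assert U(x, y) == x            # quadratic inverse condition (Q1): U_x y = x
assert U(x, mul(y, y)) == one  # quadratic inverse condition (Q2): U_x y^2 = 1
print("componentwise inverses give the inverse of a diagonal element")
```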
Using the Peirce decomposition we can introduce an equivalence relation among orthogonal idempotents, which we will use in our structure theory to lump together idempotents which fall in the same "simple chunk".

Definition 14.6 (Connection). Two orthogonal idempotents e_i, e_j are (resp. strongly) connected if they have a (resp. strong) connecting element, an element v_ij ∈ J_ij which is invertible (resp. an involution) in J_ii + J_ij + J_jj = J_2(e_i + e_j). To indicate the connection together with the connecting element we use the notation e_i ∼ e_j via v_ij.

Lemma 14.7 (Off-Diagonal Invertibility). (1) We can detect off-diagonal invertibility of v_ij ∈ J_ij by diagonal invertibility:
v_ij is connecting ⟺ q_ii(v_ij), q_jj(v_ij) are invertible in J_ii, J_jj;
v_ij is strongly connecting ⟺ q_ii(v_ij) = e_i, q_jj(v_ij) = e_j.
(2) We can also detect off-diagonal invertibility by diagonal action: for v_ij ∈ J_ij the following conditions are equivalent:
(i) v_ij is connecting;
(ii) U_{v_ij}(J_ii) = J_jj and U_{v_ij}(J_jj) = J_ii;
(iii) e_j ∈ U_{v_ij}(J_ii) and e_i ∈ U_{v_ij}(J_jj).

proof. (1) For the first equivalence, by Invertible Powers 6.7(2) v_ij is invertible in B := J_ii + J_ij + J_jj iff v_ij² = q_ii(v_ij) + q_jj(v_ij) is, which is equivalent by Diagonal Invertibility 14.5 to each piece being invertible. For the second equivalence, clearly v_ij² = e_i + e_j iff q_ii(v_ij) = e_i, q_jj(v_ij) = e_j. (2) (i) ⟹ (ii) is clear by invertibility of U_{v_ij} on B and the Peirce Multiplication Rules 13.7(2), (ii) ⟹ (iii) is obvious, and finally (iii) ⟹ (i) by Invertibility 6.2(1), since it guarantees the unit e_i + e_j of B is in the range of U_{v_ij}.

Lemma 14.8 (Connection Equivalence). Connectivity and strong connectivity are equivalence relations on orthogonal idempotents: if E = {e_1, …, e_n} is a set of pairwise orthogonal idempotents in a Jordan algebra J, the (resp. strong) connection relation
e_i ∼ e_j ⟺ (i = j): e_i = e_j, or (i ≠ j): e_i, e_j (resp. strongly) connected,
is an equivalence relation on the set E:
e_i ∼ e_j via v_ij, e_j ∼ e_k via v_jk ⟹ e_i ∼ e_k via v_ik := {v_ij, v_jk} (i, j, k distinct)
(and if the connecting elements v_ij, v_jk are strong, so is v_ik).

proof. By definition ∼ is reflexive and symmetric, so we need only verify transitivity. To establish transitivity in the weak connection case, by Off-Diagonal Invertibility 14.7(1) we must show q_ii(v_ik) is invertible in J_ii [then dually for q_kk]. But q_ii(v_ik) = q_ii({v_ij, v_jk}) = U_{v_ij} q_jj(v_jk) [by q-Composition 14.4(3)] is invertible by Diagonal Invertibility 14.5 in B := J_ii + J_ij + J_jj as the ii-component of the invertible diagonal element U_{v_ij}(q_jj(v_jk) + e_i) (which in turn is invertible by Invertible Products 6.7(1), since v_ij is invertible in B by hypothesis and the diagonal element q_jj(v_jk) + e_i is invertible by Diagonal Invertibility again since each term is). To show transitivity in the strong case, by Off-Diagonal Invertibility 14.7(1) we must show q_ii(v_ik) = e_i. But again by q-Composition q_ii(v_ik) = U_{v_ij} q_jj(v_jk) = U_{v_ij} e_j [by strongness of v_jk] = q_ii(v_ij) = e_i [by strongness of v_ij].

Once more, strong connection leads to involutions.

Proposition 14.9 (Connection Involution). (1) If orthogonal idempotents e_1, e_2 in an arbitrary Jordan algebra J are strongly connected, v_12² = e_1 + e_2 for an element v_12 in the Peirce space J_12, then the element u := 1 − e_1 − e_2 + v_12 is an involution in Ĵ, and U_u on Ĵ induces an involution σ on J which interchanges e_1 and e_2:
σ(e_i) = e_j (i = 1, 2; j = 3 − i);
σ = U_{v_12} on J_2(e_1 + e_2) = J_11 + J_12 + J_22;
σ = V_{v_12} on J_1(e_1 + e_2) = J_10 + J_20;
σ = 1_{J_00} on J_0(e_1 + e_2) = J_00.
(2) On J_2(e_1 + e_2) we have a finer description of the action on the Peirce subspaces: if as in the single-variable case 9.3 we define the trace by t_i(x) = q_i(v_12, x) (i = 1, 2; j = 3 − i), then
σ(a_ii) = t_j(a_ii • v_12) on J_ii; σ(x_12) = {t_i(x_12), v_12} − x_12 on J_12; q_i(σ(x_12)) = q_j(x_12).
(3) The fixed set of σ on J_12 is {J_ii, v_12}: σ({x_ii, v_12}) = {x_ii, v_12}; in particular σ(v_12) = v_12.

proof.
(1) We have u = e_0 + v where e_0 = 1 − e_1 − e_2 ∈ Ĵ_00 is an idempotent orthogonal to e_1, e_2, and therefore by Peirce Orthogonality 13.7(1) u² = e_0² + v² = e_0 + e_1 + e_2 = 1. Therefore by the Involution
Lemma 6.9 the map U_u is an involutory automorphism on Ĵ, and it (and its inverse!) leave the ideal J invariant, so it induces an involutory automorphism σ on J. Making heavy use of the Peirce Recovery formulas 13.6(2) for e = e_1 + e_2, we see σ = U_u = U_{e_0 + v} reduces by Peirce Orthogonality 13.7(3) to U_{e_0} = 1_{J_0} on J_0(e), to U_{e_0, v} = U_{1, v} = V_v on J_1(e) (note U_{e, v}(J_1(e)) ⊆ U_{J_2}(J_1) = 0 by the Peirce Orthogonality Rules 8.5), and to U_v on the Peirce subalgebra J_2(e) (where v is involutory, so σ(e_i) = e_j from Connection Action 10.3(2)). Similarly, (2), (3) follow from Connection Involution 10.3(2), (3) applied to J_2(e).

And once more we can strengthen connection by means of a diagonal isotope. We derive only the form we will need later, and leave the general case as an exercise.

Lemma 14.10 (Diagonal Isotope). (1) Connection can be strengthened by passing to a diagonal isotope: if E = {e_1, …, e_n} is a supplementary set of pairwise orthogonal idempotents in a unital Jordan algebra J with e_1 connected to e_j by connecting elements v_1j ∈ J_1j (j ≠ 1), then for u_11 := e_1, u_jj := q_jj(v_1j)^{-1} the diagonal element u = Σ_{j=1}^n u_jj is invertible, and the diagonal isotope J^(u) has a supplementary set E^(u) = {e_1^(u), …, e_n^(u)} of orthogonal idempotents e_j^(u) := q_jj(v_1j) strongly connected to e_1^(u) := e_1 via the now-involutory v_1j ∈ J_1j^(u): v_1j^(2,u) = e_1^(u) + e_j^(u).
(2) This isotope has exactly the same Peirce decomposition as the original,
J_ij^(u)(E^(u)) = J_ij(E),
and the new quadratic forms and Peirce specializations are given (for distinct 1, i, j) by
q_11^(u)(x_1j) = q_11(x_1j), q_ii^(u)(x_ij) = q_ii(x_ij, q_jj(v_1j)^{-1} • x_ij);
σ^(u)(a_11) = σ(a_11), σ^(u)(a_ii) = σ(a_ii) σ(q_ii(v_1i))^{-1}.

proof. We can derive this from the single-idempotent case Diagonal Isotope 10.4, noting that by Peirce Orthogonality the space J_2(e_1 + e_j) is a subalgebra of J̃ := J^(u) coinciding with J_2(e_1^(u) + e_j^(u))^(e_1 + u_jj). Alternately, we can derive it from our multiple Peirce information as follows.
(1) We know u is invertible with u^{-1} = Σ_{i=1}^n u_ii^{-1} by Diagonal Invertibility 14.5, and in J̃ has 1̃ := 1^(u) = u^{-1} = Σ_j ẽ_i for ẽ_i := e_i^(u) := u_ii^{-1} orthogonal idempotents, since ẽ_i^(2,u) = U_{u_ii^{-1}} u = U_{u_ii^{-1}}(u_ii) = u_ii^{-1} = ẽ_i
and ẽ_i •̃ ẽ_j = ½{ẽ_i, u, ẽ_j} ∈ {J_ii, u, J_jj} = 0 when i ≠ j by Peirce orthogonality and diagonality of u. The new idempotents are strongly connected by the old v_1j:
v_1j^(2,u) = U_{v_1j}(u_11 + u_jj) [by Peirce Orthogonality] = U_{v_1j}(e_1) + U_{v_1j}((U_{v_1j} e_1)^{-1}) [inverses taken in the subalgebra J_11 + J_1j + J_jj] = q_jj(v_1j) + U_{v_1j} U_{v_1j}^{-1} e_1 [by Invertible Products 6.7(1)] = e_j^(u) + e_1^(u).
(2) The new Peirce spaces are J̃_ij = U_{ẽ_i, ẽ_j} J̃ ⊆ U_{J_ii, J_jj}(J) ⊆ J_ij by the Peirce U and Triple Products 13.7(2), so J̃ = ⊕ J̃_ij and ⊕ J_ij = J imply J̃_ij = J_ij for each i, j. The new Peirce Quadratic Forms 14.4(1) and Peirce Specializations 14.2 take the form Ẽ_ii(x_ij^(2,u)) = E_ii(U_{x_ij} u) [using (1)] = U_{x_ij} u_jj = q_ii(x_ij, u_jj • x_ij) [by the U_ij^q Rules 14.4(2)] and Ṽ_{a_ii} = V_{a_ii, u_ii} [by Jordan Isotope 7.3(2)] = V_{a_ii} V_{u_ii} [by Peirce Specialization 14.2]. In particular for i = 1, u_11 = e_1 we have q̃_11(x_1j) = q_11(x_1j) and Ṽ_{a_11} = V_{a_11}, while for i ≠ 1 we have q̃_ii(x_ij) = q_ii(x_ij, q_jj(v_1j)^{-1} • x_ij) and Ṽ_{a_ii} = V_{a_ii} V_{q_ii(v_1i)^{-1}} = V_{a_ii} V_{q_ii(v_1i)}^{-1} = σ(a_ii) σ(q_ii(v_1i))^{-1}.

Exercise 14.10. Let E = {e_1, …, e_n} be as in 14.10, and let u_jj be arbitrary invertible elements in J_jj. (1) Show u = Σ_{j=1}^n u_jj is invertible, and the diagonal isotope J^(u) has a supplementary set E^(u) = {e_1^(u), …, e_n^(u)} of orthogonal idempotents e_j^(u) := u_jj^{-1}. (2) Show the Peirce decompositions in J^(u) and J coincide: J_ij^(u)(E^(u)) = J_ij(E). (3) Show the new quadratic forms and Peirce specializations are given by
q_ii^(u)(x_ij) = q_ii(x_ij, u_jj • x_ij); σ^(u)(a_ii) = V_{a_ii}^(u) = V_{a_ii} V_{u_ii} = σ(a_ii) σ(u_ii) on J_ij.
CHAPTER 15
Hermitian Symmetries

1. Hermitian Frames

In the Hermitian and Spin Coordinatization Theorems, we needed 2 connected idempotents and a spin or hermitian condition. Once we get to 3 or more connected idempotents we don't need to impose additional behavioral conditions: the algebras all have a uniform pattern, and the key to having hermitian structure is the mere existence of hermitian frames.

Definition 15.1 (Hermitian n-Frame). For n ≥ 3, a hermitian n-frame in a unital Jordan algebra J is a family of hermitian n × n matrix units H = {h_ij | 1 ≤ i, j ≤ n}, consisting of orthogonal idempotents h_11, …, h_nn together with strongly connecting elements h_ij = h_ji (i ≠ j) satisfying:
(1) the Supplementary Rule: Σ_{i=1}^n h_ii = 1;
(2) the Hermitian Product Rules for distinct indices i, j, k:
h_ii² = h_ii, h_ij² = h_ii + h_jj, {h_ii, h_ij} = h_ij, {h_ij, h_jk} = h_ik;
(3) the Hermitian Orthogonality Rule:
{h_ij, h_kl} = 0 if {i, j} ∩ {k, l} = ∅.
Note also that by definition a hermitian n-frame is always strong. In contrast to the case of 2-frames in Chapter 12, we will not need to impose any cyclicity conditions, hence our coordinate algebras will not have to be symmetrically-generated.

Example 15.2 (Special). (1) The standard hermitian n-frame for H_n(D, *) consists of the standard hermitian matrix units h_ii := 1[ii] = e_ii, h_ij := 1[ij] = e_ij + e_ji = h_ji. (2) In general, if E^a = {e_ij | 1 ≤ i, j ≤ n} is a family of n × n associative matrix units for an associative algebra A, in the usual sense that
1 = Σ_{i=1}^n e_ii, e_ij e_kl = δ_jk e_il,
then the associated family of symmetric matrix units H(E^a): h_ii := e_ii, h_ij := e_ij + e_ji is a hermitian n-frame for the Jordan algebra A^+ (or any special subalgebra J ⊆ A^+ containing H); if an involution * on A has e_ij* = e_ji for all i, j, then H is a hermitian n-frame for H(A, *). (3) Conversely, in any unital special Jordan algebra 1 ∈ J ⊆ A^+ these are the only hermitian families:
H hermitian in J ⟹ H = H(E^a) for e_ii := h_ii, e_ij := e_ii h_ij e_jj.

proof. (1), (2) are straightforward associative calculation; the condition e_ij* = e_ji in (2) guarantees the h_ij are hermitian, H(E^a) ⊆ H(A, *). To see the converse (3), we use the Peirce Decomposition 13.4 for A^+: by the hermitian matrix units condition 15.1 the e_ii := h_ii are supplementary associative idempotents, and they are associatively orthogonal since (as we have seen before) {e, x} = 0 ⟹ exe = ½({e, {e, x}} − {e, x}) = 0 ⟹ ex = e(ex + xe) = e{e, x} = 0 and dually. Then the associative Peirce components e_ij := e_ii h_ij e_jj satisfy h_ij = e_ij + e_ji, and by 15.1(2) h_ij² = e_ii + e_jj ⟹ e_ij e_ji = e_ii for i ≠ j; {h_ij, h_jk} = h_ik ⟹ e_ij e_jk = e_ik for distinct i, j, k; e_ij ∈ A_ij ⟹ e_ij e_kl = 0 if j ≠ k; so E^a = {e_ij}_{1≤i,j≤n} forms a family of associative matrix units.
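The standard family of Example 15.2(1) can be verified mechanically: in M_3(ℝ) the symmetric matrix units h_ii = e_ii, h_ij = e_ij + e_ji satisfy the Hermitian Product Rules under the brace product {x, y} = xy + yx. A small sketch (1-based indices as in the text; D = ℝ is an assumption for illustration):

```python
def E(i, j):
    m = [[0] * 3 for _ in range(3)]
    m[i - 1][j - 1] = 1   # 1-based indices as in the text
    return m

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def add(a, b): return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]
def brace(a, b): return add(mul(a, b), mul(b, a))

h = {}
for i in range(1, 4):
    h[i, i] = E(i, i)
    for j in range(1, 4):
        if i != j:
            h[i, j] = add(E(i, j), E(j, i))

# Hermitian Product Rules 15.1(2) for distinct i, j, k:
i, j, k = 1, 2, 3
assert mul(h[i, i], h[i, i]) == h[i, i]                 # h_ii^2 = h_ii
assert mul(h[i, j], h[i, j]) == add(h[i, i], h[j, j])   # h_ij^2 = h_ii + h_jj
assert brace(h[i, i], h[i, j]) == h[i, j]               # {h_ii, h_ij} = h_ij
assert brace(h[i, j], h[j, k]) == h[i, k]               # {h_ij, h_jk} = h_ik
print("standard symmetric matrix units form a hermitian 3-frame")
```

(The Orthogonality Rule 15.1(3) is vacuous for n = 3, since two disjoint index pairs need n ≥ 4.)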
Thus another way to say that H is a hermitian n-frame is that J contains a copy of H_n(Φ) = (⊕_{i=1}^n Φ e_ii) ⊕ (⊕_{i<j} Φ(e_ij + e_ji)) such that H is the copy of the standard symmetric matrix units. Having each pair h_ii, h_jj strongly connected by h_ij is not enough: the connecting elements themselves must interact according to {h_ij, h_jk} = h_ik. In case the h_ij are all tangled up, we can start all over again and replace them by an untangled family built up only from the connecting elements h_1j.

Lemma 15.3 (Hermitian Completion). Any supplementary family E = {e_1, …, e_n} of n orthogonal idempotents which are strongly connected can be imbedded in a hermitian n-frame: if e_1 is strongly connected to each e_j (j = 2, 3, …, n) via v_1j ∈ J_1j, then these can be completed to a family H = {h_ij | 1 ≤ i, j ≤ n} of hermitian matrix units by taking (for distinct indices 1, i, j)
h_ii := e_i, h_1j := v_1j =: h_j1, h_ij := {v_1i, v_1j} =: h_ji.

proof. Let J_ij denote the Peirce spaces with respect to the orthogonal idempotents {e_i}. By the Peirce Identity Principle 13.9 it suffices to prove this in special algebras. But the supplementary e_kk := e_k are associative idempotents, and by the Special Example 15.2(3) in
each B_1j = J_11 + J_1j + J_jj we have v_1j := e_1j + e_j1 for 2 × 2 associative matrix units e_11, e_1j, e_j1, e_jj, so we obtain a full family of n × n associative matrix units E^a = {e_ij := e_i1 e_1j} because e_ij e_kl = e_i1 e_1j e_k1 e_1l = δ_jk e_i1 e_11 e_1l = δ_jk e_i1 e_1l = δ_jk e_il. Hence the associated symmetric matrix units H(E^a) = {e_ii, e_ij + e_ji} of the Special Example 15.2(2) are a hermitian family, given for i = j by h_ii = e_ii = e_i, for i = 1 ≠ j by h_1j = e_1j + e_j1 = v_1j, and for distinct 1, i, j by h_ij = e_ij + e_ji = e_i1 e_1j + e_j1 e_1i = {e_1i + e_i1, e_1j + e_j1} = {v_1i, v_1j}. Thus our definitions above do lead to a hermitian completion.

Exercise 15.3*. Give a direct Jordan proof of Hermitian Completion by tedious calculation, showing that for i, j ≠ 1 the new h_ij := {h_i1, h_1j} continue to satisfy h_ij² = h_ii + h_jj, {h_ii, h_ij} = h_ij, {h_ij, h_jk} = h_ik, and {h_ij, h_kl} = 0 if {i, j} ∩ {k, l} = ∅. After all the calculations are over, step back and survey the carnage, then wistfully appreciate the power of the Peirce Identity Principle.
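The completion recipe h_ij := {v_1i, v_1j} can be watched in action in M_3(ℝ): even if the given connecting elements carry an awkward twist (below, a sign flipped on v_13, which is an arbitrary choice for illustration), the completed family still satisfies the frame rules:

```python
def E(i, j, v=1):
    m = [[0] * 3 for _ in range(3)]
    m[i - 1][j - 1] = v
    return m

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def add(a, b): return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]
def brace(a, b): return add(mul(a, b), mul(b, a))

e = {1: E(1, 1), 2: E(2, 2), 3: E(3, 3)}
# Strong connecting elements, with a deliberate sign twist on v_13:
v12 = add(E(1, 2), E(2, 1))
v13 = add(E(1, 3, -1), E(3, 1, -1))
assert mul(v12, v12) == add(e[1], e[2]) and mul(v13, v13) == add(e[1], e[3])

# Hermitian Completion: h_23 := {v_12, v_13}
h23 = brace(v12, v13)
assert mul(h23, h23) == add(e[2], e[3])   # h_23^2 = h_22 + h_33
assert brace(e[2], h23) == h23            # {h_22, h_23} = h_23
assert brace(v12, h23) == v13             # {h_12, h_23} = h_13
print("completed family satisfies the hermitian frame rules")
```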
2. Hermitian Symmetries

The Coordinatization Theorem we are aiming for involves 2 aspects: finding the coordinate algebra D, and using it to coordinatize the entire algebra. The first involves Peirce relations just in the "Northwest" 3 × 3 chunk of the algebra, while the second involves constructing the connection symmetries that guarantee all off-diagonal spaces look like D and all diagonal ones like H(D). In this section our main goal is to construct Hermitian Symmetries U_π corresponding to each permutation π of the indices, which flow directly out of Hermitian Involutions corresponding to the transpositions (ij).

Lemma 15.4 (Hermitian Involutions). (1) A hermitian n-frame H = {h_ij} gives rise to a family U(H) of hermitian involutions U_ij, involutory automorphisms of J given by
U_ij = U_ji := U_{u_ij} for u_ij := 1 − (h_ii + h_jj) + h_ij (i ≠ j),
which permute the hermitian matrix units, involutions, and the Peirce spaces according to the transposition τ = (ij) on the index set: for i ≠ j we have
(2) the Action Formulas
U_ij = 1 on Σ_{k,l≠i,j} J_kl;
U_ij = V_{h_ij} on Σ_{k≠i,j} (J_ki + J_kj);
U_ij = U_{h_ij} on J_ii + J_ij + J_jj;
(3) the Index Permutation Principle
U_ij(h_kk) = h_{τ(k)τ(k)}, U_ij(h_kl) = h_{τ(k)τ(l)}, U_ij(u_kl) = u_{τ(k)τ(l)}, U_ij(J_kl) = J_{τ(k)τ(l)};
(4) and interaction of the involutions by the Fundamental Involution Formula
U_ij U_kl U_ij = U_{τ(k)τ(l)} (τ := (ij)).

proof. (1-2) Note that the elements u_ij do satisfy u_ij = u_ji, u_ij² = 1, so by the Connection Involution Lemma 14.9(1) they determine involutory automorphisms U_ij = U_ji on J, U_ij² = 1_J as in (1), with action given by (2) using the Peirce Recovery 13.6(2) to identify J_0(h_ii + h_jj) = Σ_{k,l≠i,j} J_kl, J_1(h_ii + h_jj) = Σ_{k≠i,j} (J_ki + J_kj), J_2(h_ii + h_jj) = J_ii + J_ij + J_jj.
(3) The first action U_ij(h_kk) = h_{τ(k)τ(k)} of (3) follows for k ≠ i, j since τ(k) = k and U_ij fixes h_kk ∈ J_0 by (2), and for k = i by U_ij(h_ii) = h_jj from Connection 14.9(1) again. The second action of (3) on matrix units h_kl is trivial if k, l are distinct from i, j (then τ fixes the indices k, l and U_ij fixes the element h_kl ∈ J_0 by (2) again); if k, l agree entirely with i, j then by symmetry we can assume (k, l) = (i, j), where U_ij(h_ij) = h_ij [by Connection 14.9(3)] = h_ji = h_{τ(i)τ(j)}; while if k, l agree just once with i, j we may assume k = i, l ≠ i, j, in which case U_ij(h_il) = V_{h_ji}(h_il) [by (2)] = h_jl [by the Hermitian Product Rule 15.1(2)] = h_{τ(i)τ(l)}. The third action of (3) on involutions u_kl then follows since they are linear combinations of matrix units. The fourth action of (3) on spaces follows from the fact that an automorphism U takes Peirce spaces wherever their idempotents lead them, U(U_{e,f} J) = U_{U(e), U(f)}(J).
(4) follows directly from the definition of the hermitian involutions and the Fundamental Formula: U_ij U_kl U_ij = U_{u_ij} U_{u_kl} U_{u_ij} [by definition of U] = U_{U_{u_ij}(u_kl)} [Fundamental Formula] = U_{U_ij(u_kl)} [definition again] = U_{u_{τ(k)τ(l)}} [by (3)] = U_{τ(k)τ(l)} [by definition yet again].

Exercise 15.4A*. Give an alternate proof using (as far as you can) the Peirce Identity Principle 13.9 and calculations with associative matrix units of Special Example 15.2(3).

Exercise 15.4B*. Establish 15.4 without using any Principles whatsoever, by direct calculation using Peirce Orthogonality on u_ij = u_0 + h_ij for u_0 ∈ J_0(e_i + e_j).
Just as the transpositions generate all permutations, so the hermitian involutions generate all hermitian symmetries.
Theorem 15.5 (Hermitian Symmetries). (1) If H = {h_ij} is a hermitian n-frame, and U_ij are the associated hermitian involutions U_ij = U_{u_ij} ∈ Aut(J), U_ij² = 1_J, then we have a monomorphism π → U_π of the symmetric group S_n → Aut(J) extending the map (ij) → U_{(ij)} := U_ij:
U_1 = 1_J, U_{π∘ρ} = U_π ∘ U_ρ (π, ρ ∈ S_n).
(2) These hermitian symmetries naturally permute the hermitian matrix units and involutions and Peirce spaces by the Index Permutation Principle
U_π(h_ii) = h_{π(i)π(i)}, U_π(h_ij) = h_{π(i)π(j)}, U_π(u_ij) = u_{π(i)π(j)}, U_π(J_ij) = J_{π(i)π(j)}.
(3) Furthermore, we have the Agreement Principle:
U_π = 1 on J_ij if π fixes both i and j: π(i) = i, π(j) = j;
U_π = U_ρ on J_ij if π(i) = ρ(i), π(j) = ρ(j).

proof. The hard part is showing (1), that the map (ij) → U_{(ij)} extends to a well-defined homomorphism π → U_π on all of S_n. Once we have established this, the rest will be an anticlimax. For (2) it suffices if Index Permutation holds for the generators U_{(ij)}, which is just the import of 15.4(3). For the First Agreement Principle in (3), if π fixes i, j it can be written as a product of transpositions (kl) for k, l ≠ i, j, hence [by (1) and well-definedness] U_π is a product of U_{(kl)}, which by the Action Formula 15.4(2) are all the identity on J_ij. From this the Second Agreement Principle follows: π(i) = ρ(i), π(j) = ρ(j) ⟹ ρ^{-1}π(i) = i, ρ^{-1}π(j) = j ⟹ U_ρ^{-1} U_π = U_{ρ^{-1}∘π} [by (1)] = 1 on J_ij [by First Agreement] ⟹ U_π = U_ρ on J_ij.

The key to (1) is that S_n can be presented by generators t_ij = t_ji for i ≠ j = 1, 2, …, n and t-relations
t_ij² = 1, t_ij t_kl t_ij = t_{τ(k)τ(l)} (τ = (ij)),
which are precisely the relations which we, with admirable foresight, have established in the Fundamental Involution Formula 15.4(3). Let F be the free group on generators t_ij = t_ji for i ≠ j = 1, 2, …, n, and K the normal subgroup generated by the t-relations, i.e. by all elements t_ij², t_ij t_kl t_ij t_{τ(k)τ(l)}^{-1}. By the universal property of the free group, the set-theoretic map t_ij → U_{(ij)} extends to an epimorphism φ₀ : F → U ⊆ Aut(J) for U the subgroup generated by the hermitian involutions 15.4(1). By 15.4(1), 15.4(3) the generators of K lie in the kernel of φ₀, so we have an induced epimorphism φ : F/K → U. We use t̃ to denote
the coset of t ∈ F modulo K. Furthermore, we have a restriction homomorphism ψ : U → Symm(H) ≅ S_n for H = {h_11, …, h_nn}, since by the Index Permutation Principle 15.4(3) the U_{(ij)} (hence all of U) stabilize H; the image of ψ contains by 15.4(1) the generators (ij) of S_n, hence is all of S_n, so ψ is an epimorphism too. The composite ψ̃ := ψ ∘ φ : F/K → U → Symm(H) ≅ S_n sends the coset t̃_ij → U_{(ij)} → (ij). To show the epimorphisms φ, ψ are isomorphisms (so ψ^{-1} is an isomorphism S_n → U as desired) it suffices if the composite is injective, i.e. ψ̃ : F/K → S_n via t̃_ij → (ij) is a presentation of S_n. This is a moderately "well-known" group-theoretic result, which we proceed to prove sotto voce.
Suppose ψ̃ is NOT injective, and choose a product p = t̃_{i₁j₁} ⋯ t̃_{i_N j_N} ≠ 1 of SHORTEST LENGTH N in Ker(ψ̃). We can rewrite p as p^(n) p^(n−1) ⋯ p^(2) without changing its length, where p^(k) = Π_{j<k} t̃_{kj} gathers the factors with leading index k; let r > s > ⋯ > q ≥ 2 list the indices with p^(k) ≠ 1. BY MINIMALITY OF N, the j's in p^(r) MUST BE DISTINCT, p^(r) = t̃_{rj_m} ⋯ t̃_{rj_1} ⟹ j_k ≠ j_l, since any repetition would lead to a sub-expression
t̃_{rj} (Π_{r>k≠j} t̃_{rk}) t̃_{rj} = t̃_{rj} (Π_{r>k≠j} t̃_{rk}) t̃_{rj}^{-1} (by the t-relations)
= Π_{r>k≠j} t̃_{rj} t̃_{rk} t̃_{rj}^{-1} (conjugation by t̃_{rj} is a group automorphism)
= Π_{r>k≠j} t̃_{rj} t̃_{rk} t̃_{rj} (by the t-relations)
= Π_{r>k≠j} t̃_{jk} (by the t-relations, since τ = (rj) sends r to j and fixes k ≠ r, j)
and hence to an expression for p of length N − 2, contrary to minimality of N. Applying ψ̃ to the p-product gives 1 = ψ̃(p) = ψ̃(p^(r)) ψ̃(p^(s)) ⋯ ψ̃(p^(q)) in S_n; now ψ̃(p^(i)) for i < r involves only transpositions (ik) for r > i > k, all of which fix the index r, so acting on r gives r = 1(r) = ψ̃(p^(r)) ψ̃(p^(s)) ⋯ ψ̃(p^(q))(r) (by the p-product and ψ̃(p) = 1) = ψ̃(p^(r))(r) = (rj_m) ⋯ (rj_1)(r) [by the p-product] = j_1 [since BY
DISTINCTNESS all other (rj_k), k ≠ 1, fix j_1], contrary to r > j_1. Thus we have reached a contradiction, so no p ≠ 1 in Ker(ψ̃) exists, and the epimorphism ψ̃ is an isomorphism. Thus the t-relations are indeed a presentation of S_n by generators and relations.
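The t-relations underlying this presentation can be checked mechanically for small n; a quick sketch in Python, with permutations represented as tuples and composition (p ∘ q)(x) = p(q(x)):

```python
from itertools import combinations

N = 4
IDENT = tuple(range(N))

def transposition(i, j):
    p = list(IDENT)
    p[i], p[j] = p[j], p[i]
    return tuple(p)

def compose(p, q):
    """(p o q)(x) = p(q(x))."""
    return tuple(p[q[x]] for x in range(N))

for i, j in combinations(range(N), 2):
    t_ij = transposition(i, j)
    assert compose(t_ij, t_ij) == IDENT          # t_ij^2 = 1
    tau = lambda x: j if x == i else i if x == j else x
    for k, l in combinations(range(N), 2):
        t_kl = transposition(k, l)
        lhs = compose(t_ij, compose(t_kl, t_ij))
        assert lhs == transposition(tau(k), tau(l))  # t_ij t_kl t_ij = t_{tau(k)tau(l)}
print("t-relations hold for all transpositions of S_4")
```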
3. Problems for Chapter 15

Problem 1. (1) Show that the scalar annihilators Ann_Φ(J) := {α ∈ Φ | αJ = 0} form an ideal in Φ, and that J is always a Φ′-algebra for Φ′ = Φ/Ann_Φ(J). We say Φ acts faithfully on J if the scalar annihilators vanish, Ann_Φ(J) = 0. (2) For a unital Jordan algebra show that the scalar annihilators coincide with the scalars that annihilate 1, {α ∈ Φ | α1 = 0}, so Φ acts faithfully on J iff it acts faithfully on the unit 1: αJ = 0 ⟺ α1 = 0. (3) Show that if H is a family of n × n hermitian matrix units in J ≠ 0, and Φ acts faithfully on J, then the units are linearly independent (in particular, nonzero) and J contains an isomorphic copy of H_n(Φ). (4) Conclude that in general J contains a family of n × n hermitian matrix units iff it contains an isomorphic copy of H_n(Φ) (in the category of unital algebras).
CHAPTER 16
The Coordinate Algebra

In this chapter we take up the task of finding the coordinate algebra D. This is strictly a matter of the "upper 3 × 3 chunk" of the algebra. The coordinates themselves live in the upper 2 × 2 chunk. Recall from the 2 × 2 coordinatization in Chapters 11 and 12 that spin frames never did develop a coordinate algebra, and hermitian frames were only fitted with un-natural coordinates from End(J_1(e)) upon the condition that J_1(e) was cyclic as a J_2(e)-module. But once a frame has room to expand into the Peirce spaces J_13 and J_23, the coordinate algebra J_12 automatically has a product of its very own.
1. The Coordinate Triple

In order to coordinatize hermitian matrices, the coordinates must consist of a unital coordinate algebra with involution, plus a designated coordinate subspace for the diagonal matrix entries. Luckily, in the presence of ½ this diagonal coordinate subspace must be just the full space of all symmetric elements, but if we did not have ½ our life would be more complicated and there could be many possibilities for the diagonal subspace: over an imperfect field Ω of characteristic 2 (i.e. Ω² < Ω) with identity involution we would have to accept the simple algebra of symmetric n × n matrices with diagonal entries only from Ω², and in general we would have to accept the prime ring of symmetric n × n matrices over Z[t] with diagonal entries only from Z[t]² = Z[t²] + 2Z[t].

Definition 16.1 (Hermitian Coordinate *-Algebra). If H = {h_ij} is a hermitian n-frame for n ≥ 3 in a unital Jordan algebra J, then the hermitian coordinate *-algebra D for J determined by H is defined as follows. (1) The coordinate algebra is the space D := J_12 with product
a • b := {{a, h_23}, {h_31, b}} = {U_(23)(a), U_(13)(b)}
(a, b ∈ D = J_12; U_(ij) = U_{u_ij} for u_ij = 1 − (h_ii + h_jj) + h_ij), noting that U_(ij) = V_{h_ij} on J_ik for distinct i, j, k by the Action Formula 15.4(2).
(2) The coordinate involution on D is
a* := U_{h_12}(a) = U_(12) a
(noting that U_(12) = U_{h_12} on J_12 by the Action Formula 15.4(2)). (3) The diagonal coordinate space is the image D_0 ⊆ D of the space J_11 under the diagonalization map δ_0 = V_{h_12}:
D_0 := δ_0(J_11) (δ_0(a_11) := {a_11, h_12} = V_{h_12}(a_11)).
By abuse of language, we identify the *-algebra (D, *) with D. In keeping with our long-standing terminology, we want D to stand for the basic object, the coordinate *-algebra; rather than introduce a new name for the coordinate algebra shorn of its involution (the set J_12 with merely a product), we use the same symbol for both the shorn and unshorn versions. To see that we have captured the correct notion, let us check our definition when the Jordan algebra is already a hermitian matrix algebra.

Example 16.2 (Hermitian Matrix). If J = H_n(D, *) as in Hermitian Matrix Example 3.7, and h_ij = 1[ij] are the standard hermitian matrix units, then the coordinate algebra is the space J_12 = D[12] under the product
a[12] • b[12] = {{a[12], 1[23]}, {1[31], b[12]}} = {a[13], b[32]} = ab[12]
by Basic Brace Products 3.7(2); the diagonal coordinate space is {H(D, *)[11], 1[12]} = H(D, *)[12] by Basic Brace Products 3.7(2), with coordinate involution
(a[12])* = U_{1[12]}(a[12]) = (1a*1)[12] = a*[12]
by Basic U Products 3.7(3).
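The coordinate product of Example 16.2 can be verified directly in H_3(ℝ) (an assumption for illustration: D = ℝ with identity involution, so d[ij] = d e_ij + d e_ji and braces are {x, y} = xy + yx):

```python
def E(i, j, v=1):
    m = [[0] * 3 for _ in range(3)]
    m[i - 1][j - 1] = v
    return m

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def add(a, b): return [[p + q for p, q in zip(ra, rb)] for ra, rb in zip(a, b)]
def brace(x, y): return add(mul(x, y), mul(y, x))

def box(d, i, j):
    """d[ij] := d e_ij + d e_ji (identity involution on D = R)."""
    return add(E(i, j, d), E(j, i, d))

a, b = 2, 5
x = brace(box(a, 1, 2), box(1, 2, 3))   # {a[12], 1[23]} = a[13]
y = brace(box(1, 3, 1), box(b, 1, 2))   # {1[31], b[12]} = b[32]
assert x == box(a, 1, 3) and y == box(b, 3, 2)
assert brace(x, y) == box(a * b, 1, 2)  # a[12] . b[12] = ab[12]
print("coordinate product reproduces multiplication in D")
```

Taking b = 1 in the last line also exhibits h_12 = 1[12] as the unit of the coordinate algebra.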
Thus the coordinate ∗-algebra is canonically isomorphic to D under the natural identifications, with D_0 identified with H(D, ∗). More generally, if 1 ∈ J ⊆ A⁺ is any unital special algebra then by the Special Example 15.2(3) any hermitian family in J has the form H = H(E^a) for a family E^a = {e_ij} of associative matrix units, and the product in the coordinate algebra D is

a ∘ b = (a_12 + a_21) ∘ (b_12 + b_21) = (a_12 e_23)(e_31 b_12) + (b_21 e_13)(e_32 a_21) = a_12 e_21 b_12 + b_21 e_12 a_21,

the involution is a∗ = e_12 a_21 e_12 + e_21 a_12 e_21, and the diagonalization map is δ_0(a_11) = a_11 e_12 + e_21 a_11.

The proof of the pudding is in the eating, and of the coordinate triple in its ability to coordinatize hermitian matrix algebras. We must now verify that the coordinate triple lives up to the high standards we have set for it. By Jordan Coordinates 14.1 we already
know it must be alternative with nuclear involution (even associative once n ≥ 4).
Theorem 16.3 (Coordinate Algebra). Let D be the coordinate algebra of J with respect to a hermitian n-frame H = {h_ij} (n ≥ 3) in a unital Jordan algebra J. (1) The algebra D is a unital linear algebra with unit 1 = h_12, whose product mimics the brace product {·,·} : J_13 × J_32 → J_12, and whose coordinate involution is a true involution on the coordinate algebra:
(i) d ∘ h_12 = h_12 ∘ d = d;
(ii) {x_13, y_32} = U_(23)(x_13) ∘ U_(13)(y_32);
(iii) (d ∘ b)∗ = b∗ ∘ d∗, (d∗)∗ = d, h_12∗ = h_12.

(2) The diagonal coordinate space is the space of symmetric elements under the coordinate involution, whose action on D mimics that of J_11 on J_12 and which is isomorphic as a Jordan algebra to J_11 under the diagonal coordinate map δ_0 : J_11 → D_0⁺, with norms and traces given by the q- and t-forms:

(i) D_0 = H(D, ∗) = {a ∈ D | a∗ = a};
(ii) {a_11, d} = δ_0(a_11) ∘ d  (a_11 ∈ J_11, d ∈ D);
(iii) δ_0(a_11²) = δ_0(a_11) ∘ δ_0(a_11)  (a_11 ∈ J_11);
(iv) d ∘ d∗ = δ_0(q_11(d)), d + d∗ = δ_0(t_11(d)) = δ_0(q_11(d, h_12)).
proof. (1) The Peirce relations show the product gives a well-defined bilinear map D × D → D, and h_12 is a left unit as in (i) since h_12 ∘ b = {U_(23)(h_12), U_(13)(b)} [by Definition 16.1(1)] = {h_13, U_(13)(b)} [by Index Permutation 15.4(3)] = U_(13)(U_(13)(b)) [by the Action Formula 15.4(2)] = b [by the involutory nature 15.4(1) of the hermitian involutions U_(ij)]. Dually h_12 is a right unit (or we can use the result below that ∗ is an involution with h_12∗ = h_12). (ii) is just a reformulation of the definition 16.1(1) of the product: U_(23)(x_13) ∘ U_(13)(y_32) = {U_(23)(U_(23)(x_13)), U_(13)(U_(13)(y_32))} = {x_13, y_32} [since the U_(ij) are involutory]. Certainly ∗ in 16.1(3) is linear of period 2, since it is given by the hermitian involution U_(12), and is an anti-homomorphism (iii) since (d ∘ b)∗ = U_(12)({U_(23)(d), U_(13)(b)}) [by the definitions 16.1(1),(3)]
= {U_(12)U_(23)(d), U_(12)U_(13)(b)} [since the hermitian involutions are algebra automorphisms] = {U_(13)U_(12)(d), U_(23)U_(12)(b)} [by the Agreement Principle 15.5(3), since (12)(23) = (13)(12) and (12)(13) = (23)(12)] = {U_(23)(U_(12)(b)), U_(13)(U_(12)(d))} = b∗ ∘ d∗ [by the definitions 16.1(1),(3) again]. The involution must fix h_12 since it is the unit element, but we can also see it directly: h_12∗ = U_(12)(h_12) = h_12 by Index Permutation 15.4(3).

(2) This follows via the Peirce Identity Principle 13.9 from the Special Example 15.2(3), with the product, involution, and δ_0 given as in the Hermitian Matrix Example 16.2. In (i) we certainly have D_0 ⊆ H(D, ∗) since, by the formula for ∗ in 16.2, δ_0(a_11)∗ = e_12(e_21 a_11)e_12 + e_21(a_11 e_12)e_21 = a_11 e_12 + e_21 a_11 = δ_0(a_11). The reverse inclusion will follow from the trace relation in (iv): since ½ ∈ Φ, all symmetric elements are traces, d = d∗ ⟹ d = ½(d + d∗); the trace relation will result by linearizing d → d, h_12 in the norm relation (iv). To establish the norm relation that norms fall in D_0, we compute: d ∘ d∗ = a_12 e_21(e_12 a_21 e_12) + (e_21 a_12 e_21)e_12 a_21 = (a_12 a_21)e_12 + e_21(a_12 a_21) = δ_0(a_12 a_21). For the action (ii) of D_0 on D we have, by the formula for ∘ in 16.2, δ_0(a_11) ∘ d = (a_11 e_12)e_21 a_12 + a_21 e_12(e_21 a_11) = a_11 a_12 + a_21 a_11 = {a_11, a_12 + a_21} = {a_11, d}. Finally, δ_0 is a Jordan homomorphism as in (iii) since δ_0(a_11) ∘ δ_0(a_11) = (a_11 e_12)e_21(a_11 e_12) + (e_21 a_11)e_12(e_21 a_11) = a_11² e_12 + e_21 a_11² = δ_0(a_11²). The map δ_0 is linear, and by definition a surjection; it is injective since a_11 e_12 + e_21 a_11 = 0 ⟹ a_11 = (a_11 e_12 + e_21 a_11)e_21 = 0. Thus δ_0 is an isomorphism.

The results in (2) can also be established by direct calculation, without recourse to Peirce Principles.

Exercise 16.3A* Even without checking that h_12 is symmetric in (1), show that if e is a left unit in a linear algebra D with involution ∗ then e must be a two-sided unit and must be symmetric.
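The special-algebra formulas used in the proof just given can be tested numerically in A = M_3(Z) with e_ij the ordinary matrix units. The sketch below (my own helper names) checks the norm relation 16.3(2)(iv), d ∘ d∗ = δ_0(q_11(d)), using Example 16.2's expressions for the product, involution, and δ_0:

```python
# Numeric sketch of 16.3(2)(iv) inside A = M_3(Z), with e_ij = E_ij:
#   a o b  = a12 e21 b12 + b21 e12 a21     (coordinate product)
#   a*     = e12 a21 e12 + e21 a12 e21     (coordinate involution)
#   delta0(a11) = a11 e12 + e21 a11        (diagonalization map)
# and q_11(d) = d12 d21.  Helper names are ad hoc.

def mul(x, y):
    return [[sum(x[i][k] * y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def add(x, y):
    return [[x[i][j] + y[i][j] for j in range(3)] for i in range(3)]

def E(i, j):
    m = [[0] * 3 for _ in range(3)]
    m[i][j] = 1
    return m

e12, e21 = E(0, 1), E(1, 0)
M = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]       # an arbitrary test matrix
d12 = mul(mul(E(0, 0), M), E(1, 1))          # the e1-A-e2 component of d
d21 = mul(mul(E(1, 1), M), E(0, 0))          # the e2-A-e1 component of d

def prod(a12, a21, b12, b21):                # coordinate product a o b
    return add(mul(mul(a12, e21), b12), mul(mul(b21, e12), a21))

def star(a12, a21):                          # coordinate involution
    return (mul(mul(e12, a21), e12), mul(mul(e21, a12), e21))

def delta0(a11):
    return add(mul(a11, e12), mul(e21, a11))

s12, s21 = star(d12, d21)
norm = prod(d12, d21, s12, s21)              # d o d*
assert norm == delta0(mul(d12, d21))         # = delta0(q_11(d)): norm relation
print(norm)
```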
Exercise 16.3B* Establish the results in 16.3(2) by direct calculation. (1) Show D_0 ⊆ H(D, ∗) directly from the definition. (2) Show the norm relation directly: d ∘ d∗ = δ_0(q_11(d)). (3) Establish the action of D_0 directly. (4) Establish directly that δ_0 is a homomorphism. (5) Verify that δ_0 is injective.
CHAPTER 17
Jacobson Coordinatization

We are now ready to establish the most powerful Jordan coordinatization theorem, the Jacobson Coordinatization Theorem for algebras with 3 or more supplementary connected orthogonal idempotents. It is important to note that no "nondegeneracy" assumptions are made (which means the theorem can be used to reduce the study of bimodules for such an algebra to the study of bimodules for its coordinate algebra, a topic we won't broach in this book).
1. Strong Coordinatization

As with the degree 2 coordinatization theorems (but with even more justification), the calculations will all be made for strongly connected algebras; then the magic wand of isotopy will convert a merely-connected algebra to a strongly-connected one. So we begin with the conceptually simpler strongly connected case.

Theorem 17.1 (Jacobson Strong Coordinatization). Any unital Jordan algebra J with a family H of hermitian n×n matrix units (n ≥ 3) is an algebra of n×n hermitian matrices: there is an isomorphism δ : J → H_n(D, ∗) taking the given family of matrix units to the standard family,

δ(h_ii) = 1[ii] = E_ii,  δ(h_ij) = 1[ij] = E_ij + E_ji.

Here (D, ∗) is an alternative algebra with nuclear involution, which must be associative if n ≥ 4. Indeed, if J has merely a supplementary family of orthogonal idempotents {e_i} with e_1 strongly connected to e_j by v_1j ∈ J_1j, then there is an isomorphism δ : J → H_n(D, ∗) with δ(e_i) = 1[ii], δ(v_1j) = 1[1j].

proof. By Hermitian Completion 15.3 any such family {e_i, v_1j} can be completed to a hermitian family {h_ij}, so we may assume from the start that we have a complete family H. By the Coordinate Algebra Theorem 16.3 the hermitian 3×3 subfamily gives rise to a unital coordinate ∗-algebra D with unit 1 = h_12; so far this is merely a unital linear algebra, but we can use it to build a unital linear matrix algebra H_n(D, ∗) as in the Hermitian Matrix Example 3.7, which is a direct sum
of off-diagonal subspaces D[ij] (i ≠ j) and diagonal subspaces D_0[ii]. (Once we know H_n(D) is Jordan, these will in fact be the Peirce spaces relative to the standard idempotents 1[ii].)

Step (1): The Linear Coordinatization. We want to introduce coordinates into the Jordan algebra; all the off-diagonal Peirce spaces J_ij will be coordinatized by D (a copy of J_12), and all diagonal spaces J_ii will be coordinatized by D_0 (a copy of J_11 in D under δ_0 = V_{h_12}). To do this we use the Hermitian Symmetries U_π of 15.5, which allow us to move freely from any diagonal (resp. off-diagonal) Peirce space to any other. Since J_12 and J_11 are "self-coordinatized", it is natural to define the Peirce coordinatization maps

for i ≠ j:  δ_ij : J_ij → D = J_12 is given by δ_ij = U_π for any π with π(i) = 1, π(j) = 2;
for i = j:  δ_ii : J_ii → D_0 = δ_0(J_11) is given by δ_ii = δ_0 ∘ U_π for any π with π(i) = 1.

By the Agreement Principle 15.5(3) these are independent of the particular choice of π (any one will do equally well, giving the same map into D or D_0). We glue these pieces together to obtain a global coordinatization map
δ : J → H_n(D, ∗)  via  δ(Σ_{i≤j} x_ij) = Σ_{i≤j} δ_ij(x_ij)[ij],
which is automatically a linear bijection because J is the direct sum of its Peirce spaces J_ij (by Peirce Decomposition 13.4), H_n(D) is the direct sum of the Peirce spaces D[ij] (i ≠ j) and D_0[ii] (by 3.6), and the U_π and δ_0 are bijections.

Step (2): The Homomorphism Conditions. We must prove that δ is a homomorphism of algebras. If we can do this, everything else will follow: we will have the isomorphism δ of algebras, and it will take h_ij to 1[ij] as in the theorem, since for i ≠ j by definition δ_ij(h_ij) = U_π(h_ij) = h_{π(i)π(j)} [by Index Permutation 15.5(2)] = h_12 [by choice of π] = 1 ∈ D, while for i = j by definition δ_ii(h_ii) = δ_0(U_π(h_ii)) = δ_0(h_{π(i)π(i)}) = δ_0(h_11) [by choice of π] = h_12 = 1 ∈ D, so in either case δ(h_ij) = δ_ij(h_ij)[ij] = 1[ij]. Moreover, the alternativity or associativity of D will follow from the Jordan Coordinates Theorem 14.1 once we know H_n(D) is Jordan (being isomorphic to J). While we have described δ in terms of Peirce spaces with indices i ≤ j, it is very inconvenient to restrict ourselves to such indices; it
is important that δ respects the natural symmetries J_ji = J_ij and d[ji] = d∗[ij]:

δ(x_ij) = δ_ij(x_ij)[ij] = δ_ji(x_ij)[ji],  i.e.  δ_ji(x_ij) = δ_ij(x_ij)∗.

This is trivial if i = j, since δ_ii maps into D_0 = H(D, ∗); if i ≠ j and π is any permutation with π(j) = 1, π(i) = 2, then τ = (12) ∘ π is a permutation with τ(j) = 2, τ(i) = 1, so by definition δ_ji = U_π and δ_ij = U_τ; hence δ_ji(x_ij)∗ = U_(12)(δ_ji(x_ij)) [by the definition 16.1(3) of the involution] = U_(12)U_π(x_ij) = U_τ(x_ij) [by the homomorphicity 15.5(1) of the Hermitian Symmetries] = δ_ij(x_ij).

Step (3): Peirce Homomorphism Conditions. Thus it all boils down to homomorphicity δ({x, y}) = {δ(x), δ(y)}. It suffices to prove this for the spanning set of Peirce elements x = x_ij, y = y_kl. Both A = J and A = H_n(D) have (I) the same symmetry in the indices, δ_ji(x_ij)[ji] = δ_ij(x_ij)[ij] [by Step (2) and d[ji] = d∗[ij] by 3.7(1)], (II) the same 4 basic products, A_ii² ⊆ A_ii, A_ij² ⊆ A_ii + A_jj, {A_ii, A_ij} ⊆ A_ij, {A_ij, A_jk} ⊆ A_ik for linked products (i, j, k distinct) [by the Basic Brace Rules 13.7(1) and the 4 Basic Brace Products 3.7(2)], and (III) the same basic orthogonality relations {A_ij, A_kl} = 0 if the indices cannot be linked [by Peirce Orthogonality 13.7(3) and Basic Brace Orthogonality 3.7(2)]. Thus homomorphicity reduces to 4 nontrivial Basic Product Conditions:

(i) δ_ii(a_ii²) = δ_ii(a_ii) ∘ δ_ii(a_ii);
(ii) δ_ii(q_ii(x_ij)) = δ_ij(x_ij) ∘ δ_ji(x_ij);
(iii) δ_ij({a_ii, x_ij}) = δ_ii(a_ii) ∘ δ_ij(x_ij);
(iv) δ_ij({y_ik, z_kj}) = δ_ik(y_ik) ∘ δ_kj(z_kj)
for distinct indices i, j, k.

Step (4): As Easy as 1, 2, 3. But everything can be obtained from J_11 or J_12 by a hermitian symmetry, so these reduce to the case of indices i = 1, j = 2, k = 3:

(i′) δ_11(a_11²) = δ_11(a_11) ∘ δ_11(a_11);
(ii′) δ_11(q_11(x_12)) = δ_12(x_12) ∘ δ_21(x_12);
(iii′) δ_12({a_11, x_12}) = δ_11(a_11) ∘ δ_12(x_12);
(iv′) δ_12({y_13, z_32}) = δ_13(y_13) ∘ δ_32(z_32),
which we already know by the Coordinate Algebra Theorem 16.3(2)(iii), (2)(iv), (2)(ii), (1)(ii), since by definition and symmetry δ_11 = δ_0, δ_12 = 1_{J_12}, δ_21 = U_(12) = ∗, δ_13 = U_(23), δ_32 = U_(13). To see the reduction in detail, choose any π so that π(1) = i, π(2) = j, π(3) = k; then by the Index Permutation Principle 15.5(2) we can find elements a_11, x_12, y_13, z_32 so that (since the hermitian symmetries are algebra automorphisms)

a_ii = U_π(a_11), a_ii² = U_π(a_11²), x_ij = U_π(x_12), q_ii(x_ij) = U_π(q_11(x_12)), {a_ii, x_ij} = U_π({a_11, x_12}), y_ik = U_π(y_13), z_kj = U_π(z_32), {y_ik, z_kj} = U_π({y_13, z_32}),

where by definition and the Agreement Principle 15.5(3)

δ_ii U_π = δ_11, δ_ij U_π = δ_12, δ_ji U_π = δ_21, δ_ik U_π = δ_13, δ_kj U_π = δ_32,

hence δ_ii(a_ii) = δ_11(a_11), δ_ii(a_ii²) = δ_11(a_11²), δ_ii(q_ii(x_ij)) = δ_11(q_11(x_12)), δ_ij(x_ij) = δ_12(x_12), δ_ji(x_ij) = δ_21(x_12), δ_ij({a_ii, x_ij}) = δ_12({a_11, x_12}), δ_ik(y_ik) = δ_13(y_13), δ_kj(z_kj) = δ_32(z_32), δ_ij({y_ik, z_kj}) = δ_12({y_13, z_32}), and both sides in (i)-(iv) reduce to (i′)-(iv′).
2. General Coordinatization

We obtain the general coordinatization by reduction to an isotope.

Theorem 17.2 (Jacobson Coordinatization). Any unital Jordan algebra J with a supplementary family of n ≥ 3 connected orthogonal idempotents {e_i} (it suffices if e_1 is connected to each e_j) is isomorphic to an algebra of n×n twisted hermitian matrices: there is an isomorphism φ : J → H_n(D, Γ) taking the given family of idempotents to the standard family, φ(e_i) = 1[ii] = E_ii. Here D is an alternative algebra with nuclear involution, which must be associative if n ≥ 4.

proof. Suppose e_1 is connected to e_j by v_1j ∈ J_1j; then by the definition 14.6 of connection and Off-Diagonal Invertibility 14.7(1), q_jj(v_1j) is invertible in J_jj. Set u_jj := q_jj(v_1j)⁻¹, u_11 := e_1 = u_11⁻¹, u := Σ_j u_jj. By the Diagonal Isotope Lemma 14.10 the isotope J̃ := J^(u) has supplementary orthogonal idempotents ẽ_j = u_jj⁻¹ yielding the same Peirce
decomposition J̃_ij = J_ij as the original {e_j}, but with v_1j strongly connecting ẽ_1 to ẽ_j. By the Strong Coordinatization Theorem we have an isomorphism δ̃ : J̃ → H_n(D) sending ẽ_j → 1[jj] and v_1j → 1[1j]. In particular, D is alternative with symmetric elements in the nucleus: D_0 ⊆ Nuc(D). Furthermore, by Jordan Isotope Symmetry 7.3(4) we can recover J as J = (J̃)^(u⁻²) = H_n(D)^(Γ) = H_n(D, Γ) via φ = L_Γ ∘ δ̃, since [by the Diagonal Isotope Lemma 14.10 again] u⁻² = Σ_j u_jj⁻² ∈ Σ_j J_jj corresponds to a diagonal (hence, by the above, nuclear) element Γ ∈ Σ_j D_0[jj]. Under this isomorphism e_j goes to an invertible idempotent in H_n(D, Γ)_jj, so it can only be 1[jj]. We can also verify this directly: by the Jordan Isotope inverse recipe 7.3(3), e_j has inverse U_{u_jj⁻¹}(e_j⁻¹) = u_jj⁻² in J_jj^(u) = J_jj^(u_jj), so their images are inverses in H_n(D); but δ̃(u_jj⁻²) = γ_j[jj], δ̃(e_j) = β_j[jj] for some γ_j, β_j implies γ_j, β_j are inverses in D, so φ(e_j) = L_Γ(δ̃(e_j)) = Γ(β_j[jj]) = γ_jβ_j[jj] = 1[jj] = E_jj as desired.

This completes our Peircian preliminaries, preparing us for the final push towards the structure of algebras with capacity.
Fourth Phase: Structure
In the final phase we will investigate the structure of Jordan algebras having capacity. We begin with basic facts about regularity and simple inner ideals, then we use these to show that a nondegenerate algebra with d.c.c. on inner ideals has a finite capacity. From this point on we study nondegenerate algebras with finite capacity in detail. Simple Peirce arguments show such an algebra is a direct sum of simple algebras with capacity, so we narrow our focus to simple algebras. We divide the simple algebras up according to their capacity, capacity 1 being an instantaneous case (division algebras). In capacity three or more we can use the Jacobson Coordinatization (augmented by the HK-O Theorem to tell us what coordinates are allowable) to show the algebras are matrix algebras over a division or composition algebra. In capacity two we must first prove the Capacity Two Theorem, which says that such an algebra satisfies either the Spin Peirce identity or the hermitian Peirce condition; once that is established we can use Spin or Hermitian Coordinatization to describe the simple possibilities.
CHAPTER 18
Von Neumann Regularity

1. vNr Pairing

In order to run at full capacity, we need a steady supply of idempotents. Before we can show in the next section that minimal inner ideals always lead to idempotents, we will have to gather some general observations about von Neumann regularity.

Definition 18.1 (vNr). An element x ∈ J is a von Neumann regular element or vNr¹ if x = U_x y for some y ∈ J (i.e. x ∈ U_x J), and x is a double vNr if x = U_x U_x y for some y ∈ J (i.e. x ∈ U_x U_x J). A Jordan algebra is vNr if all its elements are. Elements x, y are regularly paired if they are mutually paired with each other,

x = U_x y,  y = U_y x,

in which case we call (x, y) a vNr pair and write x ⋈ y.

Exercise 18.1 (1) Show that 0, all invertible elements, and all idempotents are always vNr, but that nonzero trivial elements never are: invertibles are paired with their inverses (u ⋈ u⁻¹), and idempotents are narcissistically paired with themselves (e ⋈ e). (2) Show that a vNr algebra is nondegenerate. (3) Show that if (x, y) is a vNr pair then so is (λx, λ⁻¹y) for any invertible λ ∈ Φ.
In associative algebras (especially matrix algebras), such a y paired with x is called a generalized inverse of x. While inverses are inherently monogamous, generalized inverses are bigamous by nature: a given x usually has several different generalized inverses. For example, while the matrix unit x = E_11 is happily paired with itself, it is also regularly

¹In the literature this is usually just called a regular element, but then, so are
lots of other things. To avoid the overworked term "regular", we identify it by its originator, John von Neumann. But it becomes too much of a mouthful to call it by its full name, so once we get friendly with it we call it by its nickname "vNr", pronounced vee-nurr as in schnitzel; there will be no confusion between "Johnny" and "Norbert" in its parentage.
paired with all the y = E_11 + λE_12 as λ ranges over Φ. Any one of x's generalized inverses is equally useful in solving equations: when x ∈ A is invertible the equation x(u) = m for a fixed vector m in a left A-module M is always solvable uniquely for u = x⁻¹(m) ∈ M, but if x has only a generalized inverse y then x(u) = m is solvable iff e(m) = m for the projection e = xy, in which case one solution (out of many) is u = y(m).

Lemma 18.2 (vNr Pairing). (1) Any vNr can be regularly paired: if x = U_x z then x ⋈ y for y = U_z x. (2) vNr-ity is independent of hats: x is a vNr in Ĵ iff it is a vNr in J; in the notation of the Principal Inners Proposition 5.9,

x is a vNr ⟺ x ∈ (x) = U_x J ⟺ x ∈ (x] = U_x Ĵ.

proof. (1) We work entirely within the subalgebra Φ[x, z] of J generated by the two elements x and z, so by the Shirshov-Cohn Principle 5.4 we may assume we are living in H(A, ∗) for an associative ∗-algebra A where xzx = x. The element y = zxz still satisfies xyx = x(zxz)x = (xzx)zx = xzx = x, and in addition satisfies yxy = (zxz)x(zxz) = z(xzx)(zxz) = z(xzx)z = zxz = y, so x ⋈ y. (2) If x is a vNr in the unital hull, x = U_x ẑ ∈ U_x Ĵ = (x] for some ẑ ∈ Ĵ, then by (1) x = U_x y is a vNr in the original algebra for y = U_ẑ x ∈ J, so x ∈ U_x J = (x). The converse is clear: x ∈ (x) ⟹ x ∈ (x].
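The associative shadow of the Pairing Lemma (where U_x z = xzx) is easy to watch in matrices. The sketch below, with matrices of my own choosing, starts from x = E_11 and a z with xzx = x, forms y = zxz as in the proof, and checks the pair relations; it also reproduces the generalized inverses E_11 + λE_12 mentioned above.

```python
# Sketch: given xzx = x in an associative algebra, y = zxz gives a
# regular pair (x, y), i.e. xyx = x and yxy = y (vNr Pairing 18.2(1)).

def mul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def triple(a, b, c):           # the associative product a b c
    return mul(mul(a, b), c)

x = [[1, 0], [0, 0]]           # the matrix unit E_11
lam = 5                        # any scalar works here
z = [[1, lam], [0, 7]]         # xzx = x only sees the (1,1) entry of z
assert triple(x, z, x) == x    # so x = U_x z: x is a vNr

y = triple(z, x, z)            # the regular partner built in the proof
assert triple(x, y, x) == x and triple(y, x, y) == y   # x and y are paired
print(y)                       # one of E_11's many generalized inverses
```

Here y comes out as E_11 + 5E_12, confirming the one-parameter family of generalized inverses of E_11.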
Exercise 18.2* (1) Prove the vNr Pairing Lemma using only the Fundamental Formula, with no reference to associative algebras. (2) Prove x is regular iff all principal inner ideals (x) = [x) = (x] = [x] agree.
A nice system (say, finite-dimensional semisimple) is always vNr, so every element can be regularly paired. In many ways a regular pair behaves like an idempotent. In the associative case if x = xyx then e = xy, f = yx are true idempotents, but in a special Jordan algebra J ⊆ A⁺ the elements e, f may no longer exist within J, only inside the associative envelope A. If an element is a double vNr, it is invertible in its own little part of the algebra.

Lemma 18.3 (Double vNr). If b is a double vNr then its principal inner ideal (b] = U_b Ĵ is a unital Peirce subalgebra J_2(e) in which b is invertible: if b = U_b U_b a then b is invertible in (b] = J_2(e) for the idempotent e = U_{b²} U_a b².

proof. First we work entirely within the subalgebra Φ[a, b], so by the Shirshov-Cohn Principle 5.4 we may assume we are living in
H(A, ∗).
In A we have b = b²ab² and e = (b²ab²)ab² = bab², hence be = b(bab²) = b²ab² = b (dually, or via ∗, eb = b), so b = ebe = U_e b, and e² = e(bab²) = bab² = e. Thus we deduce that in J we have the Jordan relation b = U_e b ∈ J_2(e) for e² = e. If we want results about principal inner ideals we cannot work just within Φ[a, b]; we must go back to J. By the nature of inner ideals, e = U_{b²}U_a b² ∈ U_{b²}J, b² = U_b 1̂ ∈ U_b Ĵ, b = U_e b ∈ U_e Ĵ imply (e] ⊆ (b²] ⊆ (b] ⊆ (e], so (b] = (b²] = (e] = J_2(e). By the Invertibility Criterion 6.2(1)(iii), b is invertible in (b] since its U-operator is surjective on the unital subalgebra (b]: U_b (b] = U_b U_b Ĵ = U_{b²} Ĵ = (b²] = (b].

Thus we see that little units e come, not from the stork, but from double vNrs.

Exercise 18.3A (1) As a lesson in humility, try to prove the vNr Pairing and Double vNr Lemmas strictly in Jordan terms, without using Shirshov-Cohn to work in an associative setting. You will come away with more respect for Shirshov and Cohn.
Exercise 18.3B (1) Show directly that for a vNr pair (x, y) the operators

E_2(x, y) := U_x U_y,  E_1(x, y) := V_{x,y} − 2U_xU_y,  E_0(x, y) := B_{x,y} = 1_J − V_{x,y} + U_xU_y

are supplementary orthogonal projection operators. (2) Alternately, show that the E_i are just the Peirce projections corresponding to the idempotent x in the homotope J^(y) (cf. Jordan Isotope Theorem 7.3(1)). (3) If x, y are a vNr pair in a special algebra J ⊆ A⁺, show e = xy, f = yx are idempotents in A, and in J there is a ghostly "Peirce decomposition"

J = eJf + ((1 − e)Jf + eJ(1 − f)) + (1 − e)J(1 − f),

and a similar one with e, f interchanged. Show ezf = E_2(x, y)z, (1 − e)zf + ez(1 − f) = E_1(x, y)z, (1 − e)z(1 − f) = E_0(x, y)z. Thus in the Jordan algebra itself, the pair of elements x, y acts in some sense like an idempotent. This rich supply of "idempotents" was one of the sparks leading to Loos' creation of Jordan pairs (overcoming the severe idempotent-deficiency of Jordan triple systems); in the Jordan pair (J, J) obtained by doubling a Jordan algebra or triple system J, the "pair idempotents" are precisely the regular pairs (x, y), leading to pairs of Peirce decompositions E_i(x, y), E_i(y, x), one on each copy of J.
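Exercise 18.3B(1) can be spot-checked in a special algebra, where U_x z = xzx and V_{x,y} z = xyz + zyx. The toy setup below (my own choice: x = E_11 regularly paired with y = E_11 + 2E_12) tests that E_2, E_1, E_0 are supplementary orthogonal projections on a few sample matrices:

```python
# Spot-check of Exercise 18.3B(1): for a vNr pair (x, y) in a special
# algebra, E2 = U_x U_y, E1 = V_{x,y} - 2 U_x U_y, E0 = Id - V_{x,y} + U_x U_y
# are supplementary orthogonal projections.  The setup is a toy example.

def mul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def add(a, b):
    return [[a[i][j] + b[i][j] for j in range(len(a))] for i in range(len(a))]

def smul(c, a):
    return [[c * e for e in row] for row in a]

x = [[1, 0], [0, 0]]        # E_11
y = [[1, 2], [0, 0]]        # E_11 + 2 E_12, regularly paired with x

def U(a, z):                # U_a z = a z a
    return mul(mul(a, z), a)

def V(z):                   # V_{x,y} z = xyz + zyx
    return add(mul(mul(x, y), z), mul(z, mul(y, x)))

def E2(z): return U(x, U(y, z))
def E1(z): return add(V(z), smul(-2, E2(z)))
def E0(z): return add(add(z, smul(-1, V(z))), E2(z))

assert U(x, y) == x and U(y, x) == y     # (x, y) really is a vNr pair
samples = [[[1, 2], [3, 4]], [[0, 1], [1, 0]], [[5, -1], [2, 7]]]
for z in samples:
    assert add(add(E2(z), E1(z)), E0(z)) == z           # supplementary
    for E in (E2, E1, E0):
        assert E(E(z)) == E(z)                           # idempotent
    assert E2(E1(z)) == E1(E2(z)) == [[0, 0], [0, 0]]    # orthogonal
print("Peirce projections check out on the samples")
```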
The Invertibility Criterion 6.2(1) assumes the algebra is unital. Now we can show that the existence of a surjective U-operator not only creates an inverse in the presence of a unit element, it even creates the unit element.
Lemma 18.4 (Surjective Unit). If a Jordan algebra J contains an element b with surjective U-operator, U_b J = J, then J has a unit element and b is invertible.

proof. Such a b is a double vNr, b ∈ J = U_b J = U_b(U_b J), so by the Double vNr Lemma 18.3, J = (b] is unital with b invertible.
Exercise 18.4 Strengthen the Jordan Isotope Theorem 7.3(2) to show that if J is an arbitrary Jordan algebra and u an element such that the homotope J^(u) is unital, then J was necessarily unital to begin with, and u was an invertible element.
2. Structural Pairing

When x and y are regularly paired, their principal inner ideals (x] and (y] are also paired up in a natural structure-preserving way, which is most clearly understood using the concept of a structural transformation. This is a more general notion than that of automorphism because it includes "multiplications" such as U-operators, and is often convenient for dealing with inner ideals. Taking the Fundamental Formula as our guide, we consider operators which interact nicely with the U-operators. (We have crossed paths with these several times before.)

Definition 18.5 (Structural Transformation). (1) A linear transformation T on J is weakly structural if it is structurally linked to some linear transformation T∗ on J in the sense that

U_{T(x)} = T U_x T∗ on J for all x ∈ J.

(2) T is structural if it is weakly structural and remains so on the unital hull; more precisely, T is structurally linked to some T∗ on J such that T, T∗ have extensions to Ĵ (by abuse of notation still denoted T, T∗) which are structurally linked on Ĵ:

U_{T(x̂)} = T U_{x̂} T∗ on Ĵ for all x̂ ∈ Ĵ.

(3) A structural pair (T, T∗) consists of a pair of structural transformations linked to each other,

U_{T(x̂)} = T U_{x̂} T∗,  U_{T∗(x̂)} = T∗ U_{x̂} T on Ĵ for all x̂ ∈ Ĵ;

equivalently, T is structurally linked to T∗ and T∗ is structurally linked to T = (T∗)∗.

For unital algebras all weakly structural T are automatically structural (with extension 0 ⊕ T to Ĵ = Φ1̂ ⊕ J). In non-unital Jordan algebras we usually demand that a transformation respect the square x² as well as the quadratic product U_x y, i.e. be structural on the unital
hull, so we usually work with structural rather than weakly structural transformations. For Jordan triples or pairs, or for non-invertible structural T in Jordan algebras, the "adjoint" T∗ need not be uniquely determined. The accepted way to avoid this indeterminacy is to explicitly include the parameter T∗ and deal always with the pair (T, T∗). Again, T∗ need not be structural in general, but since it is for all the important structural T, the accepted way to avoid this nonstructurality is to impose it by fiat, and hence we consider structural pairs instead of mere structural transformations.

Clearly any scalar multiplication by λ ∈ Φ determines a structural pair (λ1_J, λ1_J), and any automorphism T = φ determines a structural pair (T, T∗) = (φ, φ⁻¹) (i.e. φ∗ = φ⁻¹). The Fundamental Formula says all multiplications U_x determine a structural pair (U_x, U_x) (i.e. (U_x)∗ = U_x), but the V_x and L_x are usually not structural. It can be shown (cf. Exercise 5.9 and IV.1.3) that the Bergmann operators (B_{λ,x,y}, B_{λ,y,x}) form a structural pair for any λ ∈ Φ, x ∈ J, y ∈ Ĵ; in particular, the operator U_{1̂−x} = B_{1,x,1} from the unital hull is structural on J. If J = A⁺ for an associative algebra A, then all left and right associative multiplications L_x, R_x determine structural pairs (L_x, R_x), (R_x, L_x) (i.e. (L_x)∗ = R_x and (R_x)∗ = L_x), and so again U_x = L_x R_x is structural with (U_x)∗ = U_x, but V_x = L_x + R_x is usually not structural.

Lemma 18.6 (Structural Innerness). (1) If T is a structural transformation on J then its range T(J) is an inner ideal. (2) More generally, the image T(B) of any inner ideal B of J is again an inner ideal. (3) If T, S are structural, so is their composite T ∘ S, with (T ∘ S)∗ = S∗ ∘ T∗.

proof. The first assertion follows from the second, since J is trivially an inner ideal. For the second, if B is inner in J, U_B Ĵ ⊆ B, then so is the Φ-submodule T(B): U_{T(B)} Ĵ = T U_B T∗(Ĵ) [by structurality 18.5(2) on Ĵ] ⊆ T(U_B Ĵ) ⊆ T(B).
The third assertion is a direct calculation: U_{T(S(x))} = T U_{S(x)} T∗ = T(S U_x S∗)T∗ by 18.5(2) for T and S.

Exercise 18.6 (1) If T is weakly structural and B is a weak inner ideal, show T(B) is also weakly inner. Show that (Tx) ⊆ T((x)) for weakly structural T, and (Tx] ⊆ T((x]), [Tx] ⊆ T([x]) for structural T. (2) If T, S are weakly structural, show T ∘ S is too. (3) Define a weakly structural pair (T, T∗), and show that if (T, T∗), (S, S∗) are weakly structural pairs then so is their product (T ∘ S, S∗ ∘ T∗). (4) If T is weakly structural and both T, T∗ are invertible on J, show that T⁻¹
is also weakly structural with (T⁻¹)∗ = (T∗)⁻¹. If (T, T∗) is a weakly structural pair with both T, T∗ invertible, then the inverse pair (T⁻¹, (T∗)⁻¹) is also weakly structural. (5) Show that if (x, y) is a vNr pair then so is (T(x), S(y)) for any structural pairs (T, T∗), (S, S∗) such that T∗S(y) = y, S∗T(x) = x (in particular if S = (T∗)⁻¹ where both T, T∗ are invertible).
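The prototype of Definition 18.5 is the pair (U_a, U_a): in a special algebra the Fundamental Formula U_{U_a(x)} = U_a U_x U_a reads (axa)z(axa) = a(x(aza)x)a, an identity of associative products. A quick numeric sketch (sample matrices are my own choices):

```python
# Check that (U_a, U_a) behaves as a structural pair in a special algebra:
# the Fundamental Formula U_{U_a x} = U_a U_x U_a, i.e.
# (axa) z (axa) = a (x (a z a) x) a for all z.  Sample matrices are ad hoc.

def mul(p, q):
    n = len(p)
    return [[sum(p[i][k] * q[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def U(a, z):                  # U_a z = a z a
    return mul(mul(a, z), a)

a = [[1, 2], [0, 1]]
x = [[0, 1], [1, 3]]
ax_a = U(a, x)                # U_a x
for z in ([[1, 0], [0, 1]], [[2, -1], [4, 0]], [[0, 5], [7, 1]]):
    assert U(ax_a, z) == U(a, U(x, U(a, z)))   # the fundamental formula
print("U_{U_a x} = U_a U_x U_a verified on samples")
```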
Definition 18.7 (Structurally Paired). Two inner ideals B, D in J are structurally paired if there exist structural transformations T, S on J which are inverse bijections between B and D:

(1) T(B) ⊆ D, S(D) ⊆ B;  (2) T ∘ S = 1_D on D, S ∘ T = 1_B on B.

Structural pairing for inner ideals is a more general and more useful concept than being conjugate under a global isotopy of the algebra; it is a "local isotopy" condition, yet strong enough to preserve the lattice of inner ideals.

Lemma 18.8 (Structural Pairing). If inner ideals B, D of J are structurally paired, then there is an isomorphism between the lattice of inner ideals of J contained in B and the lattice of those contained in D.

proof. If B, D are structurally paired by T, S, then T|_B, S|_D are inverse bijections of modules which preserve inner ideals by Structural Innerness 18.6(2): if B_0 ⊆ B is inner then D_0 = T(B_0) is again an inner ideal of J and is contained in D = T(B). Thus the pairing sets up inverse order-preserving bijections between the lattices of inner ideals in B and those in D, in short, a lattice isomorphism.
Lemma 18.9 (Principal Pairing). If elements b, d are regularly paired in a Jordan algebra, then their principal inner ideals (b], (d] are structurally paired by U_d, U_b: U_d : (b] → (d] and U_b : (d] → (b] are inverse structural bijections preserving inner ideals.

proof. U_b is certainly structural on Ĵ and maps all of Ĵ into (b], dually for U_d, as in Structural Pairing 18.7(1). For U_b â ∈ U_b Ĵ = (b] we have U_b U_d (U_b â) = U_{U_b d} â [by the Fundamental Formula] = U_b â [by pairing], so U_b U_d = 1_{(b]}, and dually for U_d U_b, as in Structural Pairing 18.7(2).
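Principal Pairing can be watched in miniature with the regular pair b = E_11, d = E_11 + 2E_12 in 2×2 matrices (my own toy choice): U_b U_d acts as the identity on (b] = U_b J, and dually on (d].

```python
# Principal Pairing 18.9 in miniature: for the regular pair b = E_11,
# d = E_11 + 2 E_12, the maps U_d : (b] -> (d] and U_b : (d] -> (b]
# are inverse bijections: U_b U_d = 1 on (b], U_d U_b = 1 on (d].

def mul(p, q):
    n = len(p)
    return [[sum(p[i][k] * q[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def U(a, z):                              # U_a z = a z a
    return mul(mul(a, z), a)

b = [[1, 0], [0, 0]]
d = [[1, 2], [0, 0]]
assert U(b, d) == b and U(d, b) == d      # (b, d) is a vNr pair

for z in ([[1, 2], [3, 4]], [[0, -1], [5, 2]]):
    p = U(b, z)                           # a typical element of (b] = U_b J
    q = U(d, z)                           # a typical element of (d] = U_d J
    assert U(b, U(d, p)) == p             # U_b U_d = 1 on (b]
    assert U(d, U(b, q)) == q             # U_d U_b = 1 on (d]
print("principal inner ideals (b] and (d] are structurally paired")
```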
These will be important in the next chapter, where we will be Desperately Seeking, not Susan, but Idempotents. The good minimal inner ideals are those governed by a division idempotent. At the other extreme are the minimal inner ideals governed by trivial elements, and these are so badly behaved that we will pass a law against them (the
Nondegeneracy Law). The remaining minimal inner ideals B are nilpotent, so there is no hope of squeezing idempotents out of them. Structural Pairing shows that the image of a minimal inner ideal under a structural transformation T is again minimal, and luckily the nilpotent Bs run around with respectable T(B)s having idempotents. In this way the minimal inner ideals will provide us enough fuel (division idempotents) to run our structure theory.
3. Problems and Questions for Chapter 18 (1) (cf. Problem 7.2(1)) If T is structural and invertible on a unital algebra, show the adjoint T is uniquely determined as T = T 1UT (1) = UT 1 (1) T 1. (2) If J = A+ for A the unital associative algebra of upper triangular 2 2 matrices over , show T = UE11 is structural for any T = T + S as long as S (A) AE22 , so T is far from unique. [In upper triangular matrices E22 AE11 = 0:] Problem 2. Let J be a possibly-non-unital Jordan algebra (cf. Problem 7.2(2) for the case of a unital algebra). (1) If T is structurally linked to T and both T; T are invertible on J, show that T 1 is structurally linked to (T ) 1 . (2) If (T; T ) is an invertible structural pair (a structural pair with both T; T invertible), show that the inverse structural pair (T 1; (T ) 1 ) is also structural. (3) Show that the set of invertible structural pairs forms a subgroup S tr(J) End(J ) (End(J ) )op , called the structure group of J. Show that in a natural way this contains a copy of the automorphism group Aut(J) as well as all invertible Bergmann operators B;x;y . [There are usually many invertible Bergmann operators, corresponding to quasi-invertible pairs x; y, but there are invertible operators Uv only if J is unital, the situation of Problem 7.2.] Question 1. (1) If T is invertible and structurally linked to T , is T necessarily invertible? (2) If (T; T ) is structural and T is invertible, is T necessarily invertible too? Equivalently, in a unital algebra must T (1) or T 1(1) be invertible? Question 2*. In order for a weakly structural transformation T linked to T to become strong, T; T must extend to (Tb; Tb) on the unital hull, i.e. they must decide what to do to the element 1: Tb(1) = t^ = 1 t; Tb(1) = t^ = 1 t . (1) If is cancellable (e.g. if is a eld and 6= 0) show that = , so it is natural to impose this as a general condition. 
(2) Assuming α∗ = α, find conditions on α, t, t∗ which are necessary and sufficient for T structurally linked to T∗ to extend to T̂ structurally linked to T̂∗ on Ĵ. (3) If T, T∗ are structurally linked to each other, find necessary and sufficient conditions on the α = α∗, t, t∗ of (1) for (T̂, T̂∗) to be structurally linked to each other (i.e. (T̂, T̂∗) is a structural pair).

Question 3. vNr-ity is defined as x ∈ (x). Investigate what happens if we choose the other principal inner ideals. Is x vNr iff x ∈ (x]? Iff x ∈ [x]? What happens if we can generate the square: what is the connection between x being vNr and
x² being in (x)? Or in (x]? Or in [x]? Either prove your assertions or give counterexamples.

Question 4. Can you think of any conditions on the element x in J = A⁺ for an associative algebra A in order for V_x = L_x + R_x to be a structural transformation?
CHAPTER 19
Inner Simplicity

The main purpose of simple inner ideals is to provide us with simple idempotents, the fuel of classical structure theory.

Definition 19.1 (Simple Inner). A nonzero inner ideal B of J is minimal if there is no inner ideal C of J with 0 < C < B properly contained inside it. An inner ideal is simple if it is both minimal and non-trivial, U_B Ĵ ≠ 0. An element b of J is simple if (b] is a simple inner ideal of J containing b; since b ∈ (b] iff b ∈ (b) by vNr Pairing 18.2(2), this is the same as saying that b is a vNr and generates a simple inner ideal (b]. An idempotent e is a division idempotent if the Peirce subalgebra (e] = J₂(e) is a division algebra.¹

Lemma 19.2 (Simple Pairing). (1) If inner ideals B, D in J are structurally paired, then B is minimal (resp. simple) iff D is minimal (resp. simple). (2) If elements b, d are regularly paired, then b is simple iff d is simple.

proof. (1) Preservation of minimality follows from the lattice isomorphism in Structural Pairing 18.8. Simplicity will be preserved if triviality is, and B trivial ⟹ U_B Ĵ = 0 ⟹ U_{T(B)} Ĵ = T U_B T*(Ĵ) ⊆ T U_B(Ĵ) = 0 ⟹ T(B) trivial. (2) If b, d are regularly paired then by Principal Pairing 18.9 the inner ideals (b], (d] are structurally paired, so by (1) (b] is minimal ⟺ (d] is minimal, and since b, d are both automatically vNr if they are regularly paired, we see b is simple ⟺ d is simple.
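The definitions above can be made concrete in the simplest special Jordan algebra. The following sketch (my own illustration, not from the text) checks in J = M₂(ℚ)⁺, where U_x(y) = xyx, that the diagonal matrix unit E11 is a division idempotent: its principal inner ideal U_{E11}(J) is the field ℚ·E11.

```python
from fractions import Fraction

def mat_mul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def U(x, y):                 # in J = A^+ the quadratic operator is U_x(y) = xyx
    return mat_mul(mat_mul(x, y), x)

E11 = [[Fraction(1), Fraction(0)], [Fraction(0), Fraction(0)]]

# U_{E11} crushes any y to its (1,1)-entry times E11, so the principal
# inner ideal (E11] = U_{E11}(J) is Q*E11 -- a copy of the field Q:
y = [[Fraction(3), Fraction(5)], [Fraction(7), Fraction(11)]]
assert U(E11, y) == [[Fraction(3), Fraction(0)], [Fraction(0), Fraction(0)]]

# E11 is idempotent, and its 1-dimensional Peirce algebra (E11] is a
# division algebra: every nonzero a*E11 has inverse (1/a)*E11 inside it.
assert mat_mul(E11, E11) == E11
```

The same computation with any Eii in Mₙ(ℚ)⁺ shows each diagonal matrix unit is a division idempotent.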
1. Minimal inner ideals

The d.c.c. on inner ideals produces minimal inner ideals. We classify all minimal inner ideals, showing they are either simple (closely related to simple idempotents) or consist entirely of trivial elements.

¹In the literature this is often called completely primitive, but the terminology is
not particularly evocative. In general, one calls e a (something-or-other) idempotent if the subalgebra U_e J it governs is a (something-or-other) algebra. A good example is the notion of an abelian idempotent used in C*-algebras.
Theorem 19.3 (Minimal Inner Ideal). (1) The minimal inner ideals B in a Jordan algebra J over Φ are of the following Types:
Trivial: B = Φz for a trivial z (U_z Ĵ = 0);
Idempotent: B = (e] = U_e J for a simple idempotent e (in which case B is a division subalgebra);
Nilpotent: B = (b] = U_b J for a simple nilpotent b (then B is a trivial subalgebra, B² = U_B B = 0).
Any (b] of Nilpotent Type is structurally paired with a simple inner ideal (e] for a simple idempotent e.
(2) An idempotent e is simple iff it is a division idempotent.
(3) An inner ideal is simple iff it is of Idempotent Type for a division idempotent, or of Nilpotent Type structurally paired with a division idempotent.

proof. (1) Let B be a minimal inner ideal of J. We will break the proof into several small steps.
Step 1: The case where B contains a trivial element. If B contains a single nonzero trivial element z, then Φz is a nonzero inner ideal of J contained in B, so by minimality of B we must have B = Φz, and B of Trivial Type is entirely trivial. FROM NOW ON WE ASSUME THAT B CONTAINS NO TRIVIAL ELEMENTS; in particular U_B Ĵ ≠ 0 implies B is simple (not just minimal).

Step 2: Two properties. The absence of trivial elements guarantees that b ≠ 0 in B implies (b] = U_b Ĵ ≠ 0 is a nonzero inner ideal of J contained in B, so again by minimality we have (I) B = (b] for any b ≠ 0 in B; (II) all b ≠ 0 in B are simple.

Step 3: The case where b² ≠ 0 for all b ≠ 0 in B. Here b ∈ B = U_{b²}Ĵ [using (I) for b², because inner ideals are also subalgebras] = U_b U_b Ĵ = U_b B [by (I)] implies all U_b are surjective on B, so by the Surjective Unit Lemma 18.4 B = (e] is a unital subalgebra and all b ≠ 0 are invertible. Then B is a division algebra with e a simple idempotent, and B is of Idempotent Type.
Step 4: The case where some b² = 0 for b ≠ 0 in B. Here B² = U_B B = 0, since B² = (U_b Ĵ)² [by (I)] = U_b U_Ĵ b² [by the Fundamental Formula acting on 1] = 0, and once all b² = 0 we have all U_b B = 0 [if U_b B ≠ 0 then, since it is again an inner ideal of J contained in B by Structural Innerness 18.6(2), by minimality it must be all of B: B = U_b B = U_b(U_b B) = U_{b²}B = 0, a contradiction], and B is of Nilpotent Type.

Step 5: The Nilpotent Type in more detail. By (II) all b ≠ 0 in B are simple, hence regular, so by Regular Pairing 18.2 we have b regularly paired with some d (which by Simple Pairing 19.2(2) is itself simple, so (d] is again simple). If d² ≠ 0 then by the above (d] = (e] for a simple idempotent e, and we have established the final assertion of (1). So assume b² = d² = 0. Then (b] can divorce (d] and get structurally paired with (e], where e := U_{½1̂+d}(b) is a simple (but honest) idempotent in J regularly paired with b. Indeed, by the Shirshov-Cohn Principle 5.3 we can work inside [b, d] ⊆ H(A, *) ⊆ A⁺, where the algebra Â has element u := ½1̂ + d with
(III) bub = b;
(IV) bu²b = b
since bub = ½b² + bdb = 0 + b and bu²b = ¼b² + bdb + bd²b = bdb = b when b² = d² = 0. But then e² = (ubu)(ubu) = u(bu²b)u = ubu [using (IV)] = e is idempotent, and it is regularly paired with b since beb = bubub = bub = b [using (III) twice] and ebe = (ubu)b(ubu) = u(bubub)u = u(b)u [above] = e. This establishes the tripartite division (1) into types. For (2), the unital subalgebra (e] is a division algebra iff it contains no proper inner ideals by the Division Algebra Criterion 6.3, i.e. iff it is simple. For (3), by (1) a simple inner ideal has one of these two types; conversely, by (2) the Idempotent Types are simple, and by Simple Pairing 19.2 any inner ideal (of nilpotent type or not) which is structurally paired with a simple (e] is itself simple. Note that we have not claimed all Φz of Trivial Type are minimal, nor have we said intrinsically which nilpotent b are simple.

Exercise 19.3A. (1) Show that when Φ is a field, every trivial element z determines a trivial minimal inner ideal B = Φz = [z]. (2) Show that for a general ring of scalars Φ, B = Φz for a trivial z is minimal iff z⊥ = {α ∈ Φ | αz = 0} is a maximal ideal M ◁ Φ, so B = Φ′z is 1-dimensional over the field Φ′ = Φ/M. (3) Show that a
trivial element is never simple ((z) = (z] = 0), and an inner ideal B = Φz is never simple.

Exercise 19.3B*. (1) In the Nilpotent case b² = d² = 0, show b is regularly paired with c := U_{1̂+d}(b) satisfying c² = 2c, and that e := ½c is a simple idempotent with (e] = (c] structurally paired with (b]. (2) Give a proof of the Nilpotent case b² = d² = 0 strictly in terms of the Fundamental Formula, with no reference to associative algebras.
Example 19.4 (Simple Inner Examples). (1) In the Jordan matrix algebra J = Mₙ(Δ)⁺ or Hₙ(Δ, −) for any associative division algebra Δ with involution, the diagonal matrix unit b = Eii is a simple idempotent and the principal inner ideal B = U_{Eii}(J) = ΔEii or H(Δ)Eii is a simple inner ideal of Idempotent Type. The same holds in any diagonal isotope Hₙ(Δ, Γ), since the isotope Jii^{(γᵢuᵢᵢ)} = H(Δ)^{(γᵢ)}Eii is a division algebra iff H(Δ)Eii is, by Jordan Isotope 7.3(3). (2) In Mₙ(Δ)⁺ the off-diagonal matrix unit b = Eij is a simple nilpotent with principal inner ideal B = U_b(J) = ΔEij, simple of Nilpotent Type, paired naturally with the nilpotent d = Eji, hence also with the simple idempotents e = U_{½1+d}(b) = ½Eii + ¼Eij + Eji + ½Ejj and e′ = ½U_{1+d}(b) = ½(Eii + Eij + Eji + Ejj). If Δ contains an element ω of norm
ωω̄ = −1, then in Hₙ(Δ, −) the element b = Eii + ωEij + ω̄Eji − Ejj is a simple nilpotent paired with the simple idempotent c = Eii. (3) If J = A⁺ for A = Tₙ(Φ) the upper-triangular n×n matrices over Φ, then the off-diagonal matrix units Eij (i < j) are trivial elements, and the inner ideal B = ΦEij is a minimal-but-not-simple inner ideal of Trivial Type. (4) The reduced spin factor RedSpin(q) of Example 3.12 has by construction Jii = Φeᵢ ≅ Φ for i = 1, 2. Thus the eᵢ are simple idempotents (Jii is a division algebra) iff Φ is a field. (5) In the Jordan matrix algebra Hₙ(C, −) for any composition algebra C (division or split) over a field Φ, the diagonal matrix unit b = Eii is a simple idempotent and the principal inner ideal Jii = H(C)Eii = ΦEii is a simple inner ideal of Idempotent Type.
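The Nilpotent-Type pairing in part (2) of this example can be verified directly by hand; the following sketch (code and setup mine, not from the text) checks in M₂(ℚ)⁺, with i = 1 and j = 2, that b = E12 and d = E21 produce the honest idempotent e = U_{½1+d}(b) regularly paired with b.

```python
from fractions import Fraction

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def U(x, y):                 # U_x(y) = x y x in A^+
    return mul(mul(x, y), x)

h = Fraction(1, 2)
b = [[0, Fraction(1)], [0, 0]]          # b = E12, with b^2 = 0
d = [[0, 0], [Fraction(1), 0]]          # d = E21, regularly paired with b
u = [[h, 0], [Fraction(1), h]]          # u = (1/2)1 + d

e = U(u, b)                              # e = (1/2)E11 + (1/4)E12 + E21 + (1/2)E22
assert e == [[h, Fraction(1, 4)], [Fraction(1), h]]
assert mul(b, b) == [[0, 0], [0, 0]]     # b is nilpotent
assert mul(e, e) == e                    # e is an honest idempotent
assert U(b, e) == b                      # b e b = b
assert U(e, b) == e                      # e b e = e: regular pairing
```

So the formula of Step 5 of Theorem 19.3 really does trade the nilpotent b for an idempotent paired with it.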
2. Problems for Chapter 19

Problem 1. Let T be a structural transformation on a nondegenerate algebra J. (1) Show that for any simple inner ideal B, T(B) is either simple or zero. If b is simple, conclude T((b]) is either simple or zero. (2) Define the socle of a nondegenerate Jordan algebra to be the sum of all its simple inner ideals. Show that this is invariant under all structural transformations, hence is an ideal. A deep analysis of the socle by Loos showed it encompasses the "finite-capacity" parts of a nondegenerate
algebra, and provides a useful tool to streamline some of Zel'manov's structural arguments.
CHAPTER 20
Capacity

1. Capacity Existence

In this section we will show that nondegenerate Jordan algebras with d.c.c. on inner ideals necessarily have finite capacity. This subsumes the classification of algebras with d.c.c. under the (slightly) more general classification of algebras with finite capacity, the ultimate achievement of the Classical Theory. Recall the notion of capacity that we are concerned with in this Phase.

Definition 20.1 (Capacity). A Jordan algebra has capacity n if it has a unit element which is a sum of n pairwise orthogonal simple idempotents:

1 = e₁ + … + eₙ
(eᵢ simple orthogonal).
J has connected (resp. strongly connected) capacity n if it has such a decomposition where eᵢ, eⱼ are connected (resp. strongly connected) for each pair i ≠ j. An algebra has (finite) capacity if it has capacity n for some n (a priori there is no reason an algebra couldn't have two different capacities at the same time). Straight from the definition and the Minimal Inner Ideal fact 19.3(2) that e is simple iff it is a division idempotent, we have

Theorem 20.2 (Capacity 1). A Jordan algebra has capacity 1 iff it is a division algebra.

Thus Jordan division algebras have capacity 1 (and are nondegenerate). As we saw in the Simple Inner Examples 19.4, the reduced spin factors RedSpin(q) of quadratic forms q over fields have connected capacity 2 unless q = 0 and they collapse to Φe₁ ⊕ Φe₂ of disconnected capacity 2 (and these are nondegenerate when q is nondegenerate), and the Jordan matrix algebras Mₙ(Δ)⁺ and Hₙ(Δ, −) for an associative division algebra Δ, or the Hₙ(C, −) for a composition algebra C over a field, have connected capacity n (and are nondegenerate). The goal of this Phase is the converse: that any nondegenerate Jordan algebra with connected capacity is one of these types. Let's show that algebras with d.c.c. have capacity.
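As a concrete instance of the definition (my own sketch, not from the text): M₃(ℚ)⁺ has capacity 3, since its unit splits into the three pairwise orthogonal diagonal matrix units, each a division idempotent as in Example 19.4(1).

```python
# Sketch: 1 = E11 + E22 + E33 in M_3(Q)^+ is a sum of pairwise
# orthogonal idempotents (each simple, since U_{Eii}(J) = Q*Eii is a field).

def mul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def diag_unit(n, i):
    return [[1 if (r, c) == (i, i) else 0 for c in range(n)] for r in range(n)]

n = 3
es = [diag_unit(n, i) for i in range(n)]
identity = [[1 if r == c else 0 for c in range(n)] for r in range(n)]
zero = [[0] * n for _ in range(n)]

for i in range(n):
    assert mul(es[i], es[i]) == es[i]            # idempotent
    for j in range(n):
        if i != j:
            assert mul(es[i], es[j]) == zero     # orthogonal

total = [[sum(es[k][r][c] for k in range(n)) for c in range(n)] for r in range(n)]
assert total == identity                         # they sum to the unit
```

The same splitting works for any n, so Mₙ(ℚ)⁺ has capacity n.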
Theorem 20.3 (Capacity Existence). If J is nondegenerate with minimum condition on inner ideals, then J has a finite capacity.

proof. We tacitly assume J ≠ 0. Since J is nondegenerate, there are no simple inner ideals of trivial type, so the Simple Inner Ideal Theorem 19.3 guarantees there exist simple idempotents in J. Among all idempotents e = e₁ + … + eₙ for eᵢ simple orthogonal idempotents, we want to choose a maximal one and prove e = 1; we don't have maximum condition on inner ideals, so we can't choose e with maximal J₂(e), so instead we use the minimum condition to choose e with minimal Peirce inner ideal J₀(e). Here J₀(e) inherits nondegeneracy and minimum condition from J by Diagonal Inheritance 10.1, so IF J₀(e) ≠ 0 it too has a simple idempotent eₙ₊₁ (which by Diagonal Inheritance is simple in J as well). Now eₙ₊₁ ∈ J₀(e) is orthogonal to all eᵢ ∈ J₂(e) by the Peirce Orthogonality Rules 8.5, so ẽ = e + eₙ₊₁ = e₁ + … + eₙ + eₙ₊₁ is a bigger sum of simple orthogonal idempotents, with smaller Peirce space J₀(ẽ) = J₀₀ < J₀₀ ⊕ J₀,ₙ₊₁ ⊕ Jₙ₊₁,ₙ₊₁ = J₀(e) [applying Peirce Recovery 13.6(2) to E_ẽ = {e₁, …, eₙ, eₙ₊₁}] since eₙ₊₁ ∈ Jₙ₊₁,ₙ₊₁. But this contradicts minimality of J₀(e), so we MUST HAVE J₀(e) = 0. But then e = 1 by the Idempotent Unit Lemma 10.2.
2. Connected Capacity

Once J has a capacity, we can forget about the minimum condition: we discard it as soon as we have sucked out its capacity. To analyze algebras with capacity, we first break them up into connected components. To insure the components are mutually orthogonal we need to know what connection and non-connection amount to for simple idempotents, and here the key is what invertibility and non-invertibility amount to.

Theorem 20.4 (Simple Off-Diagonal Non-Invertibility Criterion). Let e₁, e₂ be orthogonal simple idempotents in J. Then the following conditions on an element x₁₂ of the off-diagonal Peirce space J₁₂ are equivalent:
(i) x₁₂ is not invertible in J₂(e₁ + e₂) = J₁₁ + J₁₂ + J₂₂;
(iia) U_{x₁₂}J₁₁ = 0; (iib) U_{x₁₂}J₂₂ = 0;
(iiia) q₂₂(x₁₂) = 0; (iiib) q₁₁(x₁₂) = 0;
(iv) x₁₂² = 0.

proof. The reader will have to draw a diagram for this play - watch our moves! We first show (iv) ⟺ (iiia) AND (iiib), and (i) ⟺ (iiia) OR (iiib). Then we establish a "left hook" (iiib) ⟹ (iia) ⟹ (iiia), so dually we have a "right hook" (iiia) ⟹ (iib) ⟹ (iiib), and
putting the two together gives a "cycle" (iia) ⟺ (iib) ⟺ (iiia) ⟺ (iiib), showing all are equivalent [knocking out the distinction between "and" and "or" for (iiia), (iiib)], hence also equivalent to (i) and to (iv). From x₁₂² = q₂₂(x₁₂) + q₁₁(x₁₂) we immediately see "and", and to see "or" note x₁₂ not invertible ⟺ x₁₂² = q₂₂(x₁₂) + q₁₁(x₁₂) not invertible [by Invertible Powers 6.7(2)] ⟺ q₂₂(x₁₂) or q₁₁(x₁₂) not invertible [by Diagonal Invertibility 14.5] ⟺ q₂₂(x₁₂) = 0 or q₁₁(x₁₂) = 0 [since the Jᵢᵢ are division algebras]. For the left hook, (iiib) passes to (iia) by (U_{x₁₂}J₁₁)² = U_{x₁₂}U_{J₁₁}x₁₂² [by the Fundamental Formula] = U_{x₁₂}U_{J₁₁}q₁₁(x₁₂) [by Peirce Orthogonality] = 0 [by (iiib)] ⟹ U_{x₁₂}J₁₁ = 0 [by the fact that the division algebra J₂₂ has no nilpotent elements], and (iia) hands off to (iiia) by the definition 14.4(1) of q. Is that footwork deft, or what!

The crucial fact about connectivity between simple idempotents is that it is an all-or-nothing affair.

Lemma 20.5 (Simple Connection). If e₁, e₂ are orthogonal simple idempotents in a nondegenerate Jordan algebra J, then either e₁, e₂ are connected or else J₁₂ = U_{e₁,e₂}J = 0.

proof. e₁, e₂ not connected ⟺ no x₁₂ ∈ J₁₂ is invertible ⟺ qᵢᵢ(J₁₂) = U_{J₁₂}Jᵢᵢ = 0 for i = 1, 2 [by the Criterion 20.4(iiab), (iiiab)] ⟺ U_{J₁₂}(J₁₁ + J₁₂ + J₂₂) = 0 [by the qᵢⱼ-Rules 14.4(2)] ⟺ U_{J₁₂}J = 0 [by Peirce Orthogonality 13.7(3)] ⟺ J₁₂ = 0 [by nondegeneracy].
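The dichotomy of the Simple Connection Lemma can be watched in the smallest example; the sketch below (my own, not from the text) exhibits an invertible connecting element v = E12 + E21 in the off-diagonal Peirce space of M₂(ℚ)⁺, while a diagonal element has no J₁₂-component at all.

```python
# Sketch: e1 = E11, e2 = E22 in J = M_2(Q)^+; the off-diagonal Peirce
# space J_12 is the image of U_{e1,e2}(z) = e1*z*e2 + e2*z*e1, and
# v = E12 + E21 lies in it with v^2 = 1, so e1, e2 are connected.

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def add(a, b):
    return [[a[i][j] + b[i][j] for j in range(2)] for i in range(2)]

def U2(x, y, z):                    # U_{x,y}(z) = x z y + y z x
    return add(mul(mul(x, z), y), mul(mul(y, z), x))

e1 = [[1, 0], [0, 0]]
e2 = [[0, 0], [0, 1]]
v = [[0, 1], [1, 0]]                # v = E12 + E21

assert U2(e1, e2, v) == v           # v lies in the Peirce space J_12
assert mul(v, v) == [[1, 0], [0, 1]]  # v^2 = 1: invertible, so connected

# a diagonal element has zero J_12-component; in the diagonal subalgebra
# Q (+) Q the whole space J_12 vanishes and the capacity is disconnected:
x = [[5, 0], [0, 7]]
assert U2(e1, e2, x) == [[0, 0], [0, 0]]
```

This is exactly the alternative of Lemma 20.5: either J₁₂ holds an invertible element, or J₁₂ = 0.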
From this the decomposition into a direct sum of connected components falls right into our laps.

Theorem 20.6 (Connected Capacity). A nondegenerate Jordan algebra with capacity splits into the direct sum J = J₁ ⊕ … ⊕ Jₙ of a finite number of nondegenerate ideals Jₖ having connected capacity.

proof. Let 1 = Σ_{i∈I} eᵢ for simple eᵢ as in the Capacity Definition 20.1. By Connection Equivalence 14.8, connectivity of the eᵢ defines an equivalence relation on the index set I: i ∼ j iff i = j or eᵢ, eⱼ are connected. If we break I into connectivity classes K and let the f_K = Σ_{i∈K} eᵢ be the class sums, then for K ≠ L we have U_{f_K,f_L}J = Σ_{k∈K,ℓ∈L} U_{e_k,e_ℓ}J = 0 by Simple Connection 20.5, so that J = U₁J = U_{Σ_K f_K}J = ⊕_K U_{f_K}J = ⊕_K J_K is [by Peirce Orthogonality 8.5] an algebra-direct sum of Peirce subalgebras J_K = U_{f_K}J (which are then automatically ideals) having unit f_K with connected capacity.

An easy argument using Peirce decompositions shows that
Theorem 20.7 (Simple Capacity). A nondegenerate algebra with capacity is simple iff its capacity is connected.

proof. If J is simple there can only be one summand in the Connected Capacity Theorem 20.6, so the capacity must be connected. To see connection implies simplicity, suppose K were a nonzero ideal in J where any two eᵢ, eⱼ are connected by some vᵢⱼ. Then by Peirce Inheritance 13.4(2) we would have the Peirce decomposition K = ⊕ᵢⱼ Kᵢⱼ where some Kᵢⱼ ≠ 0 is nonzero. We can assume some diagonal Kᵢᵢ ≠ 0, since if an off-diagonal Kᵢⱼ ≠ 0 then also Kᵢᵢ = K ∩ Jᵢᵢ ⊇ qᵢᵢ(Kᵢⱼ, Jᵢⱼ) ≠ 0 by q-Nondegeneracy 14.4(4). But Kᵢᵢ = K ∩ Jᵢᵢ is an ideal in the division algebra Jᵢᵢ, so Kᵢᵢ = Jᵢᵢ, and for all j ≠ i also K ⊇ U_{vᵢⱼ}Kᵢᵢ = U_{vᵢⱼ}Jᵢᵢ = Jⱼⱼ [by connectivity], so K ⊇ Σₖ Jₖₖ ∋ Σₖ eₖ = 1 and K = J.
3. Problems for Chapter 20

In the above proofs we freely availed ourselves of multiple Peirce decompositions. Your assignment, should you choose to accept it, is to exercise mathematical frugality (avoiding the more cumbersome notation and details of the multiple case) and derive everything from properties of single Peirce decompositions.

Problem 1*. (Capacity Existence) Show, in the notation of the proof of 20.3, (1) J₀(ẽ) ⊆ J₀(e), (2) the inclusion is proper.

Problem 2*. (Simple Connection) Show U_{e₁,e₂}(J) = 0 for disconnected e₁, e₂ as in 20.5. (1) Show it suffices to assume from the start that J is unital with 1 = e₁ + e₂, where e = e₁ is an idempotent with 1 − e = e₂, J₁(e) = U_{e₁,e₂}(J). (2) Show J₂(e), J₀(e) are division algebras, in particular have no nilpotent elements. (3) Show q₂(x₁) = 0 ⟺ q₀(x₁) = 0. (4) If the quadratic form q₂(J₁) vanishes identically, show J₁ vanishes. (5) On the other hand, if q₂(J₁) ≠ 0 show there exists an invertible v.

Problem 3*. (Connected Capacity) Show connection is transitive as in 20.6. Assume vᵢⱼ connects eᵢ, eⱼ and vⱼₖ connects eⱼ, eₖ. (1) Show v = {vᵢⱼ, vⱼₖ} ∈ J′₁(e) for J′ = J₂(eᵢ + eₖ), e = eᵢ, 1′ − e = eₖ. (2) Show both J′₂(e), J′₀(e) are division algebras, so v will be invertible if v² = q₂(v) + q₀(v) is invertible, hence if q₂(v) (dually q₀(v)) is nonzero. (3) Show q₂(v) = E₂U_{vᵢⱼ}(aⱼ) where (vⱼₖ)² = aⱼ + aₖ for nonzero aᵣ ∈ J₂(eᵣ). (4) Conclude q₂(v) is nonzero because U_{vᵢⱼ} is invertible on the nonzero element aⱼ ∈ J₂(eᵢ + eⱼ).

Problem 4*. (Simple Capacity) To see connection implies simplicity in 20.7, suppose K were a proper ideal. (1) Show U_{eᵢ}(K) = 0 for each i. (2) Then show U_{eᵢ,eⱼ}(K) = 0 for each pair i ≠ j. (3) Use (1) and (2) to show K = U₁(K) = 0.
CHAPTER 21
Herstein-Kleinfeld-Osborn Theorem

In classifying Jordan algebras with nondegenerate capacity we will need information about the algebras that arise as coordinates. Recall that the Hermitian and Jacobson Coordinatization Theorems have, as their punch line, that the Jordan algebra in question looks like Hₙ(D, −) for an alternative algebra D with involution whose hermitian elements are all nuclear. Here D coordinatizes the off-diagonal Peirce spaces and H(D) coordinatizes the diagonal spaces. The condition that Hₙ(D, −) be nondegenerate with capacity n relative to the standard Eᵢᵢ reduces by Jordan Matrix Nondegeneracy 5.14 to the condition that the coordinates be nondegenerate and the diagonal coordinates be a division algebra. In this chapter we will give a precise characterization of these algebras. Along the way we need to become better acquainted with alternative algebras, gathering a few additional facts about their nuclei and centers, and about additional identities they satisfy.
1. Alternative Algebras Revisited

Alternative algebras get their name from the fact that the associator is an alternating function of its arguments. Alternative algebras satisfy the Moufang Laws, which play as important a role in the theory as do the alternative laws themselves.

Lemma 21.1 (Moufang). (1) An alternative algebra is automatically flexible, [x, y, x] = 0, so alternativity is the condition that the associator [x, y, z] be an alternating multilinear function of its arguments, in the sense that it vanishes if any two of its variables are equal (equivalently, in the presence of ½, the condition that the associator is a skew-symmetric function of its arguments). (2) Alternative algebras automatically satisfy the Left, Right, and
Middle Moufang Laws

x(y(xz)) = (xyx)z,   ((zx)y)x = z(xyx),   (xy)(zx) = x(yz)x,
as well as the Left Bumping Formula [x, y, zx] = x[y, z, x]. (3) Any linear algebra satisfies the Teichmuller Identity

[xy, z, w] − [x, yz, w] + [x, y, zw] = [x, y, z]w + x[y, z, w].

proof. (1) Flexibility comes from [x, y, x] = [x, y, x] + [y, x, x] [by right alternativity] = 0 [by linearized left alternativity]. Thus we are entitled to write xyx without ambiguity in place of (xy)x and x(yx). (2) Left Moufang is equivalent to Left Bumping since x(z(xy)) − (xzx)y = (x((zx)y) − x[z, x, y]) − ([x, zx, y] + x((zx)y)) = −x[y, z, x] + [x, y, zx] (alternativity), which in turn is equivalent to Middle Moufang since −x[y, z, x] + [x, y, zx] = −x(yz)x + x(y(zx)) + (xy)(zx) − x(y(zx)) = −x(yz)x + (xy)(zx). Dually for Right Moufang, so all the Moufang Laws are equivalent to Left Bumping. To see that Left Bumping holds, note [x, y, x²] = 0 [L_x commutes with R_x by flexibility, hence also with R_{x²} = R_x² by right alternativity], so linearizing x → x, z in this shows [by repeated use of alternativity] [x, y, zx] = −[x, y, xz] − [z, y, x²] = +[x, xz, y] − [x², z, y] = (x²z)y − x((xz)y) − (x²z)y + x²(zy) = −x((xz)y) + x(x(zy)) = −x[x, z, y] = x[y, z, x]. (3) Teichmuller can be verified by direct calculation to hold in all nonassociative algebras: [xy, z, w] − [x, yz, w] + [x, y, zw] = ((xy)z)w − (xy)(zw) − (x(yz))w + x((yz)w) + (xy)(zw) − x(y(zw)) = ((xy)z)w − (x(yz))w + x((yz)w) − x(y(zw)) = [x, y, z]w + x[y, z, w].
Exercise 21.1*. Alternative implies Moufang, but in characteristic 2 situations the left alternative law is not strong enough to guarantee left Moufang (resulting in
some pathological [= non-alternative] simple left alternative algebras); the proper notion is that of a left Moufang algebra, satisfying both the left alternative and left Moufang laws, and the satisfying theorem is that every simple left Moufang algebra is actually alternative. (1) Show that left alternativity [x, x, y] = 0 plus flexibility [x, y, x] = 0 implies alternativity. (2) Show that left alternativity implies the Left Moufang Law (x(yx))z = x(y(xz)) if ½ ∈ Φ [you must be careful not to assume flexibility, so carefully distinguish x(yx) from (xy)x!]
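The Moufang Laws and the Teichmuller Identity can be spot-checked numerically. The sketch below (my own, not from the text) builds the octonions by Cayley-Dickson doubling over the rationals, using the convention (a,b)(c,d) = (ac − conj(d)b, da + b·conj(c)) with conj(a,b) = (conj(a), −b), and verifies the identities of Lemma 21.1 on concrete elements of this genuinely nonassociative algebra.

```python
import random

def conj(x):
    if len(x) == 1:
        return x[:]
    n = len(x) // 2
    return conj(x[:n]) + [-t for t in x[n:]]

def mul(x, y):                          # Cayley-Dickson product
    if len(x) == 1:
        return [x[0] * y[0]]
    n = len(x) // 2
    a, b, c, d = x[:n], x[n:], y[:n], y[n:]
    return ([p - q for p, q in zip(mul(a, c), mul(conj(d), b))] +
            [p + q for p, q in zip(mul(d, a), mul(b, conj(c)))])

def assoc(p, q, r):                     # [p,q,r] = (pq)r - p(qr)
    return [s - t for s, t in zip(mul(mul(p, q), r), mul(p, mul(q, r)))]

random.seed(0)
rnd = lambda: [random.randint(-5, 5) for _ in range(8)]
x, y, z, w = rnd(), rnd(), rnd(), rnd()
zero = [0] * 8
e = lambda i: [1 if j == i else 0 for j in range(8)]

assert assoc(e(1), e(2), e(4)) != zero      # genuinely nonassociative
assert assoc(x, x, y) == zero               # left alternative
assert assoc(y, x, x) == zero               # right alternative
assert assoc(x, y, x) == zero               # flexible

xyx = mul(mul(x, y), x)
assert mul(x, mul(y, mul(x, z))) == mul(xyx, z)                # Left Moufang
assert mul(mul(mul(z, x), y), x) == mul(z, xyx)                # Right Moufang
assert mul(mul(x, y), mul(z, x)) == mul(mul(x, mul(y, z)), x)  # Middle Moufang
assert assoc(x, y, mul(z, x)) == mul(x, assoc(y, z, x))        # Left Bumping

# Teichmuller holds in ANY linear algebra:
lhs = [p - q + r for p, q, r in zip(assoc(mul(x, y), z, w),
                                    assoc(x, mul(y, z), w),
                                    assoc(x, y, mul(z, w)))]
rhs = [p + q for p, q in zip(mul(assoc(x, y, z), w),
                             mul(x, assoc(y, z, w)))]
assert lhs == rhs
```

Of course a few random instances prove nothing, but they are a cheap sanity check on the signs in the proof above.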
Emil Artin gave a beautiful characterization of alternative algebras: they are precisely the nonassociative algebras in which every subalgebra generated by two elements is associative (just as the power-associative algebras are precisely the nonassociative algebras in which every subalgebra generated by one element is associative). This gives us an analogue of Macdonald's Theorem: a polynomial in two variables will vanish in all alternative algebras if it vanishes in all associative algebras.

Theorem 21.2 (Artin). A linear algebra A is alternative iff every subalgebra generated by two elements is associative. In particular, alternative algebras satisfy every identity in two variables that is satisfied by all associative algebras.

proof. The proof requires only the alternative laws, Middle Moufang, and Teichmuller. Certainly, if the subalgebra [x, y] ⊆ A is associative, then in particular we have [x, x, y] = [y, x, x] = 0 for all x, y ∈ A, therefore A is alternative by the Alternative Definition 2.1. The nontrivial part is the converse: if A is alternative then every [x, y] is associative, equivalently [p(x, y), q(x, y), r(x, y)] = 0 for all nonassociative polynomials p, q, r in two variables. By linearity it suffices to prove this for monomials p, q, r, and we may induct on the total degree n = ∂p + ∂q + ∂r (using the obvious notion of degree for a nonassociative monomial). The result is vacuous for n = 0, 1, 2 (if we are willing to work in the category of unital algebras where [x, y] is understood to contain the "trivial" monomial 1 of degree 0, the result is no longer vacuous, but it is still trivial, since if the three degrees sum to at most 2 then at least one factor must be 1, and any associator with term 1 vanishes). The result is also trivial if n = 3: if none of the factors is 1, all must be degree-1 monomials x or y, and two of the three must agree, hence the associator vanishes by the alternative and flexible laws. Assume now we have proven the result for monomials of total degree < n, and consider p, q, r of total degree = n. The induction hypothesis that lower associators vanish shows, by the usual argument (Generalized Associative Law), that we can rearrange parentheses at will inside any monomial of degree < n. We may assume p, q, r are all of degree
≥ 1 (the associator vanishes trivially if any term has degree 0, i.e.
is 1); then each of p, q, r has degree < n, and therefore we can rearrange parentheses at will inside p, q, r. We have a general Principle: the associator [p, q, r] vanishes if one term begins with a variable x or y, and another term ends in that same variable. To see this, by rearranging (using alternation of the associator, and symmetry in x and y) we may suppose that p begins with x and r ends with x; then [p, q, r] = [xp′, q, r′x] (for p′, r′ ∈ Â monomials of degree ≥ 0, using the induction hypothesis to rearrange parentheses in p, r) = ((xp′)q)(r′x) − (xp′)(q(r′x)) = (x(p′q))(r′x) − (xp′)((qr′)x) [again using the induction associativity hypothesis] = x((p′q)r′)x − x(p′(qr′))x [by the Middle Moufang Law] = x[p′, q, r′]x = 0 [by the induction hypothesis again]. Suppose [p, q, r] ≠ 0, where by symmetry we may assume that p = xp′ begins with x. Then by the Principle neither q nor r can end in x; they must both end in y. But then by the Principle q (resp. r) ending in y forces r (resp. q) to begin with x. But then [p, q, r] = [xp′, q, r] = [x, p′q, r] − [x, p′, qr] + [x, p′, q]r + x[p′, q, r] [by Teichmuller], where the last two associators vanish because they are of lower degree < n, and the first two vanish by the Principle since in both of them the term x ends with x (!!!) while in the first r, and in the second qr, begins with x. Thus [p, q, r] ≠ 0 leads to a contradiction, and we have established the induction step for degree n.
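Artin's Theorem can also be spot-checked numerically; the sketch below (my own, not from the text) builds the octonions by Cayley-Dickson doubling over the rationals and confirms that associators of monomials in just two octonions vanish, even though the ambient algebra is nonassociative.

```python
def conj(x):
    if len(x) == 1:
        return x[:]
    n = len(x) // 2
    return conj(x[:n]) + [-t for t in x[n:]]

def mul(x, y):                          # Cayley-Dickson product
    if len(x) == 1:
        return [x[0] * y[0]]
    n = len(x) // 2
    a, b, c, d = x[:n], x[n:], y[:n], y[n:]
    return ([p - q for p, q in zip(mul(a, c), mul(conj(d), b))] +
            [p + q for p, q in zip(mul(d, a), mul(b, conj(c)))])

def assoc(p, q, r):
    return [s - t for s, t in zip(mul(mul(p, q), r), mul(p, mul(q, r)))]

x = [1, 2, 0, -1, 3, 0, 1, -2]
y = [0, 1, -1, 2, 0, 2, -1, 1]
zero = [0] * 8

# monomials p, q, r in x and y, parenthesized at random:
p = mul(x, mul(y, x))        # x(yx)
q = mul(mul(y, y), x)        # (yy)x
r = mul(x, y)                # xy
assert assoc(p, q, r) == zero
assert assoc(x, p, mul(q, r)) == zero

# ...while the algebra itself is not associative:
e = lambda i: [1 if j == i else 0 for j in range(8)]
assert assoc(e(1), e(2), e(4)) != zero
```

Any other pair of generators and any other parenthesizations would do equally well, exactly as the theorem predicts.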
2. A Brief Tour of the Alternative Nucleus

Alternative algebras come in only two basic flavors, associative (where the nucleus is the whole algebra) and octonion (where the nucleus reduces to the center). This incipient dichotomy is already indicated by basic properties of the nucleus.

Lemma 21.3 (Alternative Nucleus). (1) If D is alternative with nucleus N = Nuc(D), then for any elements n ∈ N; x, y, z ∈ D we have the nuclear slipping formula
(i) n[x, y, z] = [nx, y, z] = [xn, y, z].
The nucleus is commutator-closed:
(ii) [N, D] ⊆ N.
We have relations
(iii) [N, x]x ⊆ N, [N, x][x, y, z] = 0 for any x, y, z ∈ D.
Nuclear commutators absorb D and kill associators,
(iv) [N, N]D ⊆ N, [N, N][D, D, D] = 0.
(2) Indeed, any nuclear subalgebra closed under commutators automatically satisfies these last two inclusions, and if it contains an invertible commutator then the nuclear subalgebra must be the whole algebra:
(i) if B satisfies [B, D] ⊆ B ⊆ N, then it also has:
(ii) [B, B]D + 2[B, x]x ⊆ B for all x ∈ D;
(iii) if b ∈ [B, B] has b⁻¹ ∈ D then D = B is associative.
Notice that all of these properties are true in associative algebras (where [x, y, z] = 0 and N = D) and in octonion algebras (where N = Cent(D), [N, D] = 0).

proof. (1) A nuclear element slips in and out of parentheses, so n[x, y, z] = [nx, y, z] and [xn, y, z] = [x, ny, z] in any linear algebra; what is different about alternative algebras is that nuclear elements can hop because of the alternating nature of the associator: n[x, y, z] = −n[y, x, z] = −[ny, x, z] = [x, ny, z] = [xn, y, z] as in (i). Subtracting the last two terms in (i) gives 0 = [[n, x], y, z], so [[n, x], D, D] = 0, which implies [n, x] ∈ N in an alternative algebra, establishing (ii). Since [n, x]x ∈ N ⟺ [[n, x]x, D, D] = 0 ⟺ [n, x][x, D, D] = 0, by slipping (i) for the nuclear [n, x] of (ii), we see the two parts of (iii) are equivalent; the second holds since [n, x][x, y, z] = (nx)[y, z, x] − (xn)[y, z, x] [by nuclearity of n and alternativity] = n[x, y, zx] − x[ny, z, x] [bump and slip] = [x, ny, zx] − [x, ny, zx] [hop and bump] = 0. Linearizing x → x, m for m ∈ N in the two parts of (iii) gives (iv): [n, m]x ∈ −[n, x]m + N ⊆ N m + N ⊆ N (using (ii)) for the first, and [n, m][x, y, z] = −[n, x][m, y, z] = 0 (for nuclear m) for the second. (2) The first part of 1(iv) is also a special case of the first part of 2(ii), which follows easily via nuclearity of b from [b, c]x = bcx − cxb + cxb − cbx = [b, cx] − c[b, x] ∈ B + BB ⊆ B (by 2(i)). Then the implication 2(iii) holds for such commutators b since B ⊇ bD ⊇ b(b⁻²(bD)) = (bb⁻²b)D [by Left Moufang] = 1D = D, and then B ⊆ N together with D = B forces N = B = D, which is associative.
Finally, for the second part of 2(ii) (with the annoying factor 2) we have 2[b, x]x = ([b, x]x + x[b, x]) + ([b, x]x − x[b, x]) = (bxx − xbx + xbx − xxb) + [[b, x], x] = [b, x²] + [[b, x], x] ∈ [B, D] + [[B, D], D] ⊆ B [using nuclearity and applying 2(i) thrice].
Exercise 21.3A. Enlarge on the Alternative Nucleus Lemma 21.3. (1) Show that in alternative algebras we have unrestricted nuclear slipping n[x, y, z] = [nx, y, z] =
[xn, y, z] = [x, ny, z] = [x, yn, z] = [x, y, nz] = [x, y, zn] = [x, y, z]n; in particular the nucleus commutes with associators. (2) Show that nuclear commutators annihilate associators, [n, m][x, y, z] = 0.

Exercise 21.3B*. Enlarging on 21.3(2)(iii), if all nonzero elements of H are invertible, show that any skew element b̄ = −b in B = ⟨H⟩ (the subalgebra generated by hermitian elements) which is invertible in D is actually invertible in B itself.
Exercise 21.3C. (1) Show [x, yz] = [x, y]z + y[x, z] − [x, y, z] + [y, x, z] − [y, z, x] in any linear algebra A; conclude that ad_x : y → [x, y] is a derivation of A for any nuclear element x. (2) In an alternative algebra show ad_x is a derivation iff 3x ∈ Nuc(A) is nuclear; in particular ad_x is always a derivation in alternative algebras of characteristic 3, and in algebras without 3-torsion ad_x is only a derivation for nuclear x. (3) Show ad_x is a derivation of the Jordan structure A⁺ for any alternative algebra: [x, y²] = {y, [x, y]}, [x, yzy] = y[x, z]y + {y, z, [x, y]} (where {x, y, z} := x(yz) + z(yx) = (xy)z + (zy)x is the linearization of xyx).
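The associative/octonion dichotomy of the nucleus can be watched on standard bases; the sketch below (my own, with the caveat that testing only basis units proves the nucleus contains their span, though for these standard bases that span is in fact the whole nucleus) contrasts the quaternions, where every basis unit is nuclear, with the octonions, where only the unit survives.

```python
def conj(x):
    if len(x) == 1:
        return x[:]
    n = len(x) // 2
    return conj(x[:n]) + [-t for t in x[n:]]

def mul(x, y):                          # Cayley-Dickson product
    if len(x) == 1:
        return [x[0] * y[0]]
    n = len(x) // 2
    a, b, c, d = x[:n], x[n:], y[:n], y[n:]
    return ([p - q for p, q in zip(mul(a, c), mul(conj(d), b))] +
            [p + q for p, q in zip(mul(d, a), mul(b, conj(c)))])

def assoc(p, q, r):
    return [s - t for s, t in zip(mul(mul(p, q), r), mul(p, mul(q, r)))]

def nuclear_basis_indices(dim):
    e = lambda i: [1 if j == i else 0 for j in range(dim)]
    zero = [0] * dim
    return [i for i in range(dim)
            if all(assoc(e(i), e(j), e(k)) == zero
                   for j in range(dim) for k in range(dim))]

assert nuclear_basis_indices(4) == [0, 1, 2, 3]   # quaternions: Nuc = D
assert nuclear_basis_indices(8) == [0]            # octonions: Nuc = center
```

This is exactly the "two flavors" dichotomy opening this section: associative means Nuc(D) = D, octonion means Nuc(D) = Cent(D).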
Next we look inside the nucleus at the center. In a unital algebra we can always replace the current ring of scalars by the center, since multiplication is still bilinear over the center. A unital *-algebra becomes an algebra over the *-center H(Cent(D), *) (in order that the involution remain linear).

Theorem 21.4 (Central Involution). Every unital alternative algebra with central involution has norms n(x) := xx̄ and traces t(x) := x + x̄ in the *-center satisfying
(1) t(x̄) = t(x), n(x̄) = n(x) (Bar Invariance)
(2) t(xy) = n(x, ȳ) = t(yx) (Trace Commutativity)
(3) t((xy)z) = t(x(yz)) (Trace Associativity)
(4) n(xy) = n(x)n(y) (Norm Composition)
(5) x̄(xy) = n(x)y = (yx)x̄ (Kirmse Identity)
(6) x² − t(x)x + n(x)1 = 0 (Degree 2 Identity).
Every alternative algebra with central involution and no nil ideals of index 2 (i.e. ideals where every element z satisfies z² = 0) is a composition algebra with standard involution x̄ = t(x)1 − x over its *-center.

proof. Note that since x + x̄ ∈ Cent(D) ⊆ Nuc(D) we can remove a bar anywhere in a commutator or associator for the price of a minus sign, e.g. [ā, x] = −[a, x], [ā, b, x] = −[a, b, x]. For Bar Invariance (1), the involution trivially leaves the trace invariant, t(x̄) = t(x), and it also leaves the norm invariant: n(x̄) = x̄x = (t(x) − x)x = t(x)x − x² =
x(t(x) − x) = xx̄ = n(x). Along the way we established the Degree 2 Identity (6). The linearization of the norm is a twist of the trace bilinear form: n(x, ȳ) = xy + ȳx̄, and ȳx̄ is just the bar of xy [since bar is an involution], so n(x, ȳ) = t(xy) [by definition]. Trace Commutativity (2) t([x, y]) = 0 and Trace Associativity (3) t([x, y, z]) = 0 follow since commutators and associators are skew: the bar of [x, y] is [ȳ, x̄] [by the involution] = [y, x] [removing two bars] = −[x, y], and the bar of [x, y, z] is −[z̄, ȳ, x̄] [by the involution] = +[z, y, x] [removing 3 bars] = −[x, y, z] [by alternativity]. From Artin's Theorem 21.2 we know that x, y, x̄, ȳ all lie in a unital associative subalgebra B generated by the two elements x, y over the *-center. Inside the associative subalgebra B, the calculations for Norm Composition (4) and Kirmse (5) are trivial (compare the blood, sweat, and tears of Exercise 21.4 using only alternativity!): n(xy)1 = (xy)(ȳx̄) = x(yȳ)x̄ = x(n(y)1)x̄ = n(y)xx̄ = n(x)n(y)1 and n(x)y = (x̄x)y = x̄(xy), dually on the right. Finally, we check that the absence of ideals nil of index 2 guarantees n is nondegenerate. The radical of n is an ideal: it is clearly a submodule by definition of quadratic form, and it is a left (dually right) ideal since linearizing x → x, 1 and y → y, z in n(xy) = n(x)n(y) gives n(xy, z) + n(xz, y) = t(x)n(y, z), hence for radical z we see n(xz, y) = 0 and xz ∈ Rad(n). Its elements z all have t(z) = n(z, 1) = 0, n(z) = ½n(z, z) = 0, and so square to 0 by the Degree 2 Identity (6). If there are no nil ideals of index 2 then the radical Rad(n) vanishes, and n is nondegenerate permitting composition, so we have a composition algebra.

Exercise 21.4*. Establish Norm Composition and the Kirmse Identity of the Central Involution Theorem (4), (5) without invoking Artin's Theorem, expanding n(xy)1 − n(x)n(y)1 for norm composition and n(x)y − x̄(xy) for Kirmse.
In studying the octonions and the eight-square problem, J. Kirmse in 1924 considered linear $*$-algebras satisfying $\bar{x}(xy) = n(x)y = (yx)\bar{x}$ for a quadratic form $n$. In 1930 Artin and his student Max Zorn (he of the famous lemma) dropped the involution condition and invented the category of alternative algebras.
3. Herstein-Kleinfeld-Osborn Theorem Now we have enough information about alternative algebras to embark on a proof of our main result.
Theorem 21.5 (Herstein-Kleinfeld-Osborn). $D$ is a nondegenerate unital alternative $*$-algebra all of whose nonzero hermitian elements are invertible in the nucleus iff it is either (I) the exchange algebra $\mathcal{E}x(\Delta)$ of a noncommutative associative division algebra $\Delta$; (II) an associative division $*$-algebra with noncentral involution; (III) a composition algebra of dimension $1, 2, 4$, or $8$ over its $*$-center $\Omega$ with standard central involution: the field $\Omega$ itself, a quadratic extension, a quaternion algebra, or an octonion algebra. In particular, $D$ is automatically $*$-simple, and is associative unless it is an octonion algebra. We can list the possibilities in another way: $D$ is either (I$'$) the exchange algebra $\mathcal{E}x(\Delta)$ of an associative division algebra $\Delta$; (II$'$) an associative division $*$-algebra $(\Delta,*)$; (III$'$) a split quaternion algebra of dimension $4$ over its center $\Omega$, equivalently $2\times 2$ matrices $M_2(\Omega)$ under the symplectic involution $x^{sp} := sx^{tr}s^{-1}$ for symplectic $s = \left(\begin{smallmatrix}0&1\\-1&0\end{smallmatrix}\right)$; (IV$'$) an octonion algebra.

proof. We will break the proof into bite-sized steps, each one of some intrinsic interest. Step 1: Reduction to the $*$-simple case. We claim $D$ is AUTOMATICALLY $*$-simple¹: a proper $*$-ideal $I \lhd D$ would consist entirely of trivial elements, so by nondegeneracy no such ideal exists. Indeed, $I \ne D$ contains no invertible elements, hence no nonzero hermitian elements; yet for $z \in I$, $x \in D$ the elements $z + \bar{z}$, $z\bar{z}$, $x\bar{z} + z\bar{x}$ are hermitian and still in $I$, so they vanish: $I$ is skew [$\bar{z} = -z$], nil [$z^2 = -z\bar{z} = 0$], and $*$-commutes with $D$ [$xz = z\bar{x}$]. But then all $z \in I$ are trivial: $z(xz) = z(z\bar{x}) = z^2\bar{x}$ [using alternativity] $= 0$. FROM NOW ON WE ASSUME $D$ IS $*$-SIMPLE. Step 2: Reduction to the simple case over a field. We can easily take care of the case where $D$ is $*$-simple but NOT simple: this is exactly Type I by a general fact about $*$-simple linear algebras (alternative or not).
¹We only want to classify simple algebras with capacity, so we could have assumed from the start that $D$ is $*$-simple. Instead we have shown that nondegeneracy implies $*$-simplicity in coordinates where hermitians are invertible, analogous to the Simple Capacity result 20.7 that connection implies simplicity in Jordan algebras with capacity.
Lemma 21.6 ($*$-Simple). (1) If $(A,*)$ is a $*$-simple linear algebra, then either (i) $A$ is simple, or (ii) $A$ is the direct sum $A = B \oplus B^*$ of an ideal $B$ and its star, in which case $(A,*) \cong \mathcal{E}x(B) = (B \boxplus B^{op}, ex)$ with exchange involution $(b,c)^{ex} = (c,b)$ for a simple algebra $B$. In the latter case $H(A,*) \cong B$, so if the hermitian elements are all nuclear then $B$ is associative, and if the nonzero hermitian elements are all invertible then $B$ is a division algebra. (2) If $A$ is a simple (resp. $*$-simple) unital algebra, then its center (resp. $*$-center) is a field.

proof. (1) If $A$ is NOT simple it has a proper ideal $B$; since $B \cap B^* < A$ and $B + B^* > 0$ and both are $*$-ideals, by $*$-simplicity they must be $0$ and $A$ respectively: $B \cap B^* = 0$, $B + B^* = A$, so $(A,*) = (B \oplus B^*, *) \cong (B \boxplus B^{op}, ex)$ since $B^* \cong B^{op}$ for any involution, and $* = ex$ since $(x \oplus y^*)^* = y \oplus x^*$. Here $B$ is simple as an algebra in its own right: if $K \lhd B$ were a proper ideal, then $K \oplus K^* \lhd A$ would be a proper $*$-ideal, contrary to $*$-simplicity. If the (resp. nonzero) hermitian elements of $(B \boxplus B^{op}, ex)$ are all nuclear (resp. invertible), then all $(b,b)$ being nuclear (resp. invertible) in $B \boxplus B^{op}$ implies all $b$ are nuclear (resp. all $b \ne 0$ are invertible) in $B$, i.e. $B$ is associative (resp. a division algebra). (2) The center $\mathrm{Cent}(A) \subseteq \mathrm{Nuc}(A)$ and $*$-center $H(\mathrm{Cent}(A),*)$ are unital commutative associative algebras. They are fields because every nonzero element $c$ has an inverse $d$ in the center or $*$-center. Indeed, if $c \ne 0$ then $I := cA = Ac \ne 0$ [by unitality] is an ideal (resp. $*$-ideal) of $A$: it is a left (dually right) ideal since $AI = A(Ac) = (AA)c \subseteq Ac = I$ [by nuclearity of $c$], and a $*$-ideal if $c^* = c$ since $I^* = (Ac)^* = c^*A^* = cA = I$. Then by simplicity (resp. $*$-simplicity) $I = A$ contains $1$, so $cd = 1$ for some $d$.
Then $d = c^{-1}$ is central too: any $a \in A$ can be written as $a = ca' = a'c$, so $[d,x] = d(cx') - (x'c)d = (dc)x' - x'(cd)$ [by nuclearity of $c$] $= 1x' - x'1 = 0$; while from Teichmuller 2.1(3), $[d,x,y] = [d,cx',y] = [dc,x',y] + [d,c,x'y] - d[c,x',y] - [d,c,x']y = 0$ since $c$ and $dc = 1$ are nuclear, and similarly $[x,d,y] = 0$ writing $y = cy'$. If $c^* = c$ then its inverse is also hermitian: $d^* - d = (d^* - d)(cd) = (d^*c - dc)d = (1^* - 1)d = 0$.
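Lemma 21.6's exchange algebra $\mathcal{E}x(B) = B \boxplus B^{op}$ is concrete enough to sketch in code. The toy model below (with $B = M_2(\mathbb{Z})$, chosen only to have something noncommutative to multiply) checks that the exchange map is a period-2 anti-automorphism and that its hermitian elements are exactly the diagonal pairs $(b,b)$ — which is why nonzero hermitians being invertible forces $B$ to be a division algebra:

```python
import numpy as np

# Ex(B) = B ⊞ B^op: pairs (b, c), second slot multiplied in the OPPOSITE
# order, with exchange involution (b, c)* = (c, b).
def mul(x, y):
    return (x[0] @ y[0], y[1] @ x[1])   # (b,c)(b',c') = (bb', c'c)

def star(x):
    return (x[1], x[0])                  # exchange involution

def eq(x, y):
    return (x[0] == y[0]).all() and (x[1] == y[1]).all()

rng = np.random.default_rng(0)
pair = lambda: (rng.integers(-4, 5, (2, 2)), rng.integers(-4, 5, (2, 2)))
for _ in range(100):
    x, y = pair(), pair()
    assert eq(star(mul(x, y)), mul(star(y), star(x)))  # * is an anti-automorphism
    assert eq(star(star(x)), x)                        # * has period 2
    b = rng.integers(-4, 5, (2, 2))
    assert eq(star((b, b)), (b, b))                    # (b, b) is hermitian
print("exchange involution checks pass")
```

Note that $(b,b)$ is invertible in $B \boxplus B^{op}$ precisely when $b$ is invertible in $B$, matching the lemma's final assertion.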
We may assume $\Delta$ is noncommutative in Type I, since if $\Delta = \Omega$ is a field then $\Delta \boxplus \Delta^{op} = \Omega \boxplus \Omega$ is merely a split 2-dimensional composition algebra over its $*$-center $\Omega(1 \oplus 1)$, and can be included under Type III. FROM NOW ON WE ASSUME $D$ IS SIMPLE with its center and $*$-center both fields. Step 3: Reduction to the case of non-central $H$.
Next we take care of the case of a central involution, where the hermitian elements all lie in the center (hence in the $*$-center): this is exactly Type III. Indeed, by simplicity $D$ contains no ideals nil of index 2, so by the Central Involution Theorem 21.4, (3)
If $D$ has central involution it is a composition algebra over its $*$-center with standard involution. By Hurwitz' Theorem 2.13 we know the composition algebras over a field are of dimension $1, 2, 4$, or $8$ as in Type III. FROM NOW ON WE ASSUME THE HERMITIAN ELEMENTS ARE NOT CENTRAL. Step 4: Reduction to the associative, hermitian-generated case. Since hermitian elements are nuclear by the hypothesis of a nuclear involution, they are central as soon as they commute with $D$, so by our non-centrality assumption we must have $[H,D] \ne 0$. We will go partway to establishing Type II by showing that $D$ is associative and hermitian-generated: (4) if $D$ is simple with $[H,D] \ne 0$, then $D = \langle H\rangle$ is associative and hermitian-generated. Let $B \subseteq N$ denote the subalgebra generated by $H$. Note that if $x^* = \varepsilon x$, $y^* = \delta y$, then $[x,y]^* = [y^*,x^*] = \varepsilon\delta[y,x] = -\varepsilon\delta[x,y]$, so if we denote the skew elements under the involution by $S$ we have $[H,H] \subseteq S$, $[H,S] \subseteq H$. We automatically have $[H,H] \subseteq B$ since $B$ is a subalgebra, and $[H,S] \subseteq H \subseteq B$, so $[H,D] \subseteq B$; thus commutation by $D$ maps $H$ into $B$, hence maps the nuclear subalgebra $B$ generated by $H$ back into $B$, and $B$ itself is invariant under commutation: $[B,D] \subseteq B$. Thus $B$ satisfies the hypotheses of Alternative Nucleus 21.3(2)(i), so by 21.3(2)(iii) we will have $D = B = N$ associative if there is an invertible $B$-commutator $b \in [B,B]$. We now exhibit such a $b$. By non-centrality, $0 \ne [H,D] = [H,S+H]$, so either (Case 1) $[H,S] \ne 0$, or (Case 2) $[H,S] = 0$ but $[H,H] \ne 0$. In Case 1 some $b = [h,s]$ is a nonzero hermitian element (hence by hypothesis invertible), which is also a $B$-commutator: we have $bs = [h,s]s \in [B,s]s \subseteq B$ using 21.3(2)(ii), so $s = b^{-1}(bs)$ [by nuclearity of $b \in H$] lies in $b^{-1}B \subseteq B$ [since $b^{-1} \in H \subseteq B$], and $b = [h,s] \in [B,B]$. In Case 2, $0 \ne [H,H]$ contains a nonzero $B$-commutator $b \in [H,H] \subseteq [B,B] \cap S$, which we claim is invertible in $D$ since it lies in the center of $D$, which is a field by simplicity of $D$ and the $*$-Simple Lemma 21.6(2).
To see centrality, $[b,H] \subseteq [S,H] = 0$ by the hypothesis of
Case 2, and $[b,S] \subseteq [[H,H],S] \subseteq [[H,S],H] + [H,[H,S]]$ [by Jacobi's Identity and nuclearity of $H$] $= 0$ by our hypothesis again, so $b$ commutes with $S + H = D$ and therefore is central (recall that $b$ is nuclear by hypothesis). Thus we have our invertible $B$-commutator $b$, so by Alternative Nucleus 21.3(2)(iii) $D = B$ is associative and hermitian-generated as claimed in (4). FROM NOW ON ASSUME $D$ IS SIMPLE, ASSOCIATIVE, HERMITIAN-GENERATED, WITH NONCENTRAL INVOLUTION. Step 5: Reduction to the division algebra case. Let $Z$ be the set of non-invertible elements. We want these rowdy elements to vanish, so that everybody but $0$ is invertible and we have an associative division algebra with non-central involution as in Type II. We know that $Z$ misses the hermitian elements, $H \cap Z = 0$, since nonzero hermitian elements are invertible; we want to show it kills hermitian elements in the sense that (5)
$z \in Z \implies \bar{z}Hz = zH\bar{z} = 0.$
To see this, remember that in $H$ each element must be invertible or die, and if $0 \ne z \in Z$ then $\bar{z}z, z\bar{z} \in H$ can't both be invertible [$\bar{z}za = 1 = bz\bar{z} \implies z$ has right and left inverses $\implies z$ is invertible, contrary to $z \in Z$], and as soon as one dies the other does too [if $\bar{z}z = 0$ then $z\bar{z}$ kills $z \ne 0$: $(z\bar{z})z = z(\bar{z}z) = z0 = 0$, so it is thereby barred from invertibility, and so is condemned to die]. Once $z\bar{z} = \bar{z}z = 0$ the hermitian elements $zh\bar{z}, \bar{z}hz \in H$ both kill $z \ne 0$ [e.g. $(zh\bar{z})z = zh(\bar{z}z) = 0$] and likewise join their comrades in death, establishing (5). We claim that once $z$ has tasted the blood of hermitian elements it will go on to kill all elements, (6)
$z \in Z \implies zD\bar{z} = 0.$
For $d \in D$ the element $w := zd\bar{z}$ is skew, $\bar{w} = z\bar{d}\bar{z} = z(d + \bar{d})\bar{z} - zd\bar{z} = 0 - w$ by (5), and for $h \in H$ the commutator $h' = [h,w] \in [H,S] \subseteq H$ is not invertible since it kills $z$: $h'z = [h(zd\bar{z}) - (zd\bar{z})h]z = hzd(\bar{z}z) - zd(\bar{z}hz) = 0$ by (5). Therefore $h'$ is condemned to death, and $h' = 0$ means $w$ commutes with $H$. BECAUSE $D$ IS HERMITIAN-GENERATED this implies $w$ lies in the center, yet is nilpotent since $w^2 = zd(\bar{z}z)d\bar{z} = 0$ by (5). The center is a field, so like $H$ its elements must invert or die; therefore each $w = zd\bar{z}$ must die, establishing (6). But in any simple (even prime) associative algebra $xDy = 0$ implies $x =$
$0$ or $y = 0$ [if $x,y \ne 0$ then the three nonzero ideals $I_x := \widehat{D}x\widehat{D}$, $D$, $I_y := \widehat{D}y\widehat{D}$ have zero product $I_xDI_y = \widehat{D}(xDy)\widehat{D} = 0$, contrary to primeness], so we see $z = 0$. Thus $Z = 0$, and we return to a tranquil life in a division algebra with non-central involution. Step 6: Converse. For the converse, clearly Types I-III are nondegenerate with invertible hermitians: the hermitians in Type I are all $\delta \oplus \delta \in \Delta \oplus \Delta^{op}$ and thus form an algebra isomorphic to $\Delta$, in Type II they lie in the division algebra $\Delta$, and in Type III they equal $\Omega 1$, so in all cases the nonzero hermitians are invertible and nuclear. Step 7: Alternate List. Finally, we check the alternate list, where we divvy up the composition algebras in Type III and parcel them out among the other Types. The composition algebras of dimension 1 and the non-split ones of dimension 2, 4 are associative division algebras and so go into Type II$'$ (where of course we now drop the assumption of non-central involution). A split composition algebra of dimension 2 is just $\Omega \boxplus \Omega^{op}$, so it can be included in Type I$'$ (where of course we must drop the assumption of noncommutativity). A split composition algebra of dimension 4 is a split quaternion algebra, isomorphic to $M_2(\Omega)$, and the standard involution corresponds to the involution on matrix units given by $\bar{E}_{ii} = E_{jj}$, $\bar{E}_{ij} = -E_{ij}$ ($i = 1,2$; $j = 3-i$), which is just the symplectic involution $x^{sp} = sx^{tr}s^{-1}$. This particular situation gets its own Type III$'$. The lone nonassociative algebra, the octonion algebra of dimension 8 (split or not), gets its own Type IV$'$.
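Step 7's split quaternion algebra can also be checked by direct computation: $M_2$ under the symplectic involution $x^{sp} = sx^{tr}s^{-1}$ really does carry the standard composition-algebra structure, with trace form the matrix trace and norm form the determinant (a sketch over the integers):

```python
import numpy as np

s = np.array([[0, 1], [-1, 0]])
s_inv = np.array([[0, -1], [1, 0]])            # s^{-1} = -s
sp = lambda x: s @ x.T @ s_inv                 # symplectic involution
det = lambda x: x[0, 0]*x[1, 1] - x[0, 1]*x[1, 0]
I = np.eye(2, dtype=int)

# Matrix-unit formulas from the text: E-bar_ii = E_jj, E-bar_ij = -E_ij.
E11, E12 = np.array([[1, 0], [0, 0]]), np.array([[0, 1], [0, 0]])
E21, E22 = np.array([[0, 0], [1, 0]]), np.array([[0, 0], [0, 1]])
assert (sp(E11) == E22).all() and (sp(E22) == E11).all()
assert (sp(E12) == -E12).all() and (sp(E21) == -E21).all()

rng = np.random.default_rng(1)
for _ in range(100):
    x = rng.integers(-9, 10, (2, 2))
    y = rng.integers(-9, 10, (2, 2))
    assert (x + sp(x) == np.trace(x)*I).all()  # trace form t(x) = tr(x)
    assert (x @ sp(x) == det(x)*I).all()       # norm form  n(x) = det(x)
    assert det(x @ y) == det(x)*det(y)         # norm composition
print("split quaternion checks pass")
```

This is exactly the degree-2 phenomenon of Theorem 21.4 in matrix clothing: $x^{sp}$ is the classical adjugate, so $xx^{sp} = \det(x)1$ is the Cayley-Hamilton identity for $2\times 2$ matrices.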
In the Hermitian Coordinatization of Chapter 12 a large role was played by the fact that the coordinate algebra was hermitian-generated (generated as an associative algebra by its hermitian elements). The above proof can be modified (cf. Problem 2 below) to show that every division algebra with non-central involution is automatically hermitian-generated, so that the crucial issue is whether the involution is central or not.
4. Problems and Questions for Chapter 21

In the $*$-Simple Lemma 21.6 we saw that the center of a simple unital algebra is a field (and similarly the $*$-center in the $*$-simple case). Non-unital algebras may not have a center at all, but they always have a centroid [cf. 1.11].

Problem 1*. (1) Recalling the Centroid Theorem 1.12, define the concept of $*$-centroid for a linear $*$-algebra, and show that the $*$-centroid of any $*$-simple linear algebra is a field, and the $*$-centroid of a $*$-prime algebra (one with no nonzero orthogonal $*$-ideals $I, K$ with $IK = 0$)
is always a domain acting faithfully. Give an example of a $*$-simple algebra whose centroid is not a field. (2) Show that a linear $*$-algebra is $*$-semiprime (no nonzero trivial $*$-ideals $I = I^*$, $II = 0$) iff it is semiprime (no nonzero trivial ideals $II = 0$); in this case show that its centroid has no nonzero nilpotent elements (equivalently, no nonzero $\varepsilon$ with $\varepsilon^2 = 0$).

Problem 2*. Establish the Hermitian Generation Theorem: in the Herstein-Kleinfeld-Osborn Theorem, every $*$-algebra $D$ of Type I or II is hermitian-generated, and an algebra $D$ of Type III is hermitian-generated iff $D$ has dimension 1. (I) Show that $D = \mathcal{E}x(\Delta)$ for an associative division algebra $\Delta$ is hermitian-generated iff the involution is non-central iff $\Delta$ is noncommutative. (1) Show the center of any $D = \mathcal{E}x(B)$ is $\Omega \boxplus \Omega$ for $\Omega$ the center of $B$, and the exchange involution is central, i.e. $H = \{(b,b) \mid b \in B\}$ lies in the center, iff $B = \Omega$ is commutative. Conclude the involution is noncentral iff $B$ is noncommutative. (2) If $B$ is commutative show $H = \{(b,b)\}$ does not generate all of $D$, only itself. (3) Show that if $B = \Delta$ is noncommutative then $D = \mathcal{E}x(\Delta)$ is hermitian-generated. (4) Conclude $D = \mathcal{E}x(\Delta)$ is hermitian-generated iff $D$ is noncommutative. (II) Show that if $D = \Delta$ is an associative division algebra with involution, then $D$ is hermitian-generated iff the involution is either trivial (so $\Delta$ is commutative) or non-central (so $\Delta$ is noncommutative). (III) Show that a simple $D$ with central involution is hermitian-generated iff $D = \Omega$ has trivial involution (hence is a composition algebra of dimension 1).
Problem 3*. Show that unital alternative algebras have a well-defined notion of inverse. (1) If the element $x$ has $y$ with $xy = yx = 1$, show that $y := x^{-1}$ is unique, and that $L_{x^{-1}} = (L_x)^{-1}$, $R_{x^{-1}} = (R_x)^{-1}$. (2) Conclude $bD \subseteq B \implies D \subseteq b^{-1}B \subseteq B$ if $b^{-1}$ belongs to the subalgebra $B$. (3) Show $x$ has an inverse $\iff 1 \in \mathrm{Range}(L_x) \cap \mathrm{Range}(R_x) \iff 1 \in \mathrm{Range}(U_x) \iff L_x, R_x$ are invertible $\iff U_x$ is invertible, in which case $U_{x^{-1}} = (U_x)^{-1}$. (4) Show if $x, y$ are both invertible then $xy$ is invertible; show the converse is not true, even in associative algebras (even if $xy = 1$!!). (5) Show that if $xy$ is invertible then $x$ has a right inverse and $y$ has a left inverse; show that if $xy, zx$ are invertible then $x$ is invertible.
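The cautionary converse in Problem 3(4) has a classic witness: in the associative algebra of linear operators on infinite sequences, the shift operators satisfy $LR = 1$ although neither $L$ nor $R$ is invertible. A minimal sketch (modeling sequences as finite lists):

```python
# L "discard the first entry" and R "prepend a zero" are one-sided
# inverses of each other: L∘R = id, but R∘L kills the first coordinate.
R = lambda seq: [0] + seq          # injective, not surjective
L = lambda seq: seq[1:]            # surjective, not injective

seq = [3, 1, 4, 1, 5]
assert L(R(seq)) == seq            # LR = 1
assert R(L(seq)) != seq            # RL ≠ 1
print("xy = 1 without x, y invertible")
```

This cannot happen in finite dimensions, where a one-sided inverse is automatically two-sided — which is why the phenomenon matters for the infinite-dimensional algebras lurking behind these problems.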
Problem 4*. Our proof of the Herstein-Kleinfeld-Osborn Theorem 21.5 is "slick", but a more leisurely proof may put it in a broader perspective. Show 21.5(3) in Step 3 works for $*$-simple $D$ (even for $*$-semiprime $D$). (1) If $D$ is alternative with central involution and norm $Q$ ($Q(x)1 = x\bar{x}$), show that for $z \in \mathrm{Rad}(Q)$, $x, y \in D$ we have $z^2 = 0$, $\bar{z} = -z$, $zx = \bar{x}z$, $zxz = 0$. (2) For $z \in \mathrm{Rad}(Q)$ show $Z := zD = Dz$ is a trivial $*$-ideal, $ZZ = 0$. (3) Conclude that if $D$ has no trivial $*$-ideals then $\mathrm{Rad}(Q) = 0$, and if $D$ is $*$-simple then $Q$ is nondegenerate over a field.
Problem 5*. Enlarge on 21.5(4) in Step 4. Let $A$ be any linear algebra. (1) Use the Teichmuller Identity $[xy,z,w] - [x,yz,w] + [x,y,zw] = x[y,z,w] + [x,y,z]w$ to
show that $[A,A,A]\widehat{A} = \widehat{A}[A,A,A] =: \mathcal{A}(A)$ forms an ideal (the associator ideal, generated by all associators $[x,y,z]$). (2) Show that if $A$ is simple then either $\mathcal{A}(A) = A$ is "totally nonassociative", or $\mathcal{A}(A) = 0$ and $A$ is associative. (3) Show that if $A$ is totally nonassociative simple then no nuclear element $n \ne 0$ can kill all associators: $n[A,A,A] = [A,A,A]n = 0$ forces $n = 0$. (4) Conclude that if $A$ is simple alternative but not associative then $[N,N] = 0$. (5) If $A$ is simple alternative with nuclear involution ($H \subseteq N$), show either (i) $A$ is associative, (ii) the involution is central ($[H,D] = 0$), or (iii) $[H,S] \ne 0$. Conclude that a simple alternative algebra with nuclear involution either has central involution or is associative.

Problem 6A*. In 21.1(5), let $D$ be a unital associative algebra. (1) Prove that you can't be invertible if you kill someone (someone nonzero, i.e. who is not already dead, as pointed out by Hercule Poirot in Murder on the Orient Express). Even more, if $xy = 0$ then not only $x$, but also $yx$, can't be invertible. (2) Prove again that if an element has a right and a left inverse, these coincide and are the unique two-sided inverse. Conclude that the set $Z$ of non-invertible elements is the union $Z = Z_r \cup Z_\ell$ for $Z_r$ the elements with no right inverse and $Z_\ell$ the elements with no left inverse. (3) Show that if an element has a right multiple which is right invertible, then the element itself is right invertible (and dually for left). Conclude that $Z_rD \subseteq Z_r$, $DZ_\ell \subseteq Z_\ell$. (4) Conclude that if an element has an invertible right multiple and an invertible left multiple, then it is itself invertible. (5) When $H = H(D,*)$ is a division algebra, show $z \in Z \implies \bar{z}z = z\bar{z} = 0$. Use this to show $\bar{z}Hz = 0$. (6) Show that in any prime algebra (one with no orthogonal ideals), $xDy = 0 \implies x = 0$ or $y = 0$.

Problem 7B. In 21.5(5), reveal the Peirce decomposition lurking behind Step 5. Assume the set of non-invertible elements is $Z \ne 0$.
(1) Show $Z = \{z \mid \bar{z}z = 0\} = \{z \mid z\bar{z} = 0\}$ has $0 < Z < D$. (2) Show $Z$ is invariant under multiplications from $D$ (in particular from $\Phi$), but $Z$ cannot by simplicity be an ideal, so it must fail the only other ideal criterion: $Z + Z \not\subseteq Z$. Conclude there exist non-invertible $z_i \in Z$ with $z_1 + z_2 = u$ invertible. (3) Show $e_i = z_iu^{-1}$ are conjugate supplementary orthogonal idempotents. (4) Show the off-diagonal Peirce spaces $D_{ij} = e_iDe_j$ ($i \ne j$) are skew, and $H(D) = \{a_{ii} + \bar{a}_{ii} \mid a_{ii} \in D_{ii}\}$ commutes with $I = D_{12}D_{21} + D_{12} + D_{21} + D_{21}D_{12}$. (5) Show $I < D$ is the ideal generated by the off-diagonal Peirce spaces, so $I = 0$ by simplicity; but then $D = D_{11} \oplus D_{22}$ contradicts simplicity too, a contradiction.

Problem 8. (1) By "universal nonsense", in any linear algebra an automorphism takes nucleus into nucleus and center into center. Show directly from the definition that the same is true of any derivation: $\delta(\mathrm{Nuc}(A)) \subseteq \mathrm{Nuc}(A)$ and $\delta(\mathrm{Cent}(A)) \subseteq \mathrm{Cent}(A)$. (2) Show the same using "infinitesimal nonsense": show that in the algebra of dual numbers $A[\varepsilon]$ ($\varepsilon^2 = 0$) we have $\mathrm{Nuc}(A[\varepsilon]) = \mathrm{Nuc}(A)[\varepsilon]$ and $\varphi = 1_A + \varepsilon\delta$ is an automorphism, and use these to show again that $\delta$ preserves the nucleus and center.
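Problem 8(2)'s "infinitesimal nonsense" can be sketched concretely: for $A = M_2(\mathbb{Z})$ with an inner derivation $\delta = \mathrm{ad}(a)$, the map $\varphi = 1_A + \varepsilon\delta$ really is multiplicative on dual numbers, precisely because $\delta$ satisfies the Leibniz rule (the choice of $A$ and $a$ below is an illustrative assumption, not part of the text):

```python
import numpy as np

a = np.array([[1, 2], [3, 4]])
delta = lambda x: a @ x - x @ a                 # inner derivation ad(a)

# A dual number (x, x') stands for x + εx'; multiplication drops ε² terms.
def dmul(p, q):
    return (p[0] @ q[0], p[0] @ q[1] + p[1] @ q[0])

phi = lambda p: (p[0], p[1] + delta(p[0]))      # φ = 1_A + εδ

rng = np.random.default_rng(2)
m = lambda: rng.integers(-5, 6, (2, 2))
for _ in range(100):
    p, q = (m(), m()), (m(), m())
    l, r = phi(dmul(p, q)), dmul(phi(p), phi(q))
    assert (l[0] == r[0]).all() and (l[1] == r[1]).all()  # φ(pq) = φ(p)φ(q)
print("1 + εδ is an automorphism of A[ε]")
```

Unwinding the assertion shows why: matching $\varepsilon$-coefficients in $\varphi(pq) = \varphi(p)\varphi(q)$ is exactly the Leibniz rule $\delta(xy) = \delta(x)y + x\delta(y)$.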
Problem 9. (1) Show that for any invertible elements $u, v$ in a unital alternative algebra the product $x \cdot_{u,v} y = (xu)(vy)$ gives a new alternative algebra $A^{(u,v)}$, the elemental isotope, with unit $v^{-1}u^{-1}$. (2) Show that in a composition algebra with norm $Q$, the isotope determined by invertible elements $u, v$ is again a composition algebra with norm $Q^{(u,v)} = Q(u)Q(v)Q$. (3) If $A$ is associative, show $A^{(u,v)} = A^{(uv)}$ is the usual associative isotope $A^{(w)}$: $x \cdot_w y = xwy$.

Problem 10. In coordinatizing projective planes, an important role is played by isotopy. An isotopy of linear algebras $A \to A'$ is a triple $(T_1,T_2,T_3)$ of invertible linear transformations $A \to A'$ with $T_1(x \cdot y) = T_2(x) \cdot' T_3(y)$ for all $x, y \in A$. (1) Show that the composition of two isotopies $A \to A' \to A''$ is again an isotopy, as is the inverse of any isotopy, and the identity isotopy $(1_A,1_A,1_A)$ is always an autotopy (= self-isotopy) of $A$. (2) Show that if $A$ is unital then necessarily $T_2 = R_{T_3(1)}^{-1}T_1$, $T_3 = L_{T_2(1)}^{-1}T_1$, so the isotopy takes the form $T_1(xy) = R_{T_3(1)}^{-1}(T_1(x)) \cdot' L_{T_2(1)}^{-1}(T_1(y))$. (3) In a unital alternative algebra (satisfying the left and right inverse properties $M_{x^{-1}} = M_x^{-1}$ for $M = L, R$), show that isotopies are precisely the maps $T$ satisfying $T(xy) = (T(x)u)(vT(y))$ for some invertible $u, v$ and all $x, y$. (4) Show that the isotopies $(T_1,T_2,T_3)$ of unital alternative algebras are in 1-1 correspondence with the isomorphisms $T: A \to (A')^{(u',v')}$ onto elemental isotopes. (5) Use the Moufang identities to show that for an invertible element $u$ of an alternative algebra the triples $(U_u,L_u,R_u)$, $(L_u,U_u,L_u^{-1})$, $(R_u,R_u^{-1},U_u)$ are autotopies. (6) Establish the Principle of Triality: if $(T_1,T_2,T_3)$ is an autotopy of a unital alternative algebra $A$, then so are $(T_2,R_{u_2}^{-1}T_2,U_{u_2}T_3)$ and $(T_3,U_{u_3}T_2,L_{u_3}^{-1}T_3)$ for $u_2 := T_3(1)^{-1}$, $u_3 := T_2(1)^{-1}$.

Question 1. Does the Fundamental Formula $U_{U_xy} = U_xU_yU_x$ hold for the operator $U_x := L_xR_x$ in all alternative algebras?
In all left alternative algebras?

Question 2. (1) Does an alternative algebra $A$ become a Jordan algebra $A^+$ via $x \bullet y := \frac{1}{2}(xy + yx)$? Does this hold for left or right alternative algebras? (2) Does a unital alternative algebra become a Jordan algebra $A^+$ via $U_xy := x(yx)$? Does this work for left or right alternative algebras? Are the resulting algebras ever special?
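Problem 9(3)'s associative isotope is easy to sketch numerically: for associative $A$ and invertible $w$, the product $x \cdot_w y = xwy$ is again associative with unit $w^{-1}$ (the particular unipotent $w$ below is an arbitrary choice with an integer inverse):

```python
import numpy as np

w    = np.array([[1, 1], [0, 1]])
winv = np.array([[1, -1], [0, 1]])        # integer inverse of w
circ = lambda x, y: x @ w @ y             # isotope product x ∘ y = xwy

rng = np.random.default_rng(3)
for _ in range(100):
    x, y, z = (rng.integers(-5, 6, (2, 2)) for _ in range(3))
    assert (circ(circ(x, y), z) == circ(x, circ(y, z))).all()        # associative
    assert (circ(winv, x) == x).all() and (circ(x, winv) == x).all() # unit w^{-1}
print("associative isotope A^(w) verified")
```

In the alternative (non-associative) setting of Problem 9(1) the same idea requires the two-sided form $(xu)(vy)$, since $xwy$ is no longer unambiguous.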
CHAPTER 22
Osborn's Capacity 2 Theorem

The hardest case in the classification is the case of capacity two; once we get to capacity three or more we have a uniform answer, due to the Jacobson Coordinatization Theorem. For capacity two we have two coordinatization theorems, and the hard part will be proving that at least one of them is applicable.
1. Commutators

Zel'manov has shown us that commutators $[x,y]$, despite the fact that they don't exist, are crucial ingredients of Jordan theory. While they don't exist in a Jordan algebra itself, they lead an ethereal existence lurking at the fringe of the Jordan algebra, just waiting to manifest themselves. They do leave footprints: double commutators, squares of commutators, and $U$-operators of commutators do exist within the Jordan algebra. In associative algebras we have

$[[x,y],z] = (xy-yx)z - z(xy-yx) = (xyz + zyx) - (yxz + zxy)$
$[x,y]^2 = (xy-yx)^2 = (xy+yx)^2 - 2(xy \cdot yx + yx \cdot xy)$
$[x,y]z[x,y] = (xy-yx)z(xy-yx) = (xy+yx)z(xy+yx) - 2(xyzyx + yxzxy)$

which motivates the following.

Definition 22.1 (Commutators). In a Jordan algebra double commutators, squares of commutators, and $U$-operators of commutators are defined by

$[[x,y],z] := [V_x,V_y]z = D_{x,y}(z)$ (Double Commutator)
$[x,y]^2 := \{x,y\}^2 - 2(U_x(y^2) + U_y(x^2))$ (Commutator Square)
$U_{[x,y]}z := U_{\{x,y\}}z - 2\{U_x,U_y\}z$ (Commutator $U$).

Note that the usual inner derivation $D_{x,y}$ determined by $x$ and $y$ is in some sense $\mathrm{Ad}_{[x,y]}$:
By our calculation above, in special Jordan algebras these fictitious Jordan commutators reduce to products of actual associative commutators.¹
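The three "footprints" of Definition 22.1 can be checked by machine in a special Jordan algebra ($2\times 2$ integer matrices with Jordan structure inherited from the associative product): each Jordan-defined expression reduces to the honest associative commutator $[x,y] = xy - yx$.

```python
import numpy as np

V = lambda x, y: x @ y + y @ x            # V_x y = {x, y}
U = lambda x, z: x @ z @ x                # U_x z = xzx

rng = np.random.default_rng(4)
for _ in range(100):
    x, y, z = (rng.integers(-4, 5, (2, 2)) for _ in range(3))
    c = x @ y - y @ x                     # the fictitious [x, y], made flesh
    b = V(x, y)                           # the element {x, y}
    # Double Commutator: [V_x, V_y]z = [[x, y], z]
    assert (V(x, V(y, z)) - V(y, V(x, z)) == c @ z - z @ c).all()
    # Commutator Square: {x,y}² - 2(U_x y² + U_y x²) = [x, y]²
    assert (b @ b - 2*(U(x, y @ y) + U(y, x @ x)) == c @ c).all()
    # Commutator U: U_{x,y}-combination equals z ↦ [x, y] z [x, y]
    assert (b @ z @ b - 2*(U(x, U(y, z)) + U(y, U(x, z))) == c @ z @ c).all()
print("commutator footprints verified")
```

Of course a numerical check in one special algebra proves nothing about exceptional Jordan algebras — that is exactly the point of Thedy's $s$-identity in the footnote below.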
Exercise 22.1A (1) Show we have an alternate expression $[x,y]^2 = \{x,U_yx\} - U_xy^2 - U_yx^2$ for Commutator Square. (2) Show we have operator identities $W_{x,y} := U_{x,y}^2 - U_{x^2,y^2} = V_{x,y}V_{y,x} - V_{U_xy^2} = V_xU_yV_x - U_{U_xy,y}$ in any Jordan algebra; show in special algebras this operator acting on $z$ reduces to the "pentad" $\{x,y,z,x,y\}$. (3) Show we have the alternate expression $U_{[x,y]} = W_{x,y} - U_xU_y - U_yU_x$ for Commutator $U$. (4) Show in any Jordan algebra that $U_{[x,y]}1 = [x,y]^2$.
Exercise 22.1B* (1) If $e$ is an idempotent in a Jordan algebra and $a_2, b_2 \in J_2$, $x_1, y_1 \in J_1$, show we have $q_0([[a_2,b_2],x_1],y_1) = -q_0(x_1,[[a_2,b_2],y_1])$. (2) Conclude $q_0([[a_2,b_2],x_1]) = U_{x_1}[a_2,b_2]^2$. (3) Show that $q_2([[a_2,b_2],x_1],x_1) = [[a_2,b_2],q_2(x_1)]$ in two different ways: first use the $q$-Properties 9.5 twice and subtract; second, note that $D = D_{a_2,b_2}$ is a derivation with $D(e_0) = 0$, so apply it to $q_2(x_1)$. (4) Show $q_2([[a_2,b_2],x_1]) = U_{[a_2,b_2]}q_2(x_1)$.
Throughout the rest of the chapter we will be working with a connection involution $\bar{x}$ determined by a strong connecting element as in the Connection Involution Lemma 10.3. Our proof of Osborn's Theorem will frequently depend on whether the elements at issue are skew or symmetric, and it will be convenient to introduce some notation for these.

Definition 22.2 (Skewtraces). (1) Let $e_2$ be an idempotent in a Jordan algebra strongly connected to $e_0$ via $v_1 \in J_1$, with connection involution $x \mapsto \bar{x} := U_v(x)$. We define the trace and skew-trace (pronounced tur and skew-tur) of an element $x$ by $Tr(x) := x + \bar{x}$, $Sktr(x) := x - \bar{x}$. Because we have the luxury of a scalar $\frac{1}{2}$, our module decomposes into symmetric and skew submodules with respect to the involution: every symmetric element is a trace, every skew element is a skewtrace, and every element $x = \frac{1}{2}(x + \bar{x}) + \frac{1}{2}(x - \bar{x}) = \frac{1}{2}Tr(x) + \frac{1}{2}Sktr(x)$ is the sum of its symmetric and skew parts. (2) We will also save ourselves a lot of $V$'s and subscripts by using the notation of the Peirce specializations 9.1, $\sigma(a_i) = V_{a_i}|_{J_1}$, of $J_2 + J_0$ on $J_1$ for an idempotent $e$, and define a corresponding action of skewtraces:

$\sigma tr(a_i) := \sigma(a_i - \bar{a}_i) = V_{Sktr(a_i)}|_{J_1}.$

¹Commutators provide a clear example of an $s$-identity. I once blithely took it for granted while writing a paper that $U_{[x,y]}$, like any self-respecting $U$-operator, belonged to the structure semigroup: $U_{U_{[x,y]}z} = U_{[x,y]}U_zU_{[x,y]}$. Armin Thedy, coming from right alternative algebras, showed that this is NOT an identity for all Jordan algebras; indeed $T_{11}$: $U_{U_{[x,y]}z}w = U_{[x,y]}U_zU_{[x,y]}w$ is the most natural $s$-identity separating special from exceptional Jordan algebras. I quickly emended my paper, and no trace of my youthful indiscretion appears in the public record.
Notice that the Spin Peirce relation of the reduced spin factors is equivalent to the Spin Bar relation 11.8, which is just the skewtrace condition $\sigma tr(J_2)J_1 = 0$.

Lemma 22.3 (Peirce Commutator). If $e_2$ is an idempotent in a Jordan algebra strongly connected to $e_0$ via $v_1 \in J_1$, with connection involution $x \mapsto \bar{x}$, then

$\sigma tr(a_2)(v_1) = 0, \qquad \sigma tr(a_2)(\sigma(b_2)(v_1)) = [[a_2,b_2],v_1].$
proof. The first holds since $\sigma(a_2)(v) = \sigma(\bar{a}_2)(v)$ by the Connection Fixed Points 10.3(3). Then the second results by taking $M = \sigma(b_2)$ in the relation $\sigma tr(a_2)\sigma(b_2)(v) = [\sigma tr(a_2),\sigma(b_2)](v)$ [by the first part] $= [\sigma(a_2),\sigma(b_2)](v)$ [since $\sigma(\bar{a}_2) \in \sigma(J_0)$ commutes with $\sigma(b_2)$ by Peirce Associativity 9.3] $= [[a_2,b_2],v]$ [by Peirce Specialization Rules 9.1].
One suspects that commutators will help future generations understand the classical structure theory in a new light.

Exercise 22.3 (1) Show that the Spin Peirce Relation 11.2 can be formulated as $[[e,y],[e,x]^2] = 0$ for all $x, y \in J_1(e)$. (2) Show that $U_{[[a_2,b_2],x_1]} = U_{[a_2,b_2]}U_{x_1}$ on $J_0$, and $q_2([[a_2,b_2],x_1]) = U_{[a_2,b_2]}(q_2(x_1))$. (3) An alternate proof of (2) uses the fact (easily seen from Macdonald's Principle) that $D = D_{a_2,b_2}$ is a derivation, so $D(q_2(x_1)) = D(U_{x_1}e_0) = (U_{D(x_1),x_1} + U_{x_1}D)e_0 = q_2(D(x_1),x_1)$ since $D(J_1) \subseteq J_1$ and $D(J_0) = 0$. (4) If $u$ is an involution in $J$, show the involution $U = U_u$ on $J$ has the form $U(x) = x - \frac{1}{2}[[x,u],u]$, so that in some sense $U_u = I - \frac{1}{2}\mathrm{Ad}(u)^2$. (5) If $e_2$ is strongly connected to $e_0$ by $v_1$ with connection involution $\bar{\ }$, mimic the proof in 22.3 to show $V_{Sktr(a_2)}([[b_2,c_2],v_1]) = \{[a_2,[b_2,c_2]],v_1\}$.
2. Capacity Two

Recall that an algebra has capacity 2 if its unit is the sum of 2 supplementary division idempotents (i.e. the diagonal Peirce spaces $J_i$ are division algebras), and is nondegenerate if it has no trivial elements $U_z = 0$.

Theorem 22.4 (Osborn's Capacity Two). A Jordan algebra is simple nondegenerate of capacity 2 iff it is isomorphic to one of
(I) (Matrix 2 Type) $H_2(D) = M_2(\Delta)^+$ for $D = \mathcal{E}x(\Delta)$, a noncommutative associative division algebra $\Delta$; (II) (Hermitian 2 Type) $H_2(\Delta)$ for an associative division algebra $\Delta$ with non-central involution; (III) (Reduced Spin Type) $RedSpin(q)$ for a nondegenerate quadratic form $q$ over a field $\Omega$.

proof. This will be another long proof, divided into a series of short steps, and the readers should once more buckle their seat belts.
Step 1: Reduction to the strong case. By the Creating Involutions Proposition 10.5 some diagonal isotope $\widetilde{J} = J^{(u)}$ has strong capacity 2 (the diagonal Peirce spaces $\widetilde{J}_i = J_i^{(u_i)}$ are still division algebras), and $J = \widetilde{J}^{(\tilde{u})}$ is in turn a diagonal isotope of $\widetilde{J}$. If we can prove $\widetilde{J}$ is of Type I-III (with $\Gamma = 1$ in Type II) then so is $J$: any diagonal isotope of $RedSpin(q)$ is reduced spin by the Quadratic Factor Isotopes Example 7.4(2), any isotope $(A^+)^{(u)} = (A^{(u)})^+ \cong A^+$ of a full $M_2(\Delta)^+ = A^+$ is isomorphic to $M_2(\Delta)^+$, and any diagonal isotope of $H_2(\Delta)$ is $H_2(\Delta,\Gamma)$ by the Twisted Matrix Example 7.8(3). FROM NOW ON WE ASSUME $J$ HAS STRONG CAPACITY 2: let $1 = e_2 + e_0$ for $J_i$ division idempotents strongly connected by $v = v_1 \in J_1$, let $\bar{x} = U_v(x)$ be the connection involution, and let $\mathcal{D} \subseteq \mathrm{End}(J_1)$ be the subalgebra generated by $\sigma(J_2)$. We claim the following 5 properties hold:

(1) $J_1 = H_1 \oplus S_1 = Tr(J_1) \oplus Sktr(J_1)$ for $H_1 := \{x \in J_1 \mid \bar{x} = x\}$, $S_1 := \{x \in J_1 \mid \bar{x} = -x\}$;
(2) $x + \bar{x} = \sigma(t_i(x))v$ ($x \in J_1$, $t_i(x) := q_i(x,v)$, $i = 2,0$);
(3) $H_1 = \sigma(J_2)v$: $\overline{\sigma(a_2)v} = \sigma(\bar{a}_2)v = \sigma(a_2)v$;
(4) $S_1 = \{x \in J_1 \mid t_2(x) = 0\} = \{x \in J_1 \mid t_0(x) = 0\}$;
(5) $H_1$ is a "division triple": all $h_1 \ne 0$ are invertible in $J$.

Indeed, the decomposition (1) follows from the Skewtrace Definition 22.2; (2)-(3) follow from the Connection Involution Lemma 10.3(2)-(3); (4) follows since $S_1$ consists by (2) of elements with $\sigma(t_i(x)) = 0$, which is equivalent to $t_i(x) = 0$ by Peirce Injectivity 9.2(1) [by injectivity of $U_{v_1}$]; for (5), note that if $h = \sigma(a_2)v$ weren't invertible then by Off-Diagonal Non-Invertibility 20.4, $0 = q_2(h_1) = q_2(\{a_2,v\}) = U_{a_2}q_2(v)$ [by the $q$-Composition Laws 9.5(3)] $= a_2^2$, which forces $a_2$ to vanish in the
division algebra $J_2$, hence $h_1 = \sigma(a_2)v = 0$ too. Step 2: The case where diagonal skewtraces kill $J_1$. We come to the First Dichotomy, the first branching in our family tree, where the quadratic factors diverge from the main line. The decision whether the algebra is a quadratic factor or not hinges on whether diagonal skewtraces vanish identically on the off-diagonal space. We will show that the diagonal skewtraces kill $J_1$ precisely when $J$ is of Quadratic Type. The Spin Peirce Relation 11.2 for norms is equivalent to the Spin Bar Relation 11.8(1), which can be reformulated as $\{(a_2 - \bar{a}_2),J_1\} = 0$, or in terms of diagonal skewtraces $\sigma tr(J_2)J_1 = 0$. When this vanishes identically then by the Strong Spin Coordinatization Theorem 11.9, $J \cong RedSpin(q)$ over $\Omega = \sigma(J_2) \cong J_2^+$, which is a field since it is commutative and $J_2$ is a Jordan division algebra [$\sigma(a_2)$ commutes with $\sigma(b_2) = \sigma(\bar{b}_2) \in \sigma(J_0)$ by Peirce Associativity 9.3]. Here $RedSpin(q)$ is nondegenerate iff $q$ is a nondegenerate quadratic form by Factor Nondegeneracy 5.13(3), and we have Type III. FROM NOW ON WE ASSUME DIAGONAL SKEWTRACES DO NOT KILL $J_1$, so in the notation of Skewtrace Definition 22.2(2):

(6) $\sigma tr(J_2)J_1 \ne 0.$
Step 3: The case where diagonal skewtraces kill $S_1$ but not $J_1$. By (1) and assumption (6), $\sigma tr(J_2)H_1 + \sigma tr(J_2)S_1 = \sigma tr(J_2)J_1 \ne 0$, so one of the two pieces must fail to vanish. In principle there is a branching into the case where skewtraces kill $S_1$ but not $H_1$, and the case where they do not kill $S_1$. We will show that the first branch withers away and dies. We begin by showing that $\sigma tr(J_2)S_1 = 0$ leads to $\mathcal{D}$-invariance of $S_1$ and $q_0$-orthogonality of $S_1$, $\mathcal{D}v$:

(7) $\sigma tr(J_2)S_1 = 0 \implies \mathcal{D}(S_1) \subseteq S_1,\ q_0(S_1,\mathcal{D}v) = 0.$

Invariance follows since the generators $\sigma(a_2)$ of $\mathcal{D}$ leave $S_1$ invariant: $\overline{\sigma(a_2)s_1} = \sigma(\bar{a}_2)\bar{s}_1 = -\sigma(\bar{a}_2)s_1 = -\sigma(a_2)s_1$ by skewness of $s_1$ and the skewtrace assumption at the start of (7). Orthogonality follows since $q_0(S_1,\mathcal{D}v) = q_0(\mathcal{D}S_1,v)$ [by repeated use of the $U1q$ Rules 9.5(2)] $= t_0(\mathcal{D}S_1) \subseteq t_0(S_1)$ [by invariance] $= 0$ [by (4)]. Thus (7) holds.
From this we can show that if skewtraces kill S 1 , then on H1 they slink into the radical and kill H1 as well: (8)
tr(J2 )S 1 = 0 =) tr(J2 )H1 Rad(q0 ) = 0:
The argument now becomes multi-stepped. Firstly, any such value z1 = tr(a2 )h1 = (a2 a2 ) (b2 )v [by (3)] = [[a2 ; b2 ]; v ] 2 Dv [by Peirce Commutator 22.3] is skew since skewtraces are skew and h is symmetric, z1 = (a2 a2 )h1 = z1 . Secondly, z1 now lies in S 1 \ Dv , and so is q0 -orthogonal by (7) to both to S 1 and to H1 Dv [by (4)], hence orthogonal to all of J1 = S 1 + H1 [by (1)], and we have q0 -radicality q0 (z1 ; J1 ) = 0. But, thirdly, q0 is nondegenerate by q Nondegeneracy 9.6(2) since J is nondegenerate, so Rad(q0 ) = 0 and we have established (8). Fourthly, this shows tr(J2 )H1 = 0, and skewtraces kill H1 too. In summary, fth and nally, if skewtraces killed the skew part, they would be forced by (8) to kill the symmetric part as well, hence all of J1 = H1 + S 1 [by (1)], which is forbidden by (6) above. Thus we may ASSUME FROM NOW ON THAT DIAGONAL SKEWTRACES DO NOT KILL S 1 , (9)
$\mathrm{tr}(J_2)S_1 \neq 0.$
Step 4: The case where diagonal skewtraces do not kill $S_1$: The final branching in the family tree, the Second Dichotomy, is where the hermitian algebras of Types I, II diverge. We claim that if diagonal skewtraces fail to kill $S_1$, $\mathrm{tr}(J_2)S_1 \neq 0$, then $J$ is of hermitian type. The crux of the matter is that each off-diagonal skew element $s_1 \in S_1$ faces a delicate Zariski dichotomy: it must either live invertibly in $D(v)$ or be killed by all diagonal skewtraces:

(10) $\mathrm{tr}(J_2)s_1 \neq 0 \Longrightarrow s_1 \in D(v)$ is invertible in $J$.
Indeed, by hypothesis (9) some $h := \mathrm{tr}(a_2)s \neq 0$. Now this $h$ is symmetric since $s$ and $\mathrm{Sktr}(a_2)$ are both skew, so by (5) it is invertible. We can use this to show $s$ itself is invertible: if $s$ were NOT invertible, then for the diagonal $b = \mathrm{Sktr}(a_2) \in J_2 + J_0$ we would have $U_s(b), U_s(b^2), s^2 \in U_s(J_2 + J_0) = 0$ by the Off-Diagonal Non-Invertibility Criterion 20.4, so $h^2 = \{b, s\}^2 = \{b, U_s(b)\} + U_s(b^2) + U_b(s^2) = 0$, contradicting the invertibility of $h$.
2. CAPACITY TWO
Thus $s$ and $h$ are invertible in $J$. We will have to work much harder to show $s$ belongs to $D(v)$ as in (10). Since $s$ is invertible, we may set
$$d_2 := U_s^{-1}(a_2),\quad s_2 := q_2(s),\quad c_2 := a_2^2 - \{a_2, s_2, d_2\} + U_{d_2}(s_2^2) \in J_2,$$
$$d := \sigma(a_2) - \sigma(s_2)\sigma(d_2),\qquad d^* := \sigma(a_2) - \sigma(d_2)\sigma(s_2) \in D.$$
In these terms we claim

(11) $h = d(s), \qquad d^*d = \sigma(c_2).$
The first holds since
$$h = (\bar a_2 - a_2)s = \sigma(a_2)s - V_s(a_2) \quad \text{(by definition of } h)$$
$$= \sigma(a_2)(s) - V_sU_s(d_2) \quad \text{(by definition of } d_2) = \sigma(a_2)(s) - U_{s^2,s}(d_2) \quad \text{(by Commuting 5.7(FII))}$$
$$= \sigma(a_2)(s) - \{q_2(s), d_2, s\} \quad \text{(by Peirce Orthogonality 8.5)} = \big(\sigma(a_2) - \sigma(s_2)\sigma(d_2)\big)(s) \quad \text{(by the Peirce Specialization Rules 9.1)}$$
$$= d(s) \quad \text{(by definition of } d, s_2),$$
while for the second, as operators on $J_1$ we have
$$d^*d = \big(\sigma(a_2) - \sigma(d_2)\sigma(s_2)\big)\big(\sigma(a_2) - \sigma(s_2)\sigma(d_2)\big) = \sigma(a_2)^2 - \big(\sigma(a_2)\sigma(s_2)\sigma(d_2) + \sigma(d_2)\sigma(s_2)\sigma(a_2)\big) + \sigma(d_2)\sigma(s_2)\sigma(s_2)\sigma(d_2)$$
$$= \sigma\big(a_2^2 - \{a_2, s_2, d_2\} + U_{d_2}(s_2^2)\big) \quad \text{(by Peirce Specialization 9.1)} = \sigma(c_2) \quad \text{(by definition of } c_2).$$
By invertibility of $h$ and the Non-Invertibility Criterion 20.4 we have $0 \neq 2q_0(h) = q_0(d(s), d(s)) = q_0(d^*d(s), s)$ [by the $U$--$q$ Rules 9.5(2) twice] $= q_0(\sigma(c_2)s, s)$ [by (11)], so the element $c_2$ must be nonzero, therefore invertible in the division algebra $J_2$; hence from the formula $\sigma(c_2)(s) = d^*d(s) = d^*(h)$ [by (11)] we get an explicit expression $s = \sigma(c_2^{-1})d^*(h) \in D(v)$ since $h \in H_1 \subseteq D(v)$ (by (4)). Thus $s$ lies in $D(v)$, completing the argument for (10). Applying a Zariski-density argument to the Zariski dichotomy (10) shows all skew elements make a common decision: if each $s_1$ must either submit to being killed by all skewtraces or else live in $D(v)$, then either all of $S_1$ must submit to being killed by skewtraces or else all must live together in $D(v)$. Since by hypothesis they aren't all killed by
skewtraces, they must all live in $D(v)$:

(12) $S_1 \subseteq D(v).$
Indeed, by (9) there exists at least one resistant $s \in S_1$ which refuses to be killed by skewtraces, and he drags all submissive $t$'s into resistant $s + t$'s [$\mathrm{tr}(J_2)t = 0 \Longrightarrow \mathrm{tr}(J_2)(s+t) \neq 0$]; but all resisters live in $D(v)$ by (10), so even the submissive $t$'s must live there too: $s, s+t \in D(v) \Longrightarrow t = (s+t) - s \in D(v)$; therefore $S_1 \subseteq D(v)$. By (12) all of $S_1$ moves to $D(v)$ to join $H_1$ (which already lives there by (4)), so we have all of $J_1 \subseteq D(v)$. Once the Hermitian Peirce Condition $J_1 = D(v)$ holds, the Strong $2 \times 2$ Hermitian Coordinatization Theorem 12.6 shows $J = H_2(D)$ with $v = 1[12]$, $\overline{\delta[11]} = \delta[22]$, and $D$ hermitian-generated. In view of the Hermitian Matrix formulas 3.7, condition (6) is equivalent to the involution not being central: $\mathrm{tr}(J_2)J_1 \neq 0$ in $J$ iff there is some nonzero $\mathrm{tr}(\delta[11])d[12] \neq 0$ in $H_2(D)$ iff $0 \neq \{\delta[11], d[12]\} - \{d[12], \delta[22]\} = [\delta, d][12]$ iff some $\delta \in H$ is not central (recall again that $\delta$ is automatically nuclear by hypothesis), i.e. iff the involution is not central. Also, $H_2(D) = J$ nondegenerate implies $D$ nondegenerate by Jordan Matrix Nondegeneracy 5.12, and $H(D) = J_2$ a Jordan division algebra by capacity 2 implies the hermitian elements $H(D)$ are invertible in $D$. By the Herstein-Kleinfeld-Osborn Theorem 21.5, either $D = Ex(\Delta)$ of Type I, or $D = \Delta$ an associative division algebra with involution of Type II, or else $D$ is a composition algebra over its center of Type III, which is excluded because its standard involution is central, violating (6). Step 5: The converse. We have shown the hard part, that every simple $J$ has one of the given types. We now check conversely that these types are always simple and nondegenerate. Since they all have connected capacity 2, by Simple Capacity 20.7 we know they will be simple as soon as they are nondegenerate.
But all types are nondegenerate: $\mathrm{RedSpin}(q)$ is nondegenerate iff $q$ is, by Factor Nondegeneracy 5.13(3); $H_2(D, \Gamma) = H_2(D)^{(\Gamma)}$ is nondegenerate since isotopes inherit nondegeneracy (by Jordan Isotope 7.3(3)) and $H_2(D)$ is nondegenerate by Jordan Matrix Nondegeneracy 5.12. Thus we have the simples, the whole simples, and nothing but the simples.
Exercise 22.4(3). Assume $e_2$ is strongly connected to $e_0$ by $v_1$. (1) Show $V_{[[a_2,b_2],c_2]} = [[V_{\mathrm{Sktr}(a_2)}, V_{b_2}], V_{\mathrm{Sktr}(c_2)}] \in D \cdot V_{\mathrm{Sktr}(J_2)} \cdot D$ on $J_1$. (2) If $\mathrm{tr}(J_2)S_1 = 0$ (as in Step 3 of the above proof), show $[[J_2, J_2], J_1] \subseteq S_1$ and $[[J_2, J_2], J_2] = 0$.
CHAPTER 23
Classical Classification
The denouement, the final classification theorem, comes as an anticlimax, since all the hard work has already been done. If we wish, we may treat this section as an easy victory lap in celebration of our achievement.

Theorem 23.1 (Jacobson's Capacity $n \geq 3$). A Jordan algebra is simple nondegenerate of capacity $n \geq 3$ iff it is a Jordan matrix algebra isomorphic to one of:
(I) $H_n(D) \cong M_n(\Delta)^+$ where $D = Ex(\Delta)$ for a noncommutative associative division algebra $\Delta$;
(II) $H_n(\Delta, \Gamma)$ for an associative division algebra $\Delta$ with non-central involution;
(III) $H_n(C, \Gamma)$ for a composition algebra $C$ of dimension 1, 2, 4, or 8 over its center with standard central involution [dimension 8 only for $n = 3$].
In particular, it is automatically special unless it is a reduced 27-dimensional Albert algebra. We can list the possibilities in another way: the algebra is either
(I$'$) $H_n(D) \cong M_n(\Delta)^+$ where $D = Ex(\Delta)$ for an associative division algebra $\Delta$;
(II$'$) $H_n(\Delta, \Gamma)$ for an associative division $*$-algebra $\Delta$;
(III$'$) $H_{2n}(\Omega, \Gamma_{sp})$ for the symplectic involution $X^{sp} = SX^{tr}S^{-1}$ over a field $\Omega$ ($S$ the symplectic $2n \times 2n$ matrix $\mathrm{diag}\{s, s, \ldots, s\}$ for $s = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$);
(IV$'$) $H_3(O, \Gamma)$ [$n = 3$ only] for an octonion algebra $O$ of dimension 8 over its center $\Omega$.

proof. If $J$ is simple nondegenerate of capacity $n$ then by the Simple Capacity Theorem 20.7 it has $n$ connected idempotents with diagonal Peirce spaces $J_{ii}$ division algebras; by Jacobson's Coordinatization Theorem 17.2, when $n \geq 3$, $J$ is isomorphic to $H_n(D, \Gamma)$ for $D$ alternative with nuclear involution, and $J_{ii}$ is isomorphic to $H(D)$, so the latter is a division algebra, and (as we noted before Osborn's Theorem) $D$ is nondegenerate.
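To make the symplectic type (III$'$) concrete before the proof explains it, here is a small worked case of our own (a sketch; the $2n \times 2n$ bookkeeping below is the general pattern):

```latex
% For n = 1: the symplectic involution on M_2(\Omega) is the classical adjoint.
\[
x = \begin{pmatrix} a & b \\ c & d \end{pmatrix}, \qquad
x^{sp} = s\,x^{tr}s^{-1}
      = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}
        \begin{pmatrix} a & c \\ b & d \end{pmatrix}
        \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}
      = \begin{pmatrix} d & -b \\ -c & a \end{pmatrix},
\]
\[
\text{so } x + x^{sp} = \mathrm{tr}(x)1, \quad\text{and (since } \tfrac12 \in \Omega\text{)}\quad
x^{sp} = x \iff b = c = 0,\; a = d,
\]
% i.e. the hermitian elements of a single symplectic block are just \Omega 1,
% a (commutative) division algebra, as the capacity hypothesis requires.
```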
By the Herstein-Kleinfeld-Osborn Theorem 21.5, $D$ is one of the 3 types I--III (or 4 types I$'$--IV$'$). The only case that needs some explaining is (III$'$). Here $D = M_2(\Omega)$ under $\bar x = sx^{tr}s^{-1}$: $M_n(M_2(\Omega)) = M_{2n}(\Omega)$ by regarding an $n \times n$ matrix $X = (x_{ij})$ of $2 \times 2$ matrices $x_{ij} \in M_2(\Omega)$ as a $2n \times 2n$ matrix decomposed into $2 \times 2$ blocks $x_{ij}$. The involution $(\bar X)^{tr}$ of $M_n(M_2(\Omega))$ yields a matrix whose $ij$-block is $\bar x_{ji} = sx_{ji}^{tr}s^{-1}$. Any time $Y \in M_{2n}(\Omega)$ is decomposed into $2 \times 2$ blocks $Y = (y_{ij})$, then $SYS^{-1} = (sI_n)Y(sI_n)^{-1}$ has as its $ij$-block $sy_{ij}s^{-1}$, and $Y^{tr}$ has as its $ij$-block $y_{ji}^{tr}$, so $SX^{tr}S^{-1}$ has as its $ij$-block $sx_{ji}^{tr}s^{-1}$, showing $X \mapsto SX^{tr}S^{-1}$ corresponds to the symplectic involution. We have thus shown that every simple $J$ has one of the given types, and it remains to check conversely that these types are always simple and nondegenerate. Just as in Osborn's Capacity 2 Theorem 22.4, since they all have connected capacity $n$ it suffices by Simple Capacity 20.7 to verify nondegeneracy, which follows since $H(D)$ is a division algebra and certainly semiprime, and each algebra $D = \Delta \oplus \Delta^{op}$ or $\Delta$ or $C$ is semiprime, so $H_n(D)$ is nondegenerate by Jordan Matrix Nondegeneracy 5.12; then so is its isotope $H_n(D, \Gamma)$ [by Twisted Matrix Isotopy 7.8(3) and Jordan Isotopy 7.3(3)]. Once more we have captured precisely the simple algebras. We can sum up the cases of Capacity 1 (20.2), Capacity 2 (22.4), and Capacity $n \geq 3$ (23.1) to construct our final edifice.

Theorem 23.2 (Classical Structure). A Jordan algebra is nondegenerate with finite capacity iff it is a direct sum of a finite number of simple ideals of the following Division, Matrix, or Quadratic Types of capacity $n$:
(D) a Jordan division algebra ($n = 1$);
(M1) $M_n(\Delta)^+$ ($n \geq 2$) for a noncommutative associative division algebra $\Delta$;
(M2) $H_n(\Delta, \Gamma)$ ($n \geq 2$) for an associative division algebra $\Delta$ with non-central involution;
(M3) $H_n(C, \Gamma)$ ($n \geq 3$) for a composition algebra $C$ of dimension 1, 2, 4, or 8 over its center (dimension 8 only for $n = 3$);
(Q) $\mathrm{RedSpin}(q)$ ($n = 2$) for a nondegenerate quadratic form $q$ over a field $\Omega$.
Alternately, we can describe them as Division, Hermitian, Quadratic, or Albert Types:

(D) a Jordan division algebra ($n = 1$);
(H1) (Exchange Type) $M_n(\Delta)^+$ ($n \geq 2$) for an associative division algebra $\Delta$;
(H2) (Orthogonal Type) $H_n(\Delta, \Gamma)$ ($n \geq 2$) for an associative division algebra $\Delta$ with involution;
(H3) (Symplectic Type) $H_{2n}(\Omega, \Gamma_{sp})$ ($n \geq 3$) for the standard symplectic involution over a field $\Omega$;
(Q) $\mathrm{RedSpin}(q)$ ($n = 2$) for a nondegenerate quadratic form $q$ over a field $\Omega$;
(A) $H_3(O, \Gamma)$ ($n = 3$) for an octonion algebra $O$ over a field $\Omega$.
And thus we come to the end of a Theory, an Era, and Part III.
Part IV Zel'manov's Exceptional Theorem
In Part IV we will give a fairly self-contained treatment of Zel'manov's celebrated theorem that the only prime i-exceptional Jordan algebras are forms of Albert algebras, filling in the details sketched in Chapter 8 of the Historical Survey (Part II). Throughout this part we again work with linear Jordan algebras over a fixed (unital, commutative, associative) ring of scalars $\Phi$ containing $\frac{1}{2}$, but we conduct a quadratic argument as long as it is convenient and illuminating. We will make use of a few results from Part III on Peirce decompositions and invertibility, and of course will invoke Macdonald's Principle at the drop of a hat, but otherwise require very little from Part III. The actual classification of simple exceptional algebras shows in essence that they must have capacity, and then falls back on the classification in Part III to conclude they are Albert algebras. It does not provide an independent route to Albert algebras, but rather shows that finiteness is unavoidable in the exceptional setting.
First Phase:
Begetting Capacity
In this phase we describe general situations that lead to the algebras of finite capacity considered in Part III. The crucial result is that an algebra will have a capacity if all its elements have small spectra over large fields, and that a non-vanishing s-identity $f$ puts a bound on the $f$-spectra of elements (which is close to putting a bound on ordinary spectra). This means that i-exceptionality, the non-vanishing of some $f$, is an incipient finiteness condition.
CHAPTER 1
The Radical
We begin with some basic results about the Jacobson radical. Although our final structure theorem concerns nondegenerate algebras, our proof proceeds by classifying the more restrictive primitive algebras. At the very end we imbed a prime nondegenerate algebra in a semiprimitive algebra, wave a magic ultrafilter wand, and presto: all nondegenerate prime exceptional algebras are turned into Albert forms. The Jacobson radical $\mathrm{Rad}(A)$ of an associative algebra $A$ can be defined in 3 ways. Its origin and its primary importance come from its role as the obstacle to irreducible representation: the part of the algebra that refuses to act on irreducible modules (the intersection $\cap\,\mathrm{Ker}(\pi)$ of the kernels of all irreducible representations $\pi$). Second, it is the maximal quasi-invertible ideal $R$ (an ideal consisting entirely of quasi-invertible elements, elements $z$ such that $1 - z$ is formally invertible in the unital hull). Thirdly, and most useful in practice, it can be precisely described elementally as the set $PQI(A)$ of all elements which are properly quasi-invertible (where in general the adverb "properly" means "all multiples remain such"). In Jordan theory there are no irreducible representations or irreducible modules $M = A/B$ for maximal modular left ideals $B$, though there are ghosts of such modules (namely the maximal modular inner ideals we will meet in Chapter 5). Jordan algebras do have an exact analogue of the associative theory of inverses and quasi-inverses, and so we will define the radical to be the maximal quasi-invertible ideal. But we will spend the rest of this chapter making sure we again have the user-friendly elemental characterization in terms of properly quasi-invertible elements. This is especially important in Jordan algebras, where ideals are hard to generate.
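To fix ideas, a standard associative example of our own: the three descriptions of the radical visibly agree for upper triangular matrices.

```latex
\[
A = T_2(F) = \left\{\begin{pmatrix} a & b \\ 0 & d\end{pmatrix}\right\},
\qquad
\mathrm{Rad}(A) = \left\{ z = \begin{pmatrix} 0 & b \\ 0 & 0 \end{pmatrix} \right\}:
\]
\[
z^2 = 0 \;\Longrightarrow\; (1-z)^{-1} = 1+z,
\qquad (xz)^2 = 0 \text{ for all } x \in A,
\]
% so every element of the strictly upper triangular ideal is properly
% quasi-invertible, and the two irreducible representations z -> a, z -> d
% both kill it: all three descriptions pick out the same ideal.
```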
1. Invertibility
We begin by establishing the basic facts about quasi-invertibility. Since quasi-invertibility of $z$ means invertibility of $1 - z$, many results are mere translations of results about inverses, which we now "recall" (cf. III.6.1).
Theorem 1.1 (Basic Invertible Facts). (1) Existence: An element $u$ in a unital Jordan algebra $J$ is invertible if there exists an element $v$ with (Q1) $U_uv = u$, (Q2) $U_uv^2 = 1$; the element $v$ is called the inverse $u^{-1}$ of $u$. (2) Extension: if $u$ is invertible in $J$ then it remains invertible (with the same inverse) in any algebra $\tilde J \supseteq J$ having the same unit as $J$. (3) Criterion: The following are equivalent for an element $u$ of a unital Jordan algebra $J$: (i) $u$ is an invertible element of $J$; (ii) $U_u$ is an invertible operator on $J$; (iii) $U_u$ is surjective on $J$: $U_u(J) = J$; (iv) $1 \in U_u(J)$; (v) $U_u(J)$ contains some invertible element. (4) Consequences: If $u$ is invertible in $J$ with inverse $v$, then (i) $U_u, U_v$ are inverse operators; (ii) the inverse is uniquely determined as $v = U_u^{-1}u$; (iii) invertibility is symmetric: $v$ is invertible with inverse $u$; (iv) $\{u, v\} = 2$.

proof. We recall the argument of III.6. (1) is just the Inverse Definition III.6.1. (2) is clear since invertibility is strictly a relation between $u, v$, and the unit element 1. (3) is just the Invertibility Criterion III.6.2 (with the wrinkle (v) thrown in). (i) $\Rightarrow$ (ii): Applying the Fundamental Formula to the relations (Q1-2) shows $U_uU_vU_u = U_u$ and $U_uU_vU_vU_u = 1_J$; the latter shows the operator $U_u$ is surjective and injective, hence invertible, and then cancelling $U_u$ from the former gives $U_uU_v = U_vU_u = 1_J$, so $U_u$ and $U_v$ are inverses. [This also proves (4)(i).] Clearly (ii) $\Rightarrow$ (iii) $\Rightarrow$ (iv) $\Rightarrow$ (v). On the other hand, (v) $\Rightarrow$ (ii) since from the Fundamental Formula if $U_ua$ is invertible so is the operator $U_{U_ua} = U_uU_aU_u$ since (i) $\Rightarrow$ (ii), which again implies $U_u$ is invertible. (ii) $\Rightarrow$ (i) because $v = U_u^{-1}u$ satisfies $U_uv = u$ [by definition], $U_uv^2 = U_uU_v1 = U_uU_vU_uU_u^{-1}1 = U_{U_uv}U_u^{-1}1$ [by the Fundamental Formula] $= U_uU_u^{-1}1 = 1$. [This also establishes (4)(ii).] (4) We saw (i) and (ii) above. (iii) follows from (ii), and can also be checked directly: the conditions (Q1-2) are satisfied with the roles of $u$ and $v$ switched, since $U_vu = U_v(U_u(v)) = 1_J(v) = v$, $U_vu^2 = U_v(U_u(1)) = 1_J(1) = 1$. (iv) results by cancelling $U_u$ from $U_u(\{u,v\}) = U_uV_u(v) = V_uU_u(v)$ [by Commuting III.5.7(FII)] $= V_u(u) = 2u^2 = U_u(2)$.
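A hedged associative sketch of our own: in a special algebra $A^+$, where $U_uv = uvu$, the conditions (Q1)-(Q2) say exactly that $v$ is the ordinary inverse.

```latex
\[
(Q1)\;\; uvu = u, \qquad (Q2)\;\; uv^2u = 1.
\]
% (Q2) gives u a right inverse v^2u and a left inverse uv^2, so u is invertible;
% then multiplying (Q1) by u^{-1} on both sides recovers v:
\[
v = u^{-1}(uvu)u^{-1} = u^{-1}u\,u^{-1} = u^{-1}.
\]
```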
2. Quasi-Invertibility
Naturally we are going to derive the basic facts about quasi-inverses from facts about ordinary inverses, making use of the unital hull $\hat J = \Phi 1 \oplus J$. First we need to recall the results on structural transformations in III.18.5 about operators on $\hat J$ and their restrictions to $J$.

Definition 1.2 (Structural Transformation). (1) A structural transformation $T$ on a Jordan algebra $J$ is a linear transformation for which there exists a linear $T^*$ on $J$ such that both $T, T^*$ extend to $\hat J$ and satisfy $U_{T(\hat x)} = TU_{\hat x}T^*$ for all $\hat x \in \hat J$. If $T^*$ is also structural with $(T^*)^* = T$, we say $(T, T^*)$ is a structural pair on $J$. (2) We say a linear transformation $S$ on $\hat J$ is congruent to $\hat 1 = 1_{\hat J}$ mod $J$ if $(\hat 1 - S)(\hat J) \subseteq J$, i.e. $S(J) \subseteq J$, $S(1) = 1 - c$ for some $c \in J$; more explicitly $S(\alpha 1 \oplus x) = \alpha 1 \oplus (-\alpha c + S(x))$. Important examples of structural $T$ congruent to $\hat 1$ are the quasi-U-operators $B_{x,\hat 1} = 1_J - V_x + U_x = U_{\hat 1 - x}$ for $x \in J$, and more generally the Bergmann operators $B_{x,\hat y} := 1_J - V_{x,\hat y} + U_xU_{\hat y}$ (for $x \in J$, $\hat y \in \hat J$). We have mentioned several times that these Bergmann operators are structural, and it is now time to put our proof where our mouth is. This is one important identity that will not succumb to Macdonald's blandishments, so the fact that it is easy to verify in associative algebras won't save us from the calculations that follow.

Proposition 1.3 (Bergmann Structurality). The generalized Bergmann operators $B_{\lambda,x,y} := \lambda^2 1_J - \lambda V_{x,y} + U_xU_y$ ($x, y \in J$, $\lambda \in \Phi$) form a structural pair $(B_{\lambda,x,y}, B_{\lambda,y,x})$:
$$U_{B_{\lambda,x,y}(z)} = B_{\lambda,x,y}\,U_z\,B_{\lambda,y,x}.$$
proof. Expanding this out in powers of $\lambda$, for $B := B_{\lambda,x,y}$ we have
$$U_{B(z)} = \lambda^4 U_z - \lambda^3 U_{z,\{x,y,z\}} + \lambda^2\big(U_{\{x,y,z\}} + U_{U_xU_y(z),z}\big) - \lambda\,U_{U_xU_y(z),\{x,y,z\}} + U_{U_xU_y(z)},$$
while
$$BU_zB^* = \lambda^4 U_z - \lambda^3\big(V_{x,y}U_z + U_zV_{y,x}\big) + \lambda^2\big(U_xU_yU_z + U_zU_yU_x + V_{x,y}U_zV_{y,x}\big) - \lambda\big(U_xU_yU_zV_{y,x} + V_{x,y}U_zU_yU_x\big) + U_xU_yU_zU_yU_x.$$
To show these coincide for all $x, y, \lambda$ in all situations, we must prove the coefficients of like powers of $\lambda$ coincide. This is trivial
398
The Radical
for $\lambda^4$, is the Fundamental Lie Formula III.5.7(FV) for $\lambda^3$, is the Alternate Fundamental Formula III.5.7(FI)$'$ for $\lambda^2$, and is the Fundamental Formula for $\lambda^0$, but for $\lambda^1$ we have a totally unfamiliar bulky formula:
$$(1_\lambda)\qquad U_{U_xU_y(z),\{x,y,z\}} = U_xU_yU_zV_{y,x} + V_{x,y}U_zU_yU_x.$$
For this we need Commuting (FII), Triple Shift (FIIIe), Fundamental Lie (FV), and a linearized Fundamental Formula (FI) $U_{U_x(y),U_{x,z}(y)} = U_xU_yU_{x,z} + U_{x,z}U_yU_x$ from 5.7. We compute
$$U_{U_xU_y(z),\{x,y,z\}} = U_{U_x(U_y(z)),U_{x,z}(y)} = -U_{U_x(y),U_{x,z}(U_y(z))} + U_xU_{U_y(z),y}U_{x,z} + U_{x,z}U_{y,U_y(z)}U_x$$
[by linearized (FI)]
$$= -U_{U_x(y),U_{x,U_z(y)}(y)} + U_xU_yV_{z,y}U_{x,z} + U_{x,z}V_{y,z}U_yU_x$$
[by Triple Shift on the first term, and Commuting (FII) on the second and third terms]
$$= -\big(U_xU_yU_{x,U_z(y)} + U_{x,U_z(y)}U_yU_x\big) + U_xU_y\big(U_{x,U_z(y)} + U_zV_{y,x}\big) + \big(U_{x,U_z(y)} + V_{x,y}U_z\big)U_yU_x,$$
by applying linearized (FI) to the first term, applying $V_{z,y}U_{x,z} = U_{x,U_z(y)} + U_zV_{y,x}$ to the second term [since $\{z,y,\{x,a,z\}\} = U_z\{a,x,y\} + \{x,a,U_zy\}$ by Fundamental Lie (FV)], and applying $U_{x,z}V_{y,z} = U_{x,U_zy} + V_{x,y}U_z$ to the third term [since $\{x,\{y,z,b\},z\} = \{x,b,U_zy\} + \{x,y,U_zb\}$ by linearized Triple Shift (FIII)]. Thus four terms cancel, leaving
$$V_{x,y}U_zU_yU_x + U_xU_yU_zV_{y,x}$$
as claimed. This finishes $\lambda^1$, and hence the structurality of the Bergmann operators. The pain of the above proof can be lessened by an injection of Koecher's Principle (see Problems 6, 7).

Lemma 1.4 (Congruent to $\hat 1$). (1) If a structural transformation $T$ on a unital Jordan algebra $J$ is invertible, then its adjoint $T^*$ is invertible too. (2) If $T$ is structural and has the property that it is invertible as soon as it is surjective (for example, a $U$-operator), then we have contagious invertibility: if $T$ ever takes on an invertible value, then it and its adjoint are invertible operators:

$T(x)$ is invertible iff $T, T^*, x$ are invertible.
(3) If a structural $T$ is congruent to $\hat 1$ mod $J$ on a general $J$, then $T$ is injective (resp. surjective) on $\hat J$ iff it is injective (resp. surjective) on $J$.

proof. (1) If $T$ is invertible then $T(x) = 1$ for some $x$, so $1_J = U_{T(x)} = TU_xT^*$ implies $U_xT^* = T^{-1}$ is invertible, hence $U_x$ is surjective, hence invertible [by 1.1(3)(ii) $=$ (iii)], so $T^* = U_x^{-1}T^{-1}$ is invertible too. (2) A similar argument works whenever $T(x)$ is invertible, if we know that $T$, like $U_x$, becomes invertible once it is surjective: $U_{T(x)} = TU_xT^*$ invertible [by 1.1(3)(ii) and structurality (1.2)(1)] $\Longrightarrow$ $T$ is surjective $\Longrightarrow$ $T$ is invertible [by the HYPOTHESIS on $T$] $\Longrightarrow$ $U_xT^* = T^{-1}U_{T(x)}$ is invertible $\Longrightarrow$ $U_x$ surjective $\Longrightarrow$ $U_x$ invertible $\Longrightarrow$ $x$ invertible [by 1.1(3)(i)] $\Longrightarrow$ $T^* = U_x^{-1}T^{-1}U_{T(x)}$ invertible. (3) In $\hat J = \Phi 1 \oplus J$ the submodule $\Phi 1$ is a faithful copy of $\Phi$: $\alpha 1 = 0 \Longrightarrow \alpha = 0$. By (1.2)(2) and directness the kernel and cokernel of $S$ live in $J$: any killing takes place in $J$ since $S(\alpha 1 \oplus x) = \alpha 1 \oplus (-\alpha c + S(x)) = 0 \iff \alpha = 0, S(x) = 0$; similarly any non-imaging takes place in $J$ since $\alpha 1 \oplus x \notin S(\hat J) \iff \alpha 1 \oplus x - S(\alpha 1) \notin S(\hat J) \iff x + \alpha c \notin S(J)$.
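As remarked before Proposition 1.3, Bergmann structurality is easy to verify in associative algebras. Here is that one-line model, a sketch of our own for orientation only (it does not replace the Jordan calculation above):

```latex
% In an associative algebra, with {x,y,z} = xyz + zyx and U_x z = xzx:
\[
B_{x,y}(z) = z - (xyz + zyx) + xyzyx = (1-xy)\,z\,(1-yx),
\]
\[
U_{B_{x,y}(z)}(a) = (1-xy)z(1-yx)\;a\;(1-xy)z(1-yx)
  = B_{x,y}\big(z\,[(1-yx)\,a\,(1-xy)]\,z\big)
  = B_{x,y}U_zB_{y,x}(a).
\]
```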
Definition 1.5 (Radical). (1) An element $z$ of an arbitrary Jordan algebra $J$ is quasi-invertible (q.i. for short) if $1 - z$ is invertible in the unital hull $\hat J$; if $(1-z)^{-1} = 1 - w$ then $w$ is called the quasi-inverse of $z$, denoted $qi(z)$.¹ The set of quasi-invertible elements of $J$ is denoted by $QI(J)$. An algebra or ideal is called quasi-invertible (q.i.) if all its elements are q.i. (2) The Jacobson radical $\mathrm{Rad}(J)$ is the maximal q.i. ideal, i.e. the maximal ideal contained inside the set $QI(J)$.² An algebra is called

¹The notation $q(x)$ has already been used for the Peirce quadratic forms; to avoid confusion, we add the i to make $qi(\cdot)$ (pronounced "kyoo-eye") to make the reference to quasi-inverse clearer, but we save space by staying aperiodic and not going the whole way to write $q.i.(\cdot)$. Some authors use the negation of our definition, $(1-z)^{-1} = 1 + w$, so that the quasi-inverse is the geometric series $w = \sum_{i \geq 1} z^i$. We prefer to keep the symmetry in $z$ and $w$, so that $(1-w)^{-1} = 1 - z$ just as $(u^{-1})^{-1} = u$ for ordinary inverses.
²Defining is not creating: we have to show there is such a maximal ideal, which we will do in 1.10.
semiprimitive³ if it has no q.i. ideals, equivalently its Jacobson radical vanishes, $\mathrm{Rad}(J) = 0$. (3) An element is nilpotent if some power vanishes, $x^n = 0$; then all higher powers $x^m = x^n \cdot x^{m-n} = 0$ for $m \geq n$ also vanish. The nilpotent elements are the most important single source of q.i. elements:
$$z^n = 0 \Longrightarrow (1-z)^{-1} = 1 + z + z^2 + \cdots + z^{n-1},\qquad qi(z) = -(z + z^2 + \cdots + z^{n-1})$$
by the usual telescoping sum, since by power-associativity this all takes place in the special subalgebra $\Phi[z]$. (4) An algebra or ideal is called nil if all its elements are nilpotent. The nil radical $\mathrm{Nil}(J)$ is defined as the maximal nil ideal.⁴ Since nil implies q.i. by (3), we have $\mathrm{Nil}(J) \subseteq \mathrm{Rad}(J)$. It is always useful to think of $(1-z)^{-1}$ and the negative $-qi(z)$ as given by the geometric series
$$(1-z)^{-1} \sim \sum_{n=0}^{\infty} z^n, \qquad -qi(z) \sim \sum_{n=1}^{\infty} z^n.$$
This is strictly true whenever the infinite series converges respectably, such as for an element of norm $< 1$ in a Banach algebra (see Problem 1), or if $z$ is nilpotent so the series is finite as in (3). Amitsur's Tricks (see Problem 2 and Theorem 3.3) show the nil elements are the only die-hard q.i. elements, the only ones that remain q.i. over a big field or when multiplied by a scalar indeterminate. In some sense the nil radical is the "stable" form of the Jacobson radical: in Chapter 5 we will see that if we perturb the algebra enough, the radical will shrink down into the nil radical. But the Jacobson radical has proven to be the most useful radical, both structurally and practically. We will soon obtain a handy elemental characterization of the Jacobson radical, but there is no such characterization known for the nil radical: even in associative algebras, the Köthe Question (whether the nil radical consists precisely of the properly nilpotent elements) remains unsolved.

³Philosophically, an algebra is semi-blah iff it is a subdirect product of blah
algebras. Later in Chapter 5 we will see what a vanishing Jacobson radical has to do with primitivity. For the present, just think of semiprimitivity as some form of "semisimplicity".
⁴This is to be carefully distinguished from a nilpotent ideal, which means a power of the ideal (not just of its individual elements) vanishes. The nil radical is sometimes called the Köthe radical.
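A quick sanity check of the telescoping sum, with our own small case $z^3 = 0$:

```latex
\[
(1-z)(1+z+z^2) = 1 + z + z^2 - z - z^2 - z^3 = 1 - z^3 = 1,
\]
\[
\text{so } (1-z)^{-1} = 1 + z + z^2 = 1 - qi(z), \qquad qi(z) = -(z + z^2),
\]
% the whole computation taking place in the commutative, power-associative
% subalgebra generated by z.
```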
Theorem 1.6 (Basic Quasi-Invertible Facts). (1) Existence: $z$ is quasi-invertible in $J$ with quasi-inverse $w$ iff $U_{1-z}w = z^2 - z$ and $U_{1-z}w^2 = z^2$. (2) Extension: If $z$ is quasi-invertible in $J$ then it remains quasi-invertible (with the same quasi-inverse) in any extension algebra $\tilde J \supseteq J$. (3) Criterion: The following are equivalent for an element $z$ of a Jordan algebra $J$: (i) $z$ is a quasi-invertible element of $J$; (ii) the quasi-U-operator $U_{1-z}$ is an invertible operator on $J$; (iii) $U_{1-z}$ is surjective on $J$: $U_{1-z}(J) = J$; (iv) $2z - z^2 \in U_{1-z}(J)$; (v) $U_{1-z}(J)$ contains $2z - z^2 - u$ for some q.i. $u$. (4) Consequences: If $z$ is quasi-invertible in $J$ with quasi-inverse $w = qi(z)$, then (i) the quasi-U-operators $U_{1-z}, U_{1-w}$ are inverse operators; (ii) $w$ is uniquely determined as $w = U_{1-z}^{-1}(z^2 - z)$; (iii) $w$ is quasi-invertible with quasi-inverse $z = qi(w)$; (iv) $\{z, w\} = 2(z + w)$, $U_zw = w + z + z^2$, showing that the quasi-inverse $w$ necessarily lies in the original $J$.

proof. (1) The invertibility conditions (Q1-2) of 1.1(1) for $u = 1-z$, $v = 1-w$ become $1 - z = U_{1-z}(1-w) = (1 - 2z + z^2) - U_{1-z}(w) \iff U_{1-z}(w) = z^2 - z$, and $1 = U_{1-z}(1-w)^2 = U_{1-z}(1 - 2w + w^2) = U_{1-z}(2(1-w) - 1 + w^2) = 2(1-z) - (1 - 2z + z^2) + U_{1-z}(w^2)$ [by the above] $= 1 - z^2 + U_{1-z}(w^2) \iff U_{1-z}w^2 = z^2$. (2) is clear since quasi-invertibility is strictly between $z$ and $w$. (3) Since $z$ is q.i. iff $1 - z$ is invertible, the Invertible Criterion 1.1(3)(i)-(v) implies the equivalence of (i)-(v). For (ii) and (iii) use Congruence 1.4(2) for $T = U_{1-z}$; for (iv), since $U_{1-z}(1) = 1 - 2z + z^2$ the element 1 lies in the range of $U_{1-z}$ iff $2z - z^2$ does (and its preimage must be in $J$ by directness); (iv) is the special case $u = 0$ of (v), where $U_{1-z}(a) = 2z - z^2 - u$ iff $U_{1-z}(1 + a) = 1 - u$ is an invertible value of $U_{1-z}$. (4) Since $z$ is q.i. with quasi-inverse $w$ iff $1 - z$ is invertible with inverse $1 - w$ in $\hat J$, (i)-(iii) follow from Inverse Consequences 1.1(4) and (1). For (iv), note $2 = \{(1-z), (1-w)\} = 2 - 2w - 2z + \{z, w\} \iff \{z, w\} = 2(z + w)$, so $z^2 - z = U_{1-z}w$ [by the above] $= w - \{z, w\} + U_zw = w - 2w - 2z + U_zw$. Since $U_z(w) \in J$, this shows $w$ lies in $J$, not just $\hat J$.
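Again a hedged associative sketch of our own: in $A^+$ the existence conditions of 1.6(1) solve immediately for the quasi-inverse.

```latex
\[
U_{1-z}w = z^2 - z:\qquad (1-z)\,w\,(1-z) = -z(1-z)
\;\Longrightarrow\; w = -(1-z)^{-1}z,
\]
\[
1 - w = 1 + (1-z)^{-1}z = (1-z)^{-1}\big((1-z) + z\big) = (1-z)^{-1},
\]
% so 1 - qi(z) really is the inverse of 1 - z, as the definition demands.
```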
Exercise 1.6*. Show that $z^n$ q.i. implies $z$ q.i.; show the converse is false.
3. Proper Quasi-Invertibility
The Jacobson radical consists precisely of the properly quasi-invertible elements. In general, the adverb "properly" means "in all homotopes"; in Jordan theory we don't have a notion of "left multiple" $yz$, but the notion "$z$ in the $y$-homotope $J^{(y)}$" provides a workable substitute.

Definition 1.7 (Proper Quasi-Invertible). (1) A pair $(z, y)$ of elements in a Jordan algebra $J$ is called a quasi-invertible (q.i.) pair if $z$ is quasi-invertible in the homotope $J^{(y)}$, in which case its quasi-inverse is denoted $z^y = qi(z, y)$ and called the quasi-inverse of the pair. The quasi-U-operators which determine quasi-invertibility become, in the $y$-homotope, the Bergmann operators 1.3:
$$U^{(y)}_{1-z} = 1_J - V^{(y)}_z + U^{(y)}_z = 1_J - V_{z,y} + U_zU_y =: B_{z,y}.$$
Thus $(z, y)$ is q.i. iff $B_{z,y}$ is an invertible operator. (2) An element $z$ is called properly quasi-invertible (p.q.i. for short) if it remains quasi-invertible in all homotopes $J^{(y)}$ of $J$, i.e. all pairs $(z, y)$ are quasi-invertible and all $B_{z,y}$ are invertible. The set of properly quasi-invertible elements of $J$ is denoted $PQI(J)$. Proper quasi-invertibility is a stronger condition than mere quasi-invertibility:
$$z \text{ p.q.i.} \Longrightarrow (z, z) \text{ q.i.} \iff z^2 \text{ q.i.} \iff z, -z \text{ q.i.}$$
since [by Macdonald or III.5.7] $B_{z,z} = 1_J - V_{z,z} + U_zU_z = 1_J - V_{z^2} + U_{z^2} = U_{1-z^2} = U_{1-z}U_{1+z} = U_{1+z}U_{1-z}$ invertible $\iff$ $U_{1-z}, U_{1+z}$ invertible $\iff$ $1 - z, 1 + z$ invertible $\iff$ $z, -z$ q.i. (3) An element $z$ is called properly nilpotent (p.n. for short) if it remains nilpotent in all homotopes $J^{(y)}$ of $J$, i.e. all the operators $U^{(y)}_z = U_zU_y$ are nilpotent. The set of properly nilpotent elements
of $J$ is denoted $PN(J)$. Just as nilness is more restrictive than quasi-invertibility, proper nilpotence is more restrictive than proper quasi-invertibility:⁵
$$z \text{ p.n.} \Longrightarrow z \text{ p.q.i.}$$
We should think associatively that $B_{z,y}(x) \sim (1 - zy)x(1 - yz)$. Again, a good mnemonic device is to think of the negative of the quasi-inverse of the pair as a geometric series related to the associative quasi-inverses of $zy$ and $yz$; since the homotope is obtained by sticking a factor $y$ in the middle of all products, the geometric approach to quasi-inverses leads to
$$-qi(z, y) \sim z + zyz + zyzyz + \cdots = \sum_{n=1}^{\infty} z^{(n,y)} = z(1 + yz + yzyz + \cdots) = z(1 - yz)^{-1} = (1 - zy)^{-1}z \sim z + z(y + yzy + \cdots)z = z + U_z(-qi(y, z)).$$
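The geometric series above can be checked against the Existence condition of Theorem 1.8(1) in the associative model (our sketch):

```latex
\[
w := qi(z,y) = -(1-zy)^{-1}z = -z(1-yz)^{-1}
\]
\[
\Longrightarrow\; B_{z,y}(w) = (1-zy)\,w\,(1-yz) = -z(1-yz) = zyz - z = U_zy - z,
\]
% which is exactly U^{(y)}_{1-z}w = z^{(2,y)} - z transferred from the
% y-homotope, where z^{(2,y)} = U_z y = zyz.
```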
This helps us understand some of the facts about q.i. pairs.

Theorem 1.8 (Basic Q.I. Pair Facts). (1) Existence: $(z, y)$ is quasi-invertible in $J$ with quasi-inverse $w = qi(z, y)$ iff
$$B_{z,y}w = U_zy - z, \qquad B_{z,y}U_wy = U_zy.$$
(2) Extension: If $(z, y)$ is quasi-invertible in $J$ then it remains quasi-invertible (with the same quasi-inverse $qi(z, y)$) in any extension algebra $\tilde J \supseteq J$.
⁵There is a properly nil radical, the largest ideal all of whose elements are p.n. (the largest ideal inside $PN(J)$). The Köthe conjecture is that, as with the radical, the properly nil radical consists of the entire set $PN(J)$. Most ring-theorists believe the conjecture is false, but no one has been able to come up with a counter-example, even in associative algebras.
(3) Criterion: The following are equivalent for a pair of elements $(z, y)$: (i) $(z, y)$ is quasi-invertible in $J$; (ii) the Bergmann operator $B_{z,y}$ is an invertible operator on $J$; (iii) $B_{z,y}$ is surjective on $J$: $B_{z,y}(J) = J$; (iv) $2z - U_zy \in B_{z,y}(J)$; (v) $B_{z,y}(J)$ contains $2z - U_zy - u$ for some q.i. pair $(u, y)$; (vi) $\{z, y\} - U_zy^2 \in B_{z,y}(J)$; (vii) $\{z, y\} - U_zy^2$ is q.i. in $J$; (i*) $(y, z)$ is quasi-invertible in $J$; (ii*) the Bergmann operator $B_{y,z}$ is an invertible operator on $J$. (4) Consequences: If $(z, y)$ is quasi-invertible in $J$ with quasi-inverse $w = qi(z, y)$, then (i) the Bergmann operators $B_{z,y}, B_{w,y}$ are inverse operators; (i)$'$ the Bergmann operators $B_{y,z}, B_{y,w}$ are inverse operators; (ii) $w$ is uniquely determined as $w = B_{z,y}^{-1}(U_zy - z)$; (iii) $(w, y)$ is quasi-invertible with quasi-inverse $z = qi(w, y)$; (iv) $\{z, y, w\} = 2(z + w)$, $U_zU_yw = w + z + U_zy$, so again necessarily $w \in J$.

proof. We apply the Basic Q.I. Facts 1.6 in the homotope $J^{(y)}$, since in $J^{(y)}$ the operator $U_{1-z}$ becomes the Bergmann operator $B_{z,y}$, and the squares and braces are $x^{(2,y)} = U_xy$ and $\{x, z\}^{(y)} = \{x, y, z\}$. QI Existence 1.6(1) gives (1); QI Extension 1.6(2) gives (2) [for $y \in J \subseteq \tilde J$ we still have the inclusion $\tilde J^{(y)} \supseteq J^{(y)}$]. QI Criterion 1.6(3)(i)-(v) gives the equivalence of (3)(i)-(v), and QI Consequences 1.6(4) gives (4). An immediate consequence of Contagious Inversion 1.4(1) applied to $T = B_{z,y}$ is that q.i. pairs are symmetric: (ii) $\iff$ (ii*) and hence (i) $\iff$ (i*). To get the remaining equivalences with (vi), (vii) in (3) we work in the unital hull $\hat J$ instead of the hull of the homotope $\widehat{J^{(y)}}$. Note that the homotope of the hull $\hat J^{(y)}$ is not the same as the hull of the homotope $\widehat{J^{(y)}}$, since the former is usually not unital: 1 is not the unit, $U^{(y)}_1 = U_1U_y = U_y \neq 1_J$. This is a delicate point: $B$ is structural, even a $U$-operator, on the hull of the homotope $\widehat{J^{(y)}}$, but it is also structural (though not symmetric, $B^* \neq B$) on the hull of the original algebra $\hat J$. Set $B := B_{z,y}$, $B^* := B_{y,z}$, $x := \{z, y\} - U_zy^2$, so $B(1) = 1 - x$. $B$ is structural on the unital algebra $\hat J$ by Bergmann Structurality 1.3, and it is invertible iff it is surjective [on the module $\hat J$ or on the module $J$, by Congruent-to-$\hat 1$ 1.4(3)] by (3)(ii) $=$ (iii), so we can apply Contagious Inversion 1.4(1) to see
(I) $B(\hat a)$ invertible on $\hat J$ $\Longrightarrow$ $B, B^*, \hat a$ invertible.
Thus (ii) $\iff$ (ii) $+$ (ii*) $\iff$ $B, B^*$ invertible $\iff$ $U_{1-x} = U_{B(1)} = BU_1B^* = BB^*$ invertible [$\Rightarrow$ is clear, $\Leftarrow$ follows from (I) with $\hat a = 1$] $\iff$ $x$ q.i. $\iff$ (vii). Clearly (iii) $\Longrightarrow$ (vi); and (vi) $\iff$ $x = B(a)$ $\iff$ $1 = x + (1 - x) = B(a) + B(1) = B(1 + a) \in B(\hat J)$ $\Longrightarrow$ $B$ invertible [by (I)] $\Longrightarrow$ (iii). Thus all conditions in (3) are equivalent.

Theorem 1.9 (3 Basic Q.I. Pair Principles). (1) Symmetry Principle: Quasi-invertibility is symmetric: $(z, y)$ is quasi-invertible in $J$ iff $(y, z)$ is, in which case $qi(y, z) = U_y\,qi(z, y) - y$. (2) Structural Shifting Principle: If $T$ is a structural transformation on $J$, then it is a homomorphism $J^{(T^*(y))} \to J^{(y)}$ of Jordan algebras for any $y \in J$, and it can be shifted in quasi-invertible pairs: if $(z, T^*(y))$ is quasi-invertible, so is $(T(z), y)$, in which case $qi(T(z), y) = T(qi(z, T^*(y)))$. If $(T, T^*)$ is a structural pair, then $(T(z), y)$ is quasi-invertible iff $(z, T^*(y))$ is quasi-invertible. (3) Addition Principle: Quasi-invertibility is additive: if $x, y, z \in J$ with $(z, y)$ quasi-invertible, then $(x + z, y)$ is quasi-invertible iff $(x, -qi(y, z))$ is quasi-invertible:
$$B_{x+z,y} = B_{x,-qi(y,z)}\,B_{z,y} \qquad (\text{if } (z, y) \text{ q.i.}).$$
proof. (1) We noticed in constructing the Principal Inner Ideals III.5.9 the general formula
(i) $U_{y - U_yz} = U_yB_{z,y} = B_{y,z}U_y$,
which follows from Macdonald, since it involves only two elements, or directly from $U_y - U_{y,U_yz} + U_{U_yz} = U_y(1_J - V_{z,y} + U_zU_y)$ by Commuting III.5.7(FII) and the Fundamental Formula.
Basic Q.I. Pair Facts 1.8(3)(i), combined with formula (i), shows the symmetry (z,y) q.i. ⟺ (y,z) q.i., and then
The Radical
the formula for qi(y,z) follows from Q.I. Pair Consequences 1.8(4)(ii):
qi(y,z) = B_{y,z}^{-1}(y − U_y z) [by 1.8(4)(ii)] = B_{y,z}^{-1}( −U_y[U_z y − z] + [y − {y,z,y} + U_y U_z y] ) = B_{y,z}^{-1}( B_{y,z}[U_y qi(z,y)] + B_{y,z}[y] ) [by (i) above] = U_y qi(z,y) + y.
(2) T is a homomorphism since it preserves squares: T(x^{(2,T*(y))}) = T(U_x T*(y)) = U_{T(x)} y [by structurality 1.2(1)] = T(x)^{(2,y)}, therefore T takes q.i. elements z ∈ J^(T*(y)) to q.i. T(z) ∈ J^(y), hence takes quasi-inverses qi(z, T*(y)) to quasi-inverses qi(T(z), y) as in (2). Conversely, if T** = T then (T(z), y) q.i. ⟹ (y, T(z)) = (y, T**(z)) q.i. [by Symmetry] ⟹ (T*(y), z) q.i. [by the above shift applied to T*] ⟹ (z, T*(y)) q.i. [by Symmetry again].
(3) We have another general formula
(ii) B_{x,U_y w} = B^(y)_{x,w},
which follows from B_{x,U_y w} = 1_Ĵ − V_{x,U_y w} + U_x U_{U_y w} = 1_Ĵ − V^(y)_{x,w} + U^(y)_x U^(y)_w [by the Fundamental Formula] = B^(y)_{x,w} [by PQI Definition 1.7(1)]. For invertible u we have formulas
(iii) V_{U_u w, u^{-1}} = V_{u,w}; (iv) B_{U_u w, u^{-1}} = B_{u,w}; (v) B_{x,u^{-1}} U_u = U_{u−x}.
For (iii), on J = U_u J we have {U_u w, u^{-1}, U_u a} = U_u U_{w,a} U_u u^{-1} [Fundamental Formula] = U_u {w, u, a} = {u, w, U_u a} [by Commuting III.5.7(FII)]. For (iv), 1_Ĵ − V_{U_u w, u^{-1}} + U_{U_u w} U_{u^{-1}} = 1_Ĵ − V_{u,w} + U_u U_w [by (iii), the Fundamental Formula, and Inverse Consequences 1.1(2)]. For (v), U_{u−x} = U_{u−U_u w} [for w = U_{u^{-1}} x] = B_{u,w} U_u [by (i)] = B_{U_u w, u^{-1}} U_u [by (iv)] = B_{x,u^{-1}} U_u. Applying (v) to u = 1^(y) − z, u^{-1} = 1^(y) + qi(z,y) in Ĵ^(y) gives
B^(y)_{x, 1^(y)+qi(z,y)} U^(y)_{1^(y)−z} = U^(y)_{(1^(y)−z)−x} = U^(y)_{1^(y)−(z+x)};
by (ii) and 1.7(1) this converts to B_{x, y+U_y qi(z,y)} B_{z,y} = B_{x+z,y}, where y + U_y qi(z,y) = qi(y,z) by Symmetry (1), establishing the formula in part (3). Thus (x+z, y) q.i. ⟺ B_{x+z,y} invertible [by Criterion 1.8(4)(iii)] ⟺ B_{x,qi(y,z)} invertible [from the formula in part (3), since B_{z,y} is already assumed invertible] ⟺ (x, qi(y,z)) q.i.
Exercise 1.9* (1) Recall that T is weakly structural if there is a T* so that U_{T(x)} = T U_x T* just on J. Show that 1.9(1)-(2) hold for such T. (2) If (T, T*) is a structural pair, prove that T : J^(T*(y)) → J^(y) is a homomorphism of quadratic Jordan algebras with ½ held behind your back [i.e. prove T(U^(T*(y))_x z) = U^(y)_{T(x)} T(z) directly from structurality]; prove T B_{x,T*(y)} = B_{T(x),y} T for all x, y. (3) Use this
to verify formula 1.9(2) directly from 1.8(3), 1.8(4)(v) and 1.3(1), to see (z, T*(y)) q.i. ⟺ (T(z), y) q.i.
4. Elemental Characterization
Now we are ready to give our elemental characterization of the radical.
Theorem 1.10 (Elemental Characterization). (1) If K is a q.i. ideal of J then K ⊆ PQI(J) ⊆ QI(J). (2) The Jacobson radical coincides with the set of all properly quasi-invertible elements: Rad(J) = PQI(J), which therefore forms a structurally invariant ideal, the unique maximal q.i. ideal (containing all others). (3) Thus an algebra is semiprimitive iff it contains no nonzero properly quasi-invertible elements.
proof. (1) The second inclusion holds by the PQI Definition 1.7(2). The first inclusion holds since the elements of any q.i. ideal must actually be properly q.i.: z in a q.i. ideal K ⟹ all {z,y} − U_z y^2 ∈ K ⟹ all {z,y} − U_z y^2 q.i. ⟹ all (z,y) are q.i. [by Q.I. Pair Criterion 1.8(4)(vi)] ⟹ z is p.q.i. Thus the radical, and all q.i. ideals, are contained in PQI(J).
(2-3) We claim that PQI(J) itself is a structurally-invariant ideal, and therefore from the above is the maximal q.i. ideal Rad(J) as in (2), so that an algebra is semiprimitive iff PQI(J) = 0 as in (3). PQI(J) is closed under addition: if x, z are p.q.i. then for any y ∈ J, (z,y) quasi-invertible and (x, qi(y,z)) quasi-invertible implies (x+z, y) is quasi-invertible by the Addition Principle, so x+z is p.q.i. It is closed under structural transformations T: z p.q.i. ⟹ (z, T*(y)) quasi-invertible for all y ⟹ (T(z), y) is quasi-invertible for all y by Structural Shifting ⟹ T(z) is p.q.i. In particular, PQI(J) is invariant under scalar multiplications, so it is a linear subspace, and it is invariant under all outer multiplications T = U_a (a ∈ Ĵ), hence also all L_a = ½ V_a = ½ (U_{a+1} − U_a − 1_J), so it is an ideal.
Exercise 1.10* (1) Use the elemental characterization to show Rad(U_e J) = U_e J ∩ Rad(J) for any idempotent e. (2) Show that Rad(J) is invariant under multiplication by all elements γ of the centroid (all linear transformations γ with γ(xy) = γ(x)y = xγ(y)) [cf. III.1.11].
5. Radical Inheritance
This elemental characterization of the radical has several immediate consequences. It allows us to relate semiprimitivity to nondegeneracy, and to prove that the radical is hereditary, in the sense that the property of radicality (Rad(J) = J) is inherited by ideals.⁶ The useful version of the hereditary property is that the radical of an ideal (or Peirce inner ideal) is just the part inherited from the global radical.
Theorem 1.11 (Hereditary Radical). (1) Quasi-invertibility takes place in inner ideals: if an element of an inner ideal B is q.i. (resp. p.q.i.) in J then its quasi-inverse (resp. proper quasi-inverses) lie back in the inner ideal: B ∩ QI(J) ⊆ QI(B) (resp. B ∩ PQI(J) ⊆ PQI(B)). (2) We have equality for ideals or Peirce inner ideals: PQI(K) = K ∩ PQI(J) if K ◁ J; PQI(J_k(e)) = J_k(e) ∩ PQI(J) (k = 2, 0; e idempotent). (3) In particular, the radical remains radical in the unital hull: PQI(J) = J ∩ PQI(Ĵ). (4) These guarantee that ideals and Peirce inner ideals inherit radical-freedom: we have the Ideal and Peirce Inheritance Principles Rad(J) = 0 ⟹ Rad(K) = Rad(J_k(e)) = 0. (5) Semiprimitivity implies nondegeneracy: Rad(J) = 0 ⟹ J nondegenerate.
proof. (1) By Q.I. Existence 1.6(1) the quasi-inverse qi(z) = z + z^2 + U_z qi(z) lies in the inner ideal B if z does (thanks to our strongness condition that all inner ideals be subalgebras). The same holds for all qi(z,y) by applying this in the y-homotope, where B remains inner (cf. Q.I. Pair Existence 1.8(1)). (2) There are lots of inner ideals where the p.q.i. inclusion is strict (an upstanding algebra with no radical can have nilpotent inner ideals, which are all radical), but for ideals K or Peirce inner ideals J_2(e), J_0(e) we always have PQI(B) ⊆ PQI(J), since if z ∈ PQI(B), y ∈ J then in the ideal case the square y^{(2,z)} = U_y z ∈ K is q.i. in K^(z), therefore in J^(z), so by PQI Definition 1.7(2) y itself is q.i. in J^(z) too, and [by Symmetry 1.9(1)] z is p.q.i. in J; in the Peirce case similarly [for the Peirce projection T = T* = U_e or U_{1−e}] T(y) ∈ B is q.i. in B^(z), therefore in J^(z), so by Structural Shifting 1.9(2) y itself is q.i. in J^(T(z)) = J^(z) too [since z = T(z) ∈ J_k(e)], thus [by Symmetry 1.9(1)] z is p.q.i. in J. (3) is an immediate consequence of (2), since J is an ideal in Ĵ. (4) follows immediately from (2). For (5), any trivial z would be p.q.i. (even properly nilpotent, z^{(2,y)} = U_z y = 0 for all y), so z ∈ PQI(J) = Rad(J) = 0.
⁶It is a fact from general radical theory that radical-freedom Rad(J) = 0 is always inherited by ideals!
6. Radical Surgery
In general, for a "nice" property P of algebras (such as primitive or simple or prime) the P-radical P(J) is designed to isolate the "bad" part of the algebra, so that its surgical removal (forming the quotient algebra J/P(J), referred to as radical surgery) creates "semi-niceness": the radical is the smallest ideal whose quotient is semi-P [= a subdirect product of P-algebras]. Then an algebra is semi-P iff P(J) vanishes. In the case of the Jacobson radical, the niceness property is primitivity. Let us convince ourselves that removing the radical does indeed stamp out quasi-invertibility and create semiprimitivity.
Theorem 1.12 (Radical Surgery). The quotient of a Jordan algebra by its radical is semiprimitive: Rad(J/Rad(J)) = 0.
proof. There is no q.i. ideal Ī left in J̄ = J/Rad(J), because its preimage I would be q.i. in J [hence contained in the radical, and Ī = 0̄]: quasi-invertibility is a "recoverable" property in the sense that if K ⊆ I ⊆ Ĵ where both I/K and K have the property, then so did I to begin with. Indeed, any z ∈ I has 1̄ − z̄ invertible [z̄ lies in the q.i. Ī = I/K], so 1̄ = U_{1̄−z̄} x̄̂ for some x̂ ∈ Ĵ; then 1 = U_{1−z} x̂ + k in Ĵ for some k in the q.i. K, so 1 − k = U_{1−z} x̂ is invertible, hence 1 − z is too, and z is quasi-invertible.
So far we have an idea of semi-niceness, but we will have to wait till Chapter 6 to meet the really nice primitive algebras. Now we derive some relations between radicals that will be needed for the last step of our summit assault, the Prime Dichotomy Theorem 9.2.
Theorem 1.13 (Radical Equality). (1) Rad(J) contains all trivial elements, and no nonzero regular elements. (2) The nondegenerate radical is contained in the semiprimitive radical, M(J) ⊆ Rad(J). (3) A nondegenerate algebra with d.c.c. on open principal inner ideals is semiprimitive. (4) M(J) = Rad(J) if J has d.c.c. on open principal inner ideals.
proof. Set R := Rad(J), M := M(J) for brevity. (1) If z is trivial, U_z = 0, then z^{(2,y)} = U_z y = 0 for all y, and z is properly nilpotent (hence properly quasi-invertible), so lies in R. If x = U_x y is von Neumann regular (v.N.r. as in III.18.1), then B_{x,y}(x) = x − {x,y,x} + U_x U_y x = x − 2U_x y + U_x U_y (U_x y) = x − 2x + U_{U_x(y)} y [by the Fundamental Formula] = −x + U_x y = 0. But then B_{x,y} kills x, and is not invertible if x ≠ 0, (x,y) is not q.i. [by Q.I. Facts 1.8(3)(ii)], x is not p.q.i., and x is not in R [by the Elemental Characterization 1.10]. (2) By Radical Surgery 1.12, J̄ := J/R is a semiprimitive algebra, Rad(J̄) = 0̄, and so by (1) J̄ has no trivial elements. But then R creates nondegeneracy by surgery, hence contains the smallest ideal M with that property. (3) will follow as the case J′ = J of the following general result (needed for Dichotomy 9.2).
Lemma 1.14 (Radical Avoidance). If J ⊆ J′ where J is nondegenerate and J′ has d.c.c. on open principal inner ideals U_x J′, then J ∩ Rad(J′) = 0.
proof. If not, choose an open inner ideal (x) = U_x J′ minimal among those determined by 0 ≠ x ∈ J ∩ Rad(J′) (such exists thanks to the d.c.c.). By nondegeneracy of J, x is not trivial and there exists 0 ≠ y ∈ U_x J ⊆ J ∩ Rad(J′) [since the ideal Rad(J′) is closed under inner multiplication]. But (y) = U_y J′ ⊆ U_x U_J U_x J′ ⊆ U_x J′ = (x) implies (y) = (x) by minimality, so y ∈ U_x J ⊆ (x) = (y) is a nonzero regular element contained in Rad(J′), contradicting (1).
Returning to the proof of Radical Equality, (4) follows from (3) by M-radical surgery: the reverse inclusion R ⊆ M to (2) will follow since the q.i. ideal R/M vanishes in the nondegenerate algebra J/M by (3) [note any quotient J̄ = J/I inherits the open principal d.c.c. from J, since U_x̄ J̄ is the image of U_x J].
7. Problems and Questions for Chapter 1 Let A be an associative (real or complex) Banach algebra (a complete normed vector space with norm kxk satisfying kxk = kkkxk; kx + yk kxk + kyk; kxyk kxk kyk). (1) P Show that every element of norm kxk < 1 is quasi1 n invertible, with (1 x) = 1 n=0 x . (2) Show in a Banach algebra that lies in the resolvent of x (i.e. 1 x is invertible) for all with > kxk. (3) Show more generally that the set of invertible elements is an open subset: if u is invertible then so are all v with ku vk < ku 1 k 1. Problem 1*.
Problem 2*. (Amitsur's Polynomial Trick) If x ∈ J is such that the element tx is quasi-invertible in the Jordan algebra J[t] of polynomials in the scalar indeterminate t (J[t] = Φ[t] ⊗_Φ J), show that x must be nilpotent in J: the quasi-inverse must be the geometric series (1 − tx)^{-1} = Σ_{n≥0} t^n x^n, and since this exists as a finite polynomial we must eventually have x^n = 0.
Problem 3. (1) If u, v are inverses in a unital Jordan algebra show the operators U_u, U_v, V_u, V_v all commute, hence all operators U_{p(u)}, V_{q(u)} commute with all U_{r(v)}, V_{s(v)} for polynomials p, q, r, s; use ½ ∈ Φ to show the subalgebra Φ[u, v] is a commutative associative linear algebra. (2) If z, w are quasi-inverses in a Jordan algebra show the operators U_z, U_w, V_z, V_w all commute, hence all operators U_{p(z)}, V_{q(z)} commute with all U_{r(w)}, V_{s(w)} for polynomials p, q, r, s, and the subalgebra Φ[z, w] is a commutative associative linear algebra.
Problem 4*. (1) For quadratic Jordan algebras the argument of Elemental Characterization 1.10 establishes that PQI(J) is an outer ideal, but it requires more effort to prove it is also inner. One needs a 4th Principle for q.i. pairs, the (IV) Homotope Shifting Principle: show (x, U_z y) is quasi-invertible in J iff (x, y) is quasi-invertible in J^(z) iff (w, z) is quasi-invertible in J for w := {x, z, y} − U_y U_z U_x z. (2) Use this Principle to show Rad(J^(z)) = {x ∈ J | U_z x ∈ Rad(J)}. (3) Show J is radical iff all J^(z) are radical.
Problem 5*. (1) If u is invertible in a unital J, show that u − x is invertible ⟺ x is q.i. in J^(u^{-1}); conclude that x p.q.i. in a unital J implies u − x invertible for all invertible u. (2) Show x p.q.i., y q.i. in J ⟹ x + y q.i. in J. (3) Conclude x, y p.q.i. in J ⟹ x + y p.q.i. in J.
Problem 6. (1) Show Rad(J^(z)) = {w ∈ J | U_z w ∈ Rad(J)}. (2) Show J is radical (Rad(J) = J) iff all J^(z) are radical too.
Problem 7*. A useful sidekick of Macdonald and Shirshov-Cohn is Koecher's Principle, which says that if a homogeneous Jordan polynomial f(x_1, …, x_n) vanishes whenever the x_i are invertible elements of unital Jordan algebras, then f vanishes for all elements x_i in all Jordan algebras. (1) Establish Koecher's Principle. (2) Show that Koecher's Principle follows from Zariski Density for finite-dimensional algebras over an infinite field.
Problem 8*. Bergmann Structurality 1.3 has an easy proof using Koecher's Principle. (1) Show as in 1.7 that B_{β,x,y} = U^(y)_{β1^(y)−x} for 1^(y) the formal unit for Ĵ^(y) = Φ1^(y) ⊕ J, and as in the proof of 1.8(3)(i) that B_{β,x,y} U_x = U_x B_{β,y,x}. (2) Show [U_{B_{β,x,y}(z)} − B_{β,x,y} U_z B_{β,y,x}] U_y = 0 by the Fundamental Formula in Ĵ^(y). (3) Show that the Bergmann structural identity holds for all z ∈ U_x J. (4) Conclude that the Bergmann structural identity holds whenever x or y is invertible. Use Koecher's Principle to show the identity holds for all elements of all Jordan algebras.
Question 1. Is Koecher's Principle valid for nonhomogeneous f (so that any f which vanishes on invertible elements vanishes on all elements)?
Question 2. The Zariski topology (where the closed sets are the zero-sets of polynomial functions, so nonempty open sets are always dense) is usually defined only for finite-dimensional vector spaces over an infinite field. Can the restriction of finite-dimensionality be removed?
CHAPTER 2
Begetting and Bounding Idempotents
In this chapter we give some natural conditions on an algebra which guarantee it has a finite capacity. These are mild enough that they will automatically be satisfied by primitive i-exceptional algebras over big fields, and so will serve to cast such algebras back to the classical theory, where we know they must be Albert algebras. First we formulate axiomatically the crucial property of having a rich supply of idempotents, and show that this property can be derived from algebraicness rather than a finiteness condition. Once we are guaranteed a rich supply of idempotents, our second step is to insure that we don't have too many: all things in moderation, even idempotents. The reasonable restriction is that there be no infinite family of orthogonal idempotents. Thirdly, when we mix in semiprimitivity, these three conditions are sufficient to produce a capacity.
1. I-gene
An associative algebra is called I-genic, or an I-algebra, if every non-nilpotent element generates an idempotent as a left multiple, equivalently if every non-nil left ideal contains an idempotent. In Jordan algebras we don't have left or right, but we do have inner and outer.
Definition 2.1 (I-Genic). (1) An algebra is I-genic (idempotent-generating) if every non-nilpotent element b has an idempotent inner multiple, i.e. generates a nonzero idempotent in its principal inner ideal: b not nilpotent ⟹ (b] = U_b Ĵ contains an idempotent e ≠ 0. (2) This is equivalent to the condition that every non-nil inner ideal contains a nonzero idempotent: B not nil ⟹ B contains an idempotent e ≠ 0. Indeed, the second condition implies the first, since if b is not nilpotent then B = (b] is not nil, since it contains the non-nilpotent b^2, so contains an idempotent. Conversely, the first implies the second, since any non-nil inner ideal B contains a non-nilpotent b, and e ∈ (b] implies e ∈ B.
Exercise 2.1A* An associative algebra A is called a left-I-algebra if every non-nilpotent element x ∈ A has a nonzero idempotent left multiple e = ax; dually for right-I-algebra. (1) Show that A is a left-I-algebra iff every non-nil left ideal contains a nonzero idempotent. (2) Show that A is a left-I-algebra iff it is a right-I-algebra; show this happens iff A⁺ is I-genic. (3) Does left-I imply inner-I?
Exercise 2.1B* Show (b) ⊆ (b] ⊆ [b], (b) ⊆ [b) ⊆ [b], U_{[b]}[b] ⊆ (b) for the principal inner ideals (b) := U_b(J), (b] := U_b(Ĵ), [b] := Φb + U_b(Ĵ) and the weak inner ideal [b) := Φb + U_b(J) (cf. III.5.9). Show that we could have defined I-gene using weak inner ideals instead of all inner ones, by showing the following are equivalent: there is a nonzero idempotent e in (1) every non-nil weak inner ideal; (2) every non-nil inner ideal; (3) every (b); (4) every (b]; (5) every [b); or (6) every [b], for non-nilpotent b.
I-gene need not be inherited by all subalgebras: if Φ[b] inherited I-gene it would imply b generated an idempotent which is a polynomial in b, which is too strong a condition, too close to algebraicness. But I-gene is inherited by inner ideals.
Lemma 2.2 (I-gene Inheritance). In a Jordan algebra J, any principal inner ideal of an inner ideal is again inner: b ∈ B inner in J ⟹ U_b(B̂) is inner in J. I-gene is inherited by all inner ideals: B inner in I-genic J ⟹ B is I-genic.
proof. The first follows from the invariance of inner ideals under structural transformations (Structural Innerness III.18.6), or directly from the Fundamental Formula: U_{U_b(B̂)}(Ĵ) = U_b U_{B̂} U_b(Ĵ) ⊆ U_b U_{B̂}(B) [by innerness of B] ⊆ U_b(B̂). [An arbitrary inner ideal of B need not remain inner in J, for example B_0 = ΦE_{12} in B = ΦE_{12} in J = M_2(Φ).] For the second part, if b ∈ B is non-nilpotent then the B-principal inner ideal (b]_B = U_b(B̂) ⊆ B is not nil (it contains b^2) and is inner in J by the first part, so by the I-Genic Condition 2.1(2) in J, it contains an idempotent.
2. Algebraic implies I-genic
I-genic algebras are precisely those that are rich in idempotents, the very stuff of the classical approach. A crucial fact for Zel'manov's approach is that algebraicness, a condition of "local" finiteness at each individual element (every subalgebra Φ[x] generated by one element is finite-dimensional over Φ), suffices to produce the required idempotents.
Definition 2.3 (Algebraic). An element x of a Jordan Φ-algebra is algebraic if it satisfies a monic polynomial with zero constant term, p(x) = 0 for some p(t) = t^n + α_{n−1} t^{n−1} + … + α_1 t ∈ tΦ[t]. For unital algebras the condition is equivalent to the condition that it satisfy a monic polynomial q(x) = 0 for some q(t) = t^n + α_{n−1} t^{n−1} + … + α_0 1 ∈ Φ[t], since if x satisfies a general q(t) it also satisfies p(t) = t q(t) with zero constant term. Algebraicness just means some power x^n can be expressed as a Φ-linear combination of lower powers. Notice in particular that nilpotent elements are always algebraic, as are idempotents. Over a field it suffices if x satisfies a nonzero polynomial, since any nonzero polynomial over a field can be made monic.
Proposition 2.4 (Algebraic I). Every algebraic algebra over a field is I-genic.
proof. If b is non-nilpotent then there is a nonzero idempotent in the subalgebra C = Φ[c]_0 ⊆ (b] of polynomials with zero constant term in c = b^2 = U_b 1̂ ∈ (b] [which is non-nilpotent since b is]: C is a finite-dimensional commutative associative algebra which is not nil, since it contains c, so by standard associative results it contains an idempotent.
[Or directly: by finite-dimensionality the decreasing chain of subalgebras C_n = Ĉ c^n eventually stabilizes, and when C_n = C_{2n} we have c^n = a c^{2n} = a c^n c^n for some a ∈ Ĉ; e := a c^n ∈ C has c^n e = e c^n = c^n ≠ 0, so e ≠ 0, and e^2 = a c^n e = a c^n = e is an idempotent.]
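The bracketed stabilization argument can be run concretely in an associative matrix algebra. The sketch below is illustrative only (the matrix c and the use of a pseudoinverse to solve c^n = a c^{2n} are my choices, not the text's); for a diagonalizable c the resulting e = a c^n is the spectral projection onto the invertible part of c.

```python
import numpy as np

c = np.diag([2.0, 3.0, 0.0])     # non-nilpotent, but also not invertible
n = c.shape[0]                   # for an n x n matrix, the chain C_n stabilizes by step n

cn = np.linalg.matrix_power(c, n)
c2n = np.linalg.matrix_power(c, 2 * n)
a = cn @ np.linalg.pinv(c2n)     # solves c^n = a c^{2n}  (a is a polynomial in c here)
e = a @ cn                       # e := a c^n

assert np.allclose(e @ e, e)     # e is idempotent ...
assert np.linalg.norm(e) > 0     # ... and nonzero
assert np.allclose(e @ c, c @ e) # e commutes with c
assert np.allclose(e, np.diag([1.0, 1.0, 0.0]))  # the spectral projection
```

The assertions verify exactly the three facts used in the proof: e² = e, e ≠ 0, and e lies in the commutative algebra generated by c.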
Exercise 2.4 An element y is von Neumann regular (a vNr, cf. III.18.1) if y ∈ (y), and doubly vNr if y = U_{y^2} a ∈ (y^2), in which case by III.18.3 e = U_{y^2} U_a y^2 is a nonzero idempotent in (y). (1) If (x^m) = (x^{4m}) show y = x^{2m} is doubly vNr. (2) Prove Morgan's Theorem that if J has d.c.c. on the open principal inner ideals {(x^m)}, then J is I-genic. In particular, any Jordan algebra with d.c.c. on inner ideals is I-genic.
3. I-genic Nilness
In I-genic algebras, proper quasi-invertibility shrinks to nilness.
Proposition 2.5 (Radical Nilness). (1) The Jacobson radical of an I-genic algebra reduces to the nil radical, Rad(J) = Nil(J). (2) In particular, this holds for algebraic algebras over fields. More generally, as soon as an individual element of the Jacobson radical is
algebraic, it must be nilpotent: x ∈ Rad(J) algebraic over a field ⟹ x is nilpotent. (3) Indeed, if we call an element co-algebraic if it satisfies a co-monic polynomial (one whose lowest term has coefficient 1), then for an arbitrary ring of scalars Φ we have: x ∈ Rad(J) co-algebraic over any Φ ⟹ x is nilpotent.
proof. (1) If x ∈ Rad(J) were NOT nilpotent then by I-gene it would generate a nonzero idempotent e ∈ (x) ⊆ Rad(J), so e would be quasi-invertible; but nonzero idempotents are NEVER quasi-invertible, 1 − e is NEVER invertible, because it kills its brother e: U_{1−e}(e) = 0. (3) Whenever we have a co-algebraic relation x^n + α_1 x^{n+1} + … + α_m x^{n+m} = 0 we have 0 = x^n (1 + α_1 x^1 + … + α_m x^m) = x^n (1 − z) for z a polynomial in x with zero constant term, so if x is radical so is z, hence 1 − z is invertible. But then [by power associativity in the commutative associative subalgebra Φ[x] with ½ ∈ Φ] we also have U_{1−z} x^n = (1 − z) x^n (1 − z) = 0, and we can cancel U_{1−z} to get nilpotence x^n = 0. (2) follows from (3) since if x is algebraic over a field it is co-algebraic [we can divide by the lowest nonzero coefficient in the polynomial relation to make it co-monic].
4. I-Finiteness
The crucial finiteness condition turns out to be the rather mild one that there not be an infinite family of orthogonal idempotents.
Definition 2.6 (I-Finite). (1) An algebra J is idempotent-finite or I-finite if it has no infinite family e_1, e_2, … of nonzero orthogonal idempotents. (2) We order idempotents by g ≥ f ⟺ g • f = f ⟺ U_g f = f ⟺ f ∈ J_2(g); equivalently, g is built by adjoining to f an orthogonal idempotent e: g = f + e for e := g − f ∈ J_0(f) ∩ J_2(g). (3) I-finiteness is equivalent to the a.c.c. on idempotents, since strictly increasing families of idempotents in J are equivalent to nonzero orthogonal families: f_1 < f_2 < … is a strictly increasing family (f_{i+1} = f_i + e_i) ⟺ e_1, e_2, … is a nonzero orthogonal family, 0 ≠ e_i = f_{i+1} − f_i ∈ J_0(f_i) ∩ J_2(f_{i+1}).
Note that e_i, e_j are orthogonal if i < j, since e_i ∈ J_2(f_{i+1}) ⊆ J_2(f_j) but e_j ∈ J_0(f_j). (4) I-finiteness implies the d.c.c. on idempotents too, since decreasing families are equivalent to bounded orthogonal families: g_1 > g_2 > … is a strictly decreasing family (g_i = g_{i+1} + e_{i+1}) ⟺ e_1, e_2, … are nonzero orthogonal, bounded above by g_1: 0 ≠ e_{i+1} = g_i − g_{i+1} ∈ J_2(g_1) ∩ J_2(g_i) ∩ J_0(g_{i+1}). Notice that I-finiteness is an "elemental" condition, so it is automatically inherited by all subalgebras (whereas I-gene was inherited only by inner ideals).
The next step shows that the classical niteness condition, having a capacity, is a consequence of our generating property (I -gene) and our bounding property (I - niteness). Theorem 2.7 (I -Finite Capacity). A semiprimitive Jordan algebra which is I -genic and I - nite necessarily has a capacity. proof. (cf. III.20.3) Semiprimitivity implies the nil radical vanishes by Radical De nition 1.5(4), in particular J can't be entirely nil, so by I -gene nonzero idempotents exist. From the d.c.c. on idempotents we get minimal idempotents e; we claim these are necessarily division idempotents. Any non-nilpotent element b of J2 (e) generates a nonzero idempotent, which by minimality must be e; from b = Ue b we get (without explicitly invoking any Peirce facts from III) Ub = Ue Ub Ue ; Ub Ue = Ub , so e 2 (b] = Ub (Jb) = Ub Ue (Jb) = Ub (J2 (e)) and by Invertibility Criterion 1.1(4)(iii) b is invertible in J2 (e): Thus every element of J2 (e) is invertible or nilpotent. We claim by semiprimitivity there can't be any nilpotent elements at all in J2 (e): If z is nilpotent, x; y arbitrary in J2 (e), then Ux Uz y can't be invertible (recall Ua b is invertible i both a; b are invertible), so it must be nilpotent, so the operator UUx Uz y = (Ux Uz Uy )(Uz Ux ) is nilpotent too; now in general ST is nilpotent i T S is, so (Uz Ux )(Ux Uz Uy ) = UUz (x2 ) Uy = Uz(0y) is nilpotent also, hence z 0 := Uz (x2 ) is nilpotent in the y -homotope for every y , i.e. is properly nilpotent and hence properly quasi-invertible, so by the Elemental Characterization 1.10(2) it lies in the radical. But J2 (e) inherits semiprimitivity by Hereditary Radical 1.11(4), so z 0 = 0: Linearizing x ! y; e in Uz (x2 ) = 0 gives Uz (2e y ) = 0, and since 21 2 we have triviality Uz y = 0: But then z follows in the same footsteps as z 0 : it is nilpotent z (2;y) = 0 (hence q.i.)
in every homotope, hence is p.q.i., so falls into the radical and disappears: z = 0: [Alternately, as soon as we reach triviality Uz y = 0 for all y we get
z = 0 since J2 (e) inherits nondegeneracy too by Hereditary Radical 1.11(4-5).]
Thus there are no nilpotent elements after all: every element of J2 (e) is invertible, J2 (e) is a division algebra, and e is a division idempotent. Once we get division idempotents, we build an orthogonal family of them reaching up to 1: if e = e1 + : : : + en is maximal among all idempotents which are nite sums of division idempotents ei (such exists by the a.c.c.), we claim J0 (e) = 0: Indeed, J0 (e) inherits the three hypotheses: semiprimitive [by Hereditary Radical 1.11(4) again], I -gene [by I -Gene Inheritance 2.2 since J0 (e) is an inner ideal], and I - nite [since J0 (e) is a subalgebra]. Thus if it were nonzero we could as above nd a nonzero division idempotent en+1 2 J0 (e) orthogonal to all the others, and e + en+1 > e would contradict maximality of e: Thus we must have J0 (e) = 0, in which case e is the unit by the Idempotent Unit Lemma III.10.2 [stating that i J is nondegenerate and e is an idempotent with nothing orthogonal to it, J0 (e) = 0, then e = 1 is the unit for J]. Therefore 1 = e1 + : : : + en for division idempotents ei , and we have reached our capacity.
5. Problems for Chapter 2 A propos of 2.7, show that nilpotence is symmetric in any associative algebra A: for elements x; y the product xy is nilpotent i the product yx is. Show that x is nilpotent in the homotope Ay (where a y b := ayb) i xy is nilpotent, so an element is properly nilpotent (nilpotent in all homotopes) i all multiples xy (equivalently, yx) are nilpotent. Conclude that any element of a nil one-sided ideal is properly nilpotent, hence a nil ideal I is always properly nil: I PNil(A) (just as a q.i. ideal is always properly q.i.). Problem 1.
CHAPTER 3
Bounded Spectra Beget Capacity
The next stage in the argument shows that having a capacity is a consequence, for algebras over a suitably big field, of having a finite bound on the spectra of all the elements: the local finiteness condition of algebraicness appears as soon as we have a finiteness condition on spectra. In this chapter we will adopt an ambivalent attitude to unital hulls: our basic unital hull will be denoted J̃ and its unit 1̃, and will be taken to be J̃ := J if the original algebra J is already unital, and J̃ := Ĵ the formal unital hull if J is not unital. This distinction matters only when deciding whether 0 belongs to a spectrum.
1. Spectra
Let us recall the definitions, familiar to us from the theory of operators on Hilbert space. Shimshon Amitsur was once teaching an analysis course on differential equations when he realized that the notion of spectrum and resolvent made perfectly good sense in algebra, and was related to quasi-invertibility.
Definition 3.1 (Spectra). (1) If J is a Jordan algebra over a field Φ, the Φ-spectrum Spec_{Φ,J}(z) of an element z ∈ J is the set of scalars λ ∈ Φ such that λ1̃ − z is not invertible in the unital hull J̃, equivalently the principal inner ideal U_{λ1̃−z}(J̃) can be distinguished from J̃:
Spec_{Φ,J}(z) := {λ ∈ Φ | λ1̃ − z not invertible in J̃} = {λ ∈ Φ | U_{λ1̃−z}(J̃) ≠ J̃}.
The complement of the spectrum is the Φ-resolvent Res_{Φ,J}(z) of z, the set of λ ∈ Φ so that λ1̃ − z is invertible in the unital hull. The spectrum can also be defined by the action of λ1̃ − z on J itself: Spec_{Φ,J}(z) = {λ ∈ Φ | U_{λ1̃−z}(J) ≠ J}. This is trivial in the unital case J̃ = J; and in the nonunital case J̃ = Ĵ it is trivial for λ = 0, since U_{−z}(Ĵ) ⊆ J is never Ĵ and U_{−z}(J) is never J [by Surjective Unit III.18.4 equality would force J to be unital]; for λ
≠ 0 in the field we can replace λ1̂ − z by 1̂ − λ^{-1}z, where surjectivity on Ĵ and J coincide [by Congruent-to-1̂ 1.4(3) applied to T = U_{1̂−λ^{-1}z}].
(2) The spectrum and resolvent are independent of ideal extensions: if z is an element of an ideal I of J then Spec_{Φ,I}(z) = Spec_{Φ,J}(z) and Res_{Φ,I}(z) = Res_{Φ,J}(z).
(3) Spectrum and resolvent are intimately related to quasi-invertibility: for λ = 0, 0 ∈ Res_{Φ,J}(z) ⟺ z is invertible, 0 ∈ Spec_{Φ,J}(z) ⟺ z is not invertible; for λ ≠ 0, λ ∈ Res_{Φ,J}(z) ⟺ λ^{-1}z is q.i., λ ∈ Spec_{Φ,J}(z) ⟺ λ^{-1}z is not q.i. Note that in the non-unital case J̃ = Ĵ, no z ∈ J is ever invertible, so always 0 ∈ Spec_{Φ,J}(z).
(4) If f is a Jordan polynomial which does not vanish strictly on J, the f-spectrum f-Spec_{Φ,J}(z) of z is the set of scalars λ such that the inner ideal U_{λ1̃−z}(J) can be distinguished from J by the strict vanishing of f:
f-Spec_{Φ,J}(z) := {λ ∈ Φ | U_{λ1̃−z}(J) satisfies f strictly} ⊆ Spec_{Φ,J}(z).
Here we say a Jordan polynomial f(x_1, …, x_n) vanishes strictly on J if f and all its linearizations f′ vanish on J: f′(a_1, …, a_n) = 0 for all a_i ∈ J. We often use these concepts when J is a Jordan algebra over a ring of scalars, and the field in question is some subfield of it. When the field is understood we leave it out of the notation, writing Spec_J(z) etc.
Exercise 3.1A* Verify 3.1(3) as follows. (1) [λ = 0] Show z ∈ J is invertible in J̃ iff z is invertible in J̃ = J. (2) [λ = 1] Show 1̃ − z is invertible in J̃ (there exists w̃ ∈ J̃ with U_{1̃−z} w̃ = 1̃ − z, U_{1̃−z} w̃^2 = 1̃) iff there is w ∈ J with U_{1−z} w = z − z^2, U_{1−z} w^2 = z^2. (3) [λ ≠ 0] Show λ1̃ − z is invertible iff there is w ∈ J with U_{λ1̃−z} w = λz − z^2, U_{λ1̃−z} w^2 = z^2. (4) Show Spec_B(z) ⊇ Spec_J(z), Res_B(z) ⊆ Res_J(z) for all elements z of all subalgebras B of J. (5) Show Spec_I(z) = Spec_J(z), Res_I(z) = Res_J(z) for all elements z of all ideals I of J.
Exercise 3.1B Let J be a unital Jordan algebra over a field Φ. (1) If z ∈ J is nilpotent, show 1 − z is invertible with inverse given by the (short) geometric series; show λ1̃ − z is invertible iff λ ≠ 0. Conclude Spec_{Φ,J}(z) = {0}. (2) If 1 = e_1 + … + e_n for orthogonal idempotents e_i, show that x = Σ_i x_i (x_i ∈ J_2(e_i)) is invertible in J iff each x_i is invertible in J_2(e_i); show the spectrum of any x = Σ_i (λ_i e_i + z_i) (z_i ∈ J_2(e_i) nilpotent) is Spec_{Φ,J}(x) = {λ_1, …, λ_n}.
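Exercise 3.1B(2) can be illustrated in the special Jordan algebra M_4(R)^+, where λ1 − x is Jordan-invertible exactly when it is invertible as a matrix; the block sizes and eigenvalues below are illustrative choices, not from the text.

```python
import numpy as np

# two Peirce blocks: e_1 = diag(1,1,0,0), e_2 = diag(0,0,1,1)
x = np.zeros((4, 4))
x[:2, :2] = 2.0 * np.eye(2) + np.array([[0, 1], [0, 0]])   # lambda_1 = 2 plus nilpotent z_1
x[2:, 2:] = 5.0 * np.eye(2)                                # lambda_2 = 5, z_2 = 0

# the spectrum consists of those lambda with lambda*1 - x singular
candidates = (0.0, 1.0, 2.0, 5.0)
spectrum = {lam for lam in candidates
            if abs(np.linalg.det(lam * np.eye(4) - x)) < 1e-9}
assert spectrum == {2.0, 5.0}

# cross-check against the matrix eigenvalues
assert set(np.round(np.linalg.eigvals(x)).real) == {2.0, 5.0}
```

The nilpotent part z_1 fattens the Jordan block but, exactly as in part (1) of the exercise, contributes nothing new to the spectrum.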
2. Bigness

To a finite-dimensional algebra, the real numbers look really big; Amitsur was the first to discover that amazing things happen whenever you work over a really big field.

Definition 3.2 (Big). If $J$ is a Jordan algebra over a field $\Phi$, a set of scalars $\Omega_0 \subseteq \Phi$ will be called big (with respect to $\Phi$ and $J$) if its cardinality is infinite and greater than the dimension of the algebra: $|\Omega_0|$ is infinite, and $|\Omega_0| > \dim_\Phi(J)$. Because of this infiniteness, we still have $|\Omega_0 \setminus \{0\}| \geq |\Omega_0| - 1 > \dim_\Phi(\tilde{J})$. When $\Phi$ and $J$ are understood, we simply say $\Omega_0$ is big. We will be particularly interested in the case where $J$ is a Jordan algebra over a big field in the sense that $\Omega_0 = \Phi$ is itself big relative to $J$. Because we demand that a big $\Omega_0$ be infinite, it won't be fazed by the fact that $\dim_\Phi(\tilde{J})$ may be 1 larger than $\dim_\Phi(J)$, even if we take 1 away: the number of nonzero elements in $\Omega_0$ is (at least) $|\Omega_0| - 1$, and this is strictly bigger than the dimension of the unital hull (which is where the resolvent inverses $(\lambda\tilde{1} - z)^{-1}$ live).

Theorem 3.3 (Amitsur's Big Resolvent Trick). (1) If $J$ is a Jordan algebra over a field $\Phi$, any element $z \in J$ with big resolvent is algebraic:

$$|\mathrm{Res}(z)| \text{ big} \implies z \text{ algebraic};$$

and an element with small (for example, finite) spectrum over a big field has big resolvent and is algebraic:

$$|\mathrm{Spec}_{\Phi,J}(z)| < |\Phi| \text{ big} \implies z \text{ algebraic}.$$

(2) In particular, the radical of a Jordan algebra over a big field is always nil and coincides with the set of properly nilpotent elements:

$$\mathrm{Rad}(J) = \mathrm{Nil}(J) = \mathrm{PNil}(J) = \mathrm{PQI}(J) \qquad (\Phi \text{ a big field}).$$

Proof.
(1) The reason a big resolvent forces algebraicness is beautifully simple: bigness 3.2 implies $|\mathrm{Res}_{\Phi,J}(z)| > \dim_\Phi(\tilde{J})$, and there are too many inverses $x_\lambda := (\lambda\tilde{1} - z)^{-1}$ in $\tilde{J}$ ($\lambda \in \mathrm{Res}_{\Phi,J}(z)$) for all of them to be linearly independent in $\tilde{J}$ over $\Phi$. Thus there must exist a nontrivial linear dependence relation over $\Phi$ among distinct $x_{\lambda_i}$. By clearing denominators this gives a nontrivial algebraic dependence relation for $z$:

$$\sum_i \beta_i x_{\lambda_i} = 0 \implies 0 = \Big(\prod_k L_{\lambda_k\tilde{1}-z}\Big)\Big(\sum_i \beta_i (\lambda_i\tilde{1}-z)^{-1}\Big) = \sum_i \beta_i \prod_{k\neq i}(\lambda_k\tilde{1} - z) = p(z)$$

for $p(t) = \sum_i \beta_i \prod_{k\neq i}(\lambda_k - t)$. The reason this is nontrivial is that if $p(t) = 0$ is the zero polynomial then for each $j$ we have $0 = p(\lambda_j) = \sum_i \beta_i \prod_{k\neq i}(\lambda_k - \lambda_j) = \beta_j \prod_{k\neq j}(\lambda_k - \lambda_j)$, hence $\beta_j = 0$ by the distinctness of the $\lambda$'s, contradicting the nontriviality of the original linear dependence relation.

If $z$ has small spectrum over a big field, then its resolvent must be big because it has the same size as $|\Phi|$: $|\mathrm{Res}_{\Phi,J}(z)| = |\Phi| - |\mathrm{Spec}_{\Phi,J}(z)| = |\Phi|$ [because $|\mathrm{Spec}_{\Phi,J}(z)| < |\Phi|$ and $|\Phi|$ is infinite].

(2) If $z \in \mathrm{Rad}(J)$ then it is p.q.i. by the Elemental Characterization 1.10(2), hence all scalar multiples $\lambda^{-1}z$ are q.i., so by 3.1(2-3) all $\lambda \neq 0$ fall in the resolvent, and $|\mathrm{Res}(z)| \geq |\Phi \setminus \{0\}| = |\Phi|$ is big. By (1) this forces $z$ to be algebraic, and then by Radical Nilness 2.5(2) to be nilpotent. In view of Radical Definition 1.5(4) this gives the equality of the Jacobson and nil radicals. For the last equality, we always have $\mathrm{PNil}(J) \subseteq \mathrm{PQI}(J)$ since in any homotope nil implies q.i.; conversely, if $z$ is p.q.i. it remains p.q.i. in each homotope $J^{(x)}$ [it is q.i. in $J^{(U_x y)} = (J^{(x)})^{(y)}$ for all $y$], which is still an algebra over a big field, so by the first equality $z \in \mathrm{Rad}(J^{(x)}) = \mathrm{Nil}(J^{(x)})$ is nilpotent in each $J^{(x)}$, i.e. $z \in \mathrm{PNil}(J)$ is properly nilpotent.

For quadratic Jordan algebras we cannot boost a relation among inverses into an algebraic relation by multiplying by $L$'s (and $V = 2L$ is bad in characteristic 2), so we must do our boosting by $U$'s.

Exercise 3.3. (1) If $\{\lambda_1,\dots,\lambda_n\}$ are distinct scalars and $\{\beta_1,\dots,\beta_n\}$ are not all zero in the field $\Phi$, show that $q(t) = \sum_{i=1}^n \beta_i \prod_{j\neq i}(t - \lambda_j)^2$ is a nonzero polynomial in $\Phi[t]$ with $q(\lambda_i) = \beta_i \prod_{j\neq i}(\lambda_i - \lambda_j)^2$ for each $i$. (2) Use this to show that if there are too many inverses $(\lambda_i\tilde{1} - z)^{-1}$ then $z$ is algebraic over arbitrary $\Phi$ (not containing $\frac12$).
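The clearing-of-denominators step is easy to check in a concrete commuting model; the matrix $z$, the scalars $\lambda_i$, and the coefficients $\beta_i$ below are illustrative choices, not data from the text.

```python
from fractions import Fraction

def mat(rows):
    return [[Fraction(v) for v in row] for row in rows]

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def add(a, b):
    return [[a[i][j] + b[i][j] for j in range(2)] for i in range(2)]

def smul(c, a):
    return [[Fraction(c) * a[i][j] for j in range(2)] for i in range(2)]

def inv2(a):  # inverse of a 2x2 matrix over Q
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    return [[a[1][1] / det, -a[0][1] / det], [-a[1][0] / det, a[0][0] / det]]

I = mat([[1, 0], [0, 1]])
z = mat([[0, 1], [0, 0]])          # a non-scalar element
lams = [1, 2, 3]                   # distinct resolvent scalars
x = [inv2(add(smul(l, I), smul(-1, z))) for l in lams]  # x_i = (lam_i - z)^{-1}

# Clearing denominators: multiplying sum_i beta_i x_i by prod_k (lam_k - z)
# yields p(z) for p(t) = sum_i beta_i prod_{k != i} (lam_k - t),
# since all the matrices lam*I - z commute.
betas = [2, -5, 3]
s = mat([[0, 0], [0, 0]])
for b, xi in zip(betas, x):
    s = add(s, smul(b, xi))
prod_all = I
for l in lams:
    prod_all = mul(prod_all, add(smul(l, I), smul(-1, z)))
lhs = mul(prod_all, s)

p_of_z = mat([[0, 0], [0, 0]])
for i, b in enumerate(betas):
    term = smul(b, I)
    for k, l in enumerate(lams):
        if k != i:
            term = mul(term, add(smul(l, I), smul(-1, z)))
    p_of_z = add(p_of_z, term)
```

A vanishing linear combination of the inverses would thus force the polynomial $p$ to vanish at $z$, exactly as in the proof.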
3. Evaporating Division Algebras

Although Zel'manov gave a surprising classification of all division algebras (which, you will remember, was an open question in the classical theory), in his final classification of prime algebras he was able to finesse the problem of division algebras entirely: if you pass to a suitable scalar extension they disappear!!

Theorem 3.4 (Division Evaporation). (1) If $J$ is a Jordan division algebra over $\Phi$ containing a big algebraically closed field $\Omega$, $|\Omega| > \dim_\Omega(J)$, then $J = \Omega 1$. (2) Any division algebra of characteristic $\neq 2$ can be split by a scalar extension: if $J > \Phi 1$ is a division algebra over its center $\Phi$, and $\Omega$ a big algebraically closed extension field of $\Phi$, $|\Omega| > \dim_\Phi(J)$, then the scalar extension $J_\Omega = \Omega \otimes_\Phi J$ is simple over $\Omega$ but not a division algebra.

Proof. (1) If $x \notin \Omega 1$ then $x - \lambda 1 \neq 0$ for all $\lambda$ in $\Omega$, hence [since $J$ is a division algebra] all $U_{x - \lambda 1}$ are invertible and all $\lambda$ lie in the resolvent
of $x$; by bigness of $\Omega$ with respect to $\Phi$ and $J$, the Big Resolvent Trick 3.3 guarantees that $x$ is algebraic over $\Omega$, so $p(x) = 0$ for some monic polynomial $p(t)$, which must factor as $p(t) = \prod_i (t - \lambda_i)$ for some $\lambda_i \in \Omega$ by the algebraic closure of $\Omega$; but then $0 = U_{p(x)}1 = \prod_i U_{x - \lambda_i 1}\,1$ contradicts the fact that all $U_{x - \lambda_i 1}$ are invertible (cf. the Invertible Products III.6.7(3)). Thus $J = \Omega 1$.

(2) If $J$ is a proper division algebra over $\Phi$ and $\Omega$ an algebraically closed (hence infinite!) extension field with $|\Omega| > \dim_\Phi(J)$, the scalar extension $J_\Omega = \Omega \otimes_\Phi J$ remains simple over $\Omega$ by the Strict Simplicity Theorem III.1.13. But the dimension never grows under scalar extension, $\dim_\Omega(J_\Omega) = \dim_\Phi(J)$ [any $\Phi$-basis for $J$ remains an $\Omega$-basis for $J_\Omega$], so $\Omega$ will still be big for $J_\Omega$. Furthermore, $J > \Phi 1$ implies $J_\Omega > \Omega 1$, so by the first part $J_\Omega$ cannot still be a division algebra. At first glance it seems a paradox that (1) concludes $J = \Omega 1$ instead of merely $J_\Omega = \Omega 1$, but a little reflection or exercise will allay your suspicions; we must already have $\Phi = \Omega$ and $J = \Omega 1$: $\Omega 1$ is as far as $J$ can go and remain a division algebra containing the big algebraically closed field $\Omega$!
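The associative, commutative prototype of this evaporation is worth keeping in mind: $\mathbb{C}$ is a division algebra over $\mathbb{R}$, but $\mathbb{C} \otimes_{\mathbb{R}} \mathbb{C} \cong \mathbb{C} \oplus \mathbb{C}$ acquires zero divisors. A small sketch (the basis encoding is an implementation choice, not from the text):

```python
# C tensor_R C in the basis {1@1, i@1, 1@i, i@i} ("@" = tensor);
# index a = a1 + 2*a2 encodes the basis vector i^a1 @ i^a2.
def tensor_mul(u, v):
    out = [0] * 4
    for a in range(4):
        for b in range(4):
            a1, a2 = a % 2, a // 2
            b1, b2 = b % 2, b // 2
            # i^a1 * i^b1 = (-1)^((a1+b1)//2) * i^((a1+b1)%2), same in slot 2
            sign = (-1) ** ((a1 + b1) // 2 + (a2 + b2) // 2)
            out[(a1 + b1) % 2 + 2 * ((a2 + b2) % 2)] += sign * u[a] * v[b]
    return out

one = [1, 0, 0, 0]
u = [1, 0, 0, 1]    # 1@1 + i@i
v = [1, 0, 0, -1]   # 1@1 - i@i
prod = tensor_mul(u, v)   # a product of two nonzero elements
```

Here `prod` is zero, so the extended algebra is no longer a division algebra.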
4. Spectral Bounds and Capacity

Zel'manov showed that getting a bound on the spectra of elements produces a capacity, and non-vanishing of an s-identity $f$ put a bound at least on the $f$-spectra. In retrospect, these two steps doomed all i-exceptional algebras to a finite-dimensional life.

Theorem 3.5 (Bounded Spectrum). (1) If $x = \sum_{i=1}^n \lambda_i e_i$ for a family of nonzero orthogonal idempotents $e_i$, then $\mathrm{Spec}_{\Omega,J}(x) \supseteq \{\lambda_1,\dots,\lambda_n\}$. (2) If there is a universal bound $|\mathrm{Spec}_{\Omega,J}(x)| \leq N$ on spectra, where $\Omega$ contains at least $N+1$ distinct elements, then there is a universal bound $N$ on the size of any family of orthogonal idempotents; in particular $J$ is very I-finite. (3) If $J$ is a semiprimitive Jordan algebra over a big field $\Phi$ with a uniform bound on all spectra, $|\mathrm{Spec}_J(z)| \leq N$ for some finite $N$ and all $z \in J$, then $J$ is algebraic with capacity.

Proof. (1) Making use of the Peirce decompositions of III.13, we can write $\hat{1} = e_0 + \sum_{i=1}^n e_i$, so $\lambda\tilde{1} - x = \lambda e_0 + \sum_{i=1}^n (\lambda - \lambda_i)e_i$ is not invertible if $\lambda = \lambda_j$, since it kills the nonzero element $e_j$ using the Peirce multiplication formulas in the commutative associative subalgebra $\Phi[e_1,\dots,e_n] = \Phi e_1 \oplus \dots \oplus \Phi e_n$: $U_{\lambda_j\tilde{1} - x}(e_j) = 0$.

(2) If there are $N+1$ distinct scalars $\lambda_i$ there can't be $N+1$ orthogonal idempotents $e_i$, else by (1) $x = \sum_{i=1}^{N+1} \lambda_i e_i$ would have at least $N+1$ elements in its spectrum, contrary to the bound $N$; so the longest possible orthogonal family has length at most $N$.

(3) Assume now that $\Phi$ is big. In addition to I-finiteness (2), by Amitsur's Big Resolvent Trick 3.3 the finite spectral bound $N$ implies algebraicness, hence the Algebraic I Proposition 2.4 implies $J$ is I-genic. If we now throw semiprimitivity into the mix, the I-Finite Capacity Theorem 2.7 yields capacity.

Zel'manov gave an ingenious combinatorial argument to show that a non-vanishing polynomial $f$ puts a bound on that part of the spectrum where $f$ vanishes strictly, the $f$-spectrum. This turns out to be the crucial finiteness condition: the finite degree of the polynomial puts a finite bound on this spectrum.

Theorem 3.6 ($f$-Spectral Bound). (1) If a polynomial $f$ of degree $N$ does not vanish strictly on a Jordan algebra $J$, then $J$ can contain at most $2N$ inner ideals $B_k$ where $f$ does vanish strictly and which are relatively prime in the sense that

$$J = \sum_{i,j} C_{ij} \quad\text{for}\quad C_{ij} = \bigcap_{k \neq i,j} B_k.$$
(2) In particular, in a Jordan algebra over a eld a non-vanishing f provides a uniform bound 2N on the size of f -spectra, jf-S pec;J(z)j 2N: proof. We can assume f is a homogeneous function of N variables of total degree N , so its linearizations f 0 (x1 ; : : : ; xN ) still have total degree N: By relative primeness we have X
f 0 (J ; : : : ; J ) = f 0 (
Cij
;:::;
X
Cij
)=
X
f 00 (Ci1 j1 ; : : : ; CiN jN )
summed over further linearizations f 00 of f 0 : For any collection fCi1 j1 ; : : :, CiN jN g, each Cij avoids at most 2 indices i; j , hence lies in Bk for all others, so N of the Cij 's avoid at most 2N indices, so when there are more than 2N of the Bk 's then at least one index k is unavoided and Ci1 j1 ; : : : ; CiN jN all lie in this Bk (which depends on the collection of C's), and f 00 (Ci1 j1 ; : : : ; CiN jN ) f 00 (Bk ; : : : ; Bk ) = 0 by the hypothesis that f vanishes strictly on each individual Bk : Since this happens for each collection fCi1 j1 ; : : : ; CiN jN g, we see f 0 (J; : : : ; J) = 0 for each linearization f 0 , contrary to the hypothesis that f does NOT vanish strictly on J:
(2) In particular, there are at most $2N$ distinct $\lambda$'s for which the inner ideals $B_\lambda := U_{\lambda\tilde{1}-z}(J)$ satisfy $f$ strictly, since the family $\{B_{\lambda_k}\}_{k=1}^n$ is automatically relatively prime: the scalar interpolating polynomials $p_i(t) = \prod_{k\neq i}(\lambda_k - t)\big/\prod_{k\neq i}(\lambda_k - \lambda_i)$ of degree $n-1$ have $p_i(\lambda_j) = \delta_{ij}$, so $p(t) = \sum_i p_i(t)$ has $p(\lambda_k) = 1$ for all $k$; $p(t) - 1$, of degree $< n$, has $n$ distinct roots $\lambda_k$, and so vanishes identically: $p(t) = 1$. Substituting $z$ for $t$ yields $J = U_{\tilde{1}}(J) = U_{\sum_i p_i(z)}(J) = \sum_{i,j} J_{ij}$, where $J_{ii} := U_{p_i(z)}(J) \subseteq \bigcap_{k\neq i} U_{\lambda_k\tilde{1}-z}(J) = \bigcap_{k\neq i} B_{\lambda_k} = C_{ii}$ and $J_{ij} := U_{p_i(z),p_j(z)}(J) \subseteq \bigcap_{k\neq i,j} U_{\lambda_k\tilde{1}-z}(J) = \bigcap_{k\neq i,j} B_{\lambda_k} = C_{ij}$; therefore $J = \sum_{i,j} J_{ij} \subseteq \sum_{i,j} C_{ij}$.
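The interpolation identity $\sum_i p_i(t) = 1$ driving this proof can be checked over $\mathbb{Q}$; the scalars below are illustrative:

```python
from fractions import Fraction

def interp_basis(lams):
    # p_i(t) = prod_{k != i} (lam_k - t) / prod_{k != i} (lam_k - lam_i),
    # returned as coefficient lists [c0, c1, ...] over Q
    polys = []
    for i, li in enumerate(lams):
        p = [Fraction(1)]
        denom = Fraction(1)
        for k, lk in enumerate(lams):
            if k == i:
                continue
            # multiply p by (lam_k - t)
            q = [Fraction(0)] * (len(p) + 1)
            for d, c in enumerate(p):
                q[d] += Fraction(lk) * c
                q[d + 1] -= c
            p = q
            denom *= Fraction(lk) - Fraction(li)
        polys.append([c / denom for c in p])
    return polys

def evalp(p, t):
    return sum(c * Fraction(t) ** d for d, c in enumerate(p))

lams = [1, 2, 4, 7]
ps = interp_basis(lams)
total = [sum(p[d] for p in ps) for d in range(len(lams))]  # should be 1
```

Since each $p_i$ hits $\delta_{ij}$ at the nodes, the sum is the constant polynomial 1, just as the degree count forces.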
We will have to relate the f -spectra to the ordinary spectra. This will take place only in the heart of the algebra, and will involve yet another kind of spectrum coming from the absorbers of inner ideals.
Begetting Capacity
5. Problems for Chapter 3

Problem 1. (1) If $J$ is an algebra over a big field $\Phi$ show $|J| = |\Phi|$. (2) Show in general that if $J$ has dimension $d$ over a field of cardinality $f$, then $|J| \leq \sum_{n=0}^{\infty} (df)^n$; if $d$ or $f$ is infinite show $|J| \leq \max(d,f)$, while if $d$ and $f$ are both finite then $|J| = f^d$. (3) Conclude that always $|J| \leq \max(d, f, \aleph_0)$.

Problem 2. Let $A$ be a unital algebra over an algebraically closed field $\Omega$. (1) Show that if $a \in A$ has $\lambda 1 - a$ invertible for ALL $\lambda \in \Omega$, then $a$ is NOT algebraic over $\Omega$, and the $(\lambda 1 - a)^{-1}$ are linearly INDEPENDENT over $\Omega$, so $\dim_\Omega(A) \geq |\Omega|$. (2) Conclude that there cannot be a proper field extension $\Lambda \supset \Omega$ such that $\Omega$ is big with respect to the algebra $\Lambda$, i.e. $|\Omega| > \dim_\Omega(\Lambda)$.

Problem 3*. Suppose $V$ is a vector space over a field $\Phi$, and let $\Lambda \subseteq \Omega \subseteq \Phi$ be subfields. (1) Show that we have $|\Lambda| \leq |\Omega| \leq |\Phi|$ and (denoting $\dim_\Lambda(V)$ by $[V : \Lambda]$ etc.) $[V : \Lambda] = [V : \Omega][\Omega : \Lambda] \geq [V : \Omega] = [V : \Phi][\Phi : \Omega] \geq [V : \Phi]$. (2) Show that if $J$ is an algebra whose centroid $\Phi$ is a field, and some subfield $\Omega$ is big with respect to $J$, then any larger subfield is even bigger: if $\Omega \subseteq \Gamma \subseteq \Phi$ and $|\Omega| > [J : \Omega]$, then $|\Gamma| > [J : \Gamma]$ too for any intermediate subfield $\Gamma$.

Problem 4*. Prove the associative analogue of the $f$-Spectral Bound Theorem: (1) If a polynomial $f$ of degree $N$ does not vanish strictly on an associative algebra $A$, then $A$ can contain at most $N$ relatively prime left ideals $B_k$ ($A = \sum_i C_i$ for $C_i = \bigcap_{k\neq i} B_k$) on which $f$ vanishes strictly. (2) In particular, in an associative algebra over a field a non-vanishing $f$ provides a uniform bound $|f\text{-}\mathrm{Spec}_{\Omega,A}(z)| \leq N$ on the size of $f$-spectra $f\text{-}\mathrm{Spec}_{\Omega,A}(z) := \{\lambda \in \Omega \mid A(\lambda\tilde{1} - z) \text{ satisfies } f \text{ strictly}\}$. (3) Give an elementary proof of this spectral bound in the special case of the polynomial ring $A = \Omega[s]$.
Second Phase:
The Inner Life of Jordan Algebras
In this Phase we examine the nature of inner ideals, especially their absorbers. Primitive algebras are precisely those with a suitable inner ideal, the primitizer. A crucial fact is that the quadratic absorber of a primitizer must vanish, since the Absorber Nilness Theorem says the quadratic absorber of any inner ideal generates an ideal which is nil modulo the original inner ideal, and primitive algebras have no nonzero ideals nil modulo the primitizer. This leads to a nonzero heart for exceptional algebras, and then to their branding as Albert algebras in the case of a big algebraically closed eld.
CHAPTER 4
Absorbers of Inner Ideals

A key new concept is that of outer absorbers of inner ideals, analogous to right absorbers $r(L) = \{z \in L \mid zA \subseteq L\}$ of left ideals $L$ in associative algebras $A$. In the associative case these absorbers are ideals, precisely the cores of the one-sided ideals. In the Jordan case the quadratic absorber is only an inner ideal in $J$, but on the one hand it is large enough to govern the obstacles to speciality, and on the other it is close enough to being an ideal that the ideal in $J$ it generates is nil modulo the original inner ideal.
1. Linear Absorbers

The quadratic absorber is the double linear absorber, and the linear absorber has its origins in a specialization. The Peirce specialization of the inner ideals $J_2, J_0$ played an important role in the classical theory. Zel'manov discovered a beautiful generalization for arbitrary inner ideals.

Definition 4.1 (Z-Specialization). The Z-specialization of an inner ideal $B$ of $J$ is the map from $B$ into the associative algebra $\mathrm{End}(M)$ of endomorphisms of the quotient $\Phi$-module $M := J/B$ defined by $Z(b) := \bar{V}_b$ ($\bar{V}_b(\bar{x}) := \overline{V_b(x)} = \overline{\{b,x\}}$). Note that because $B$ is a subalgebra we have $V_b(B) \subseteq B$; thus each $V_b$ stabilizes the submodule $B$, and so induces a well-defined linear transformation $\bar{V}_b$ on the quotient module $M$. (Recall our convention that $J, B$ denote the spaces with their algebraic structure, while the quotient $M = J/B$ is merely a linear space.)

Lemma 4.2 (Z-Specialization). (1) The Z-specialization is a true specialization, a Jordan homomorphism of $B$ into the special Jordan algebra $\mathrm{End}_\Phi(M)^+$:
$$\bar{V}_{b^2} = (\bar{V}_b)^2, \qquad \bar{V}_{U_b c} = \bar{V}_b \bar{V}_c \bar{V}_b.$$

(2) The kernel of this homomorphism is the linear absorber

$$\mathrm{Ker}(Z) = \ell(B) := \{z \in B \mid V_J z \subseteq B\},$$
which is an ideal of $B$ and an inner ideal of $J$:

$$\ell(B) \trianglelefteq B, \qquad \ell(B) \text{ inner in } J.$$
Proof. (1) It suffices to prove $Z$ preserves squares, and this follows from the general formula $V_{b^2} = V_b^2 - 2U_b$ together with the fact that $U_b$ disappears in the quotient, since it maps $J$ down into $B$ by definition of inner ideal. For linear Jordan algebras this implies $Z$ preserves $U$-products, but we can also see this directly for quadratic Jordan algebras from the Specialization Formula III.5.7(FIII)$'$, $V_{U_b c} = V_bV_cV_b - U_{b,c}V_b - U_bV_c$, where for $b, c \in B$ both $U_{b,c}, U_b$ map $J$ into $B$.

(2) Clearly the kernel consists of the $z \in B$ which act trivially on $M$ because their $V$-operator shoves $J$ down into $B$: $V_z J = V_J z \subseteq B$ as in (2). As the kernel of a homomorphism, $\ell(B)$ is automatically an ideal of $B$. But it is also an inner ideal in all of $J$: if $b \in \ell(B)$ and $x \in J$ then $U_b x \in \ell(B)$, since using Triple Switch III.5.7(FIV) the Specialization Formula (FIII)$'$ can be written $V_{U_b x} = V_{b,x}V_b - U_bV_x$, which shows $V_{U_b x}$ shoves $J$ down into $B$ [$U_b$ does by innerness of $B$, and $V_{b,x}V_b$ does since first $V_b$ shoves it down (don't forget that $b \in \ell(B)$!) and then $V_{b,x}$ won't let it escape ($V_{b,x}(B) = U_{b,B}x \subseteq B$ by innerness)].
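Both formulas in this proof can be sanity-checked in the special model $A^+$ of an associative matrix algebra, where $V_bx = bx + xb$, $U_bx = bxb$, $U_{b,c}x = bxc + cxb$; the random matrices below are illustrative data only.

```python
import random

random.seed(0)
n = 3

def rand_mat():
    return [[random.randint(-4, 4) for _ in range(n)] for _ in range(n)]

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def lin(*terms):  # integer linear combination of matrices: (coef, m) pairs
    return [[sum(c * m[i][j] for c, m in terms) for j in range(n)]
            for i in range(n)]

def V(b, x):          # V_b x = bx + xb
    return lin((1, mul(b, x)), (1, mul(x, b)))

def U(b, x):          # U_b x = bxb
    return mul(mul(b, x), b)

def U2(b, c, x):      # U_{b,c} x = bxc + cxb
    return lin((1, mul(mul(b, x), c)), (1, mul(mul(c, x), b)))

b, c, y = rand_mat(), rand_mat(), rand_mat()

# V_{b^2} = V_b^2 - 2 U_b
sq_lhs = V(mul(b, b), y)
sq_rhs = lin((1, V(b, V(b, y))), (-2, U(b, y)))

# Specialization Formula: V_{U_b c} = V_b V_c V_b - U_{b,c} V_b - U_b V_c
sp_lhs = V(U(b, c), y)
sp_rhs = lin((1, V(b, V(c, V(b, y)))), (-1, U2(b, c, V(b, y))),
             (-1, U(b, V(c, y))))
```

Exact integer arithmetic makes the operator identities checkable term by term.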
Note that $M$ is just a module, with no further algebraic structure. When $B = J_2(e)$ is the Peirce inner ideal determined by an idempotent, the module $M$ can be identified with the Peirce complement $J_1(e) \oplus J_0(e)$, and the Z-specialization is essentially the Peirce specialization on $J_1(e)$ (with a trivial action on $J_0(e)$ glued on). The linear absorber is the set of $b \in J_2$ which kill $J_1$, which in this case is an ideal in all of $J$ and is precisely the core (cf. Exercise 4.3B). The linear absorber is overshadowed by its big brother, the quadratic absorber, which is a user-friendly substitute for the core of an inner ideal: it is the square of the linear absorber, and is close to being an ideal, since as we will see later the ideal it generates is not too far removed from $B$.

Definition 4.3 (Absorbers). (1) The linear absorber $\ell(B)$ of an inner ideal $B$ in a Jordan algebra $J$ absorbs linear multiplication by $J$ into $B$:

$$\ell(B) = \{z \in B \mid L_J z \subseteq B\} = \mathrm{Ker}(Z).$$

The double absorber absorbs two linear multiplications by $J$ into $B$:

$$\ell^2(B) := \ell(\ell(B)) = \{z \in B \mid L_J L_{\tilde{J}}\, z \subseteq B\}.$$
(2) The quadratic absorber$^1$ $qa(B)$ absorbs quadratic multiplications by $J$ into $B$:

$$qa(B) = \{z \in B \mid V_{J,\tilde{J}}\,z + U_J z \subseteq B\}.$$

(3) The higher linear and quadratic absorbers are defined inductively by $\ell^n(B) := \ell(\ell^{n-1}(B))$, $qa^n(B) := qa(qa^{n-1}(B))$; these absorb strings of $n$ linear or quadratic products respectively. The absorbers of $B$ are the same whether we take them in $J$ or its unital hull $\tilde{J}$, so there is no loss of generality in working always with unital $J$.

Exercise 4.3A. Let $L$ be a left ideal and $R$ a right ideal in an associative algebra $A$. Show that for the inner ideal $B = L \cap R$ of $A^+$, the linear and quadratic absorbers coincide with the core of $B$ (the largest associative ideal of $A$ contained in $B$): $\ell(B) = qa(B) = \{z \in B \mid zA \subseteq L,\ Az \subseteq R\}$.
Exercise 4.3B. Show that for a Peirce inner ideal $B = U_e J = J_2(e)$ determined by an idempotent in a Jordan algebra, the linear and quadratic absorbers are $\ell(B) = qa(B) = \{z \in B \mid z \circ J_1(e) = 0\}$, which is just the kernel of the Peirce specialization $x \to V_x$ of $B$ in $\mathrm{End}(J_1(e))^+$, so $B/\ell(B) \cong V_{J_2(e)}|_{J_1(e)}$ is an algebra of linear transformations.
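A tiny associative illustration of Exercise 4.3A (the specific matrix units are an illustrative choice): in $M_2(\mathbb{R})$ take the left ideal $L = Ae_{11}$ (first column), the right ideal $R = e_{11}A$ (first row), and $B = L \cap R = \mathbb{R}e_{11}$. Then $B$ is inner in $A^+$, yet its core (and hence its absorber) is zero, since $e_{11}A \not\subseteq L$:

```python
import numpy as np

e11 = np.array([[1.0, 0.0], [0.0, 0.0]])
e12 = np.array([[0.0, 1.0], [0.0, 0.0]])

def in_L(m):  # L = A e11: second column vanishes
    return np.allclose(m[:, 1], 0)

def in_R(m):  # R = e11 A: second row vanishes
    return np.allclose(m[1, :], 0)

x = np.array([[1.0, 2.0], [3.0, 4.0]])

inner = e11 @ x @ e11     # U_b x = bxb stays inside B = L n R
escape = e11 @ e12        # z A <= L fails for z = e11: e11*e12 = e12, not in L
```

So an inner ideal can be far from an ideal, which is exactly why the absorber is the right substitute for the core.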
The absorbers were born of specialization, and in turn give birth to speciality: the linear absorber contains all obstacles to speciality and i-speciality.

Definition 4.4 (Specializer). The specializer or special radical $sp(J)$ of any Jordan algebra $J$ is the smallest ideal of $J$ whose quotient is special; $J$ is special iff $sp(J) = 0$. The i-specializer or i-special radical $isp(J)$ is the smallest ideal of $J$ whose quotient is identity-special (i-special); $J$ is i-special iff $isp(J) = 0$. $isp(J)$ consists precisely of all values $f'(J,\dots,J)$ on $J$ of all s-identities $f'(x_1,\dots,x_n)$ (all those Jordan polynomials that should have vanished if $J$ were special, constituting the ideal $isp(X)$ of all s-identities in the free Jordan algebra $\mathcal{FJ}(X)$ on a countable number of indeterminates $X$). Since it is easier to be i-special than special, we always have $isp(J) \subseteq sp(J)$. An ideal $I$ is co-special or co-i-special if its quotient $J/I$ is special or i-special.

Exercise 4.4*. (1) Show that a smallest co-special ideal always exists: an arbitrary intersection of co-special ideals is co-special, so that $sp(J) = \bigcap\{I \mid I \text{ is co-special}\}$. (2) Show that a smallest co-i-special ideal always exists: an arbitrary intersection of co-i-special ideals is co-i-special, so that $isp(J) = \bigcap\{I \mid I \text{ is co-i-special}\}$.
$^1$Like "quasi-inverse" qi, "quadratic absorber" is written aperiodically as qa and pronounced "kyoo-ay".
2. Quadratic Absorbers

We can't obtain the properties of the quadratic absorber quite so easily and naturally.

Theorem 4.5 (Quadratic Absorber). (1) The linear and quadratic absorbers of an inner ideal $B$ in a Jordan algebra $J$ are ideals in $B$, and they together with all higher absorbers are again inner ideals in $J$. The linear absorber also absorbs obstacles to speciality: we have

Specializer Absorption: $\quad B/\ell(B)$ is special, $\quad isp(B) \subseteq sp(B) \subseteq \ell(B)$.

(2) The double absorber coincides with the quadratic absorber, $qa(B) = \ell^2(B)$.

(3) The linear absorber already absorbs $V_{J,\tilde{B}}$:

$$V_{J,\tilde{B}}(\ell(B)) = U_{J,\ell(B)}(\tilde{B}) \subseteq B,$$

and internal multiplications boost the absorbent power:

$$isp(B)^3 \subseteq isp(B)^2 \subseteq U_{\ell(B)}(\tilde{B}) \subseteq qa(B), \qquad U_{qa(B)}(\tilde{B}) \subseteq qa^2(B).$$

(4) If $J$ is nondegenerate and the quadratic absorber of an inner ideal vanishes, then its i-specializer and linear absorber vanish too, and its Z-specialization is injective:

$$qa(B) = 0 \implies isp(B) = 0 \text{ and } Z \text{ is injective} \qquad (J \text{ nondegenerate}).$$

Proof. (1) The Jordan algebra $B/\ell(B)$ is manifestly special, since by Z-Specialization 4.2(2) $Z$ induces an imbedding of $B/\ell(B)$ in the special Jordan algebra $\mathrm{End}_\Phi(J/B)^+$; therefore $\ell(B)$ swallows up the obstacle $sp(B)$ and hence $isp(B)$ as well.

(2) Since $\frac12 \in \Phi$ the double and quadratic absorbers coincide: all $L_x, L_xL_{\hat{y}}$ map $z$ into $B$ $\iff$ all $V_x = 2L_x$, $V_xV_y = 4L_xL_y$ do $\iff$ all $V_x$, $2U_x = V_x^2 - V_{x^2}$, $V_{x,y} = V_xV_y - U_{x,y}$ do $\iff$ all $U_x, V_{x,\hat{y}}$ do.

We saw that the linear absorber is an ideal in $B$ and an inner ideal in $J$ in Z-Specialization 4.2(2). Thus all higher absorbers $\ell^n(B)$ are also inner [but only ideals in $\ell^{n-1}(B)$, not all of $B$]. This shows that the quadratic absorber too, as somebody's linear absorber, is an inner ideal in $J$. Thus again the higher absorbers are inner ideals. Though it doesn't arise as a kernel, the quadratic absorber is nevertheless an ideal in $B$: if $z \in qa(B)$ then $V_b z \in qa(B)$, since it absorbs $V$'s by $V_{x,\hat{y}}(V_b z) = (V_bV_{x,\hat{y}} + V_{V_{x,\hat{y}}(b)} - V_{b,\{x,\hat{y}\}})z$ [from the 5-Linear Identity III.5.7(FV)$'$] $\subseteq V_b(B) + B \subseteq B$ [since $z$ absorbs $V_{J,\tilde{J}}$], and it absorbs $U$'s by $U_x(V_b z) = (U_{\{x,b\},x} - V_bU_x)z$ [from Fundamental Lie III.5.7(FV)] $\subseteq B + V_b B \subseteq B$ [since $z$ absorbs $U_J$]. [In contrast to linear
absorbers, the higher quadratic absorbers are ideals in $B$; see Problem 4.1.]

(3) For the first part, for $z \in \ell(B)$ we have $\{J, 1, z\} = \{J, z\} \subseteq B$ by absorption, and $\{J, B, z\} = \{\{J,B\}, z\} - \{B, J, z\}$ [by Triple Switch III.5.7(FIVe)] $\subseteq \{J, z\} + \{B, J, B\} \subseteq B$ by absorption and innerness, so together $\{J, \tilde{B}, z\} \subseteq B$.

For the second part, to see that products boost the absorptive power, let $z \in \ell(B)$, $\hat{b} \in \tilde{B}$. Then $U_z\hat{b}$ absorbs $V_{x,\hat{y}}$'s since $V_{x,\hat{y}}(U_z\hat{b}) = (U_{V_{x,\hat{y}}(z),z} - U_zV_{\hat{y},x})\hat{b}$ [by Fundamental Lie again] $\subseteq U_{J,z}(\tilde{B}) + U_B(\tilde{J}) \subseteq B$ by the first part and by innerness of $B$. $U_z\hat{b}$ also absorbs $U_x$'s since $U_x(U_z\hat{b}) = (U_{\{x,z\}} + U_{x^2,z^2} - U_zU_x - U_{x,z}^2)\hat{b}$ [from Macdonald, or linearized Fundamental $U_{x^2} = U_x^2$] $\subseteq U_B(\tilde{J}) + U_{J,\ell(B)}(\tilde{B}) \subseteq B$ by innerness and the absorption of the first part. Thus $U_z\hat{b}$ absorbs its way into $qa(B)$ as in the third inclusion. The first two inclusions then follow from $isp(B) \subseteq \ell(B)$, since then $isp(B)^3 \subseteq isp(B)^2 \subseteq U_{\ell(B)}(\tilde{B})$.

For the third and final inclusion, that $w := U_z\hat{b} \in qa^2(B) = qa(Q)$ is even more absorptive when $z \in Q := qa(B)$, note that $w$ absorbs $V_{x,\hat{y}}$ as always from Fundamental Lie by $V_{x,\hat{y}}(U_z\hat{b}) = (U_{V_{x,\hat{y}}(z),z} - U_zV_{\hat{y},x})\hat{b} \subseteq U_{B,Q}(\tilde{B}) + U_Q(\tilde{J}) \subseteq Q$ by idealness and innerness of $Q$ established in (2). But to show $w$ absorbs $U_x$ we cannot use our usual argument, because we do not know $U_{\{x,z\}}\hat{b} \in Q$. So we go back to the definition: $U_x w \in Q = qa(B)$ because it absorbs all $V_{r,\hat{s}}, U_r$ for $r \in J$, $\hat{s} \in \tilde{J}$. Indeed, it absorbs $V$'s because $V_{r,\hat{s}}(U_x w) = V_{r,\hat{s}}(U_xU_z\hat{b}) = (U_{x',x}U_z - U_xU_{z',z} + U_xU_zV_{r,\hat{s}})\hat{b}$ (for $x' := V_{r,\hat{s}}x$ and $z' := V_{\hat{s},r}z \in B$) [using Fundamental Lie twice] $\subseteq U_{J,J}U_z(\tilde{B}) + U_JU_{B,z}(\tilde{B}) + U_JU_z(\tilde{J}) \subseteq U_{J,J}(Q) + U_J(Q) \subseteq B$ [by idealness and innerness of $Q$, and since $Q$ absorbs $U_J$ by definition]. To see $U_xw$ absorbs the $U$'s, note $U_r(U_xw) = U_r(U_xU_z\hat{b}) = (U_{\{r,x,z\}} + U_{U_rU_xz,\,z} - U_zU_xU_r - (U_{V_{r,x}(z),z} - U_zV_{x,r})V_{x,r})\hat{b}$ [by Alternate Fundamental III.5.7(FI)$'$ and Fundamental Lie] $\subseteq U_B(\tilde{J}) + U_{J,qa(B)}(\tilde{B})$ [since $z$ absorbs $V_{r,x}$] $\subseteq B$ [by innerness and the first part of (3)].
Thus we can double our absorption, double our fun, by $U$-ing.

(4) If $J$ is nondegenerate and $qa(B) = 0$, we must show all $z \in \ell(B)$ vanish. By nondegeneracy it suffices if all $w = U_z x$ vanish for $x \in J$, or in turn if all $U_w = U_zU_xU_z$ vanish. But again $U_zU_xU_z = U_z(U_{\{x,z\}} + U_{x^2,z^2} - U_zU_x - U_{x,z}^2)$ [by linearized (FI) for $y = 1$] $= U_zU_{\{x,z\}} + U_zU_{x^2,z^2} - U_{z^2}U_x - (V_{z^2}V_{z,x} - V_{z^3,x})U_{x,z}$ [using Macdonald to expand $U_zU_{z,x}$]; this vanishes because $z^2, z^3 \in U_z\tilde{B} \subseteq qa(B) = 0$ by (3), and $U_zU_{\{x,z\}}(J) \subseteq U_zU_B(J) \subseteq U_{\ell(B)}(B)$ [by innerness] $\subseteq qa(B)$ [by (3) again] $= 0$ too. Thus nondegeneracy allows us to go from $isp(B)^2 = 0$ in (3) to $isp(B) = 0$ in (4).

We emphasize cubes rather than squares of ideals, since the square is not an ideal but the cube always is. In general, the product $B \circ C$ of two ideals is not again an ideal, since there is no identity of degree 3 in Jordan algebras to re-express $x \circ (b \circ c)$, but the Jordan identity of degree 4 is enough to show $U_B(C)$ is again an ideal, since by Fundamental Lie $x \circ U_b c = U_{x \circ b,\, b}\,c - U_b(x \circ c) \in U_B(C)$.
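The operator identities doing the work in these proofs (Fundamental Lie and the linearized Fundamental Formula) can be spot-checked in the associative model, where $\{x,y,z\} = xyz + zyx$, $U_xz = xzx$, $U_{x,y}z = xzy + yzx$; the random matrices below are illustrative only:

```python
import random

random.seed(1)
n = 3

def rand_mat():
    return [[random.randint(-3, 3) for _ in range(n)] for _ in range(n)]

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def lin(*terms):  # (coef, matrix) linear combination
    return [[sum(c * m[i][j] for c, m in terms) for j in range(n)]
            for i in range(n)]

def T(x, y, z):       # triple product {x,y,z} = xyz + zyx, i.e. V_{x,y} z
    return lin((1, mul(mul(x, y), z)), (1, mul(mul(z, y), x)))

def U(x, z):          # U_x z = xzx
    return mul(mul(x, z), x)

def U2(x, y, z):      # U_{x,y} z = xzy + yzx
    return lin((1, mul(mul(x, z), y)), (1, mul(mul(y, z), x)))

def V(x, z):          # V_x z = {x, z} = xz + zx
    return lin((1, mul(x, z)), (1, mul(z, x)))

x, y, z, b, w = (rand_mat() for _ in range(5))

# Fundamental Lie: V_{x,y} U_z = U_{{x,y,z}, z} - U_z V_{y,x}
fl_lhs = T(x, y, U(z, w))
fl_rhs = lin((1, U2(T(x, y, z), z, w)), (-1, U(z, T(y, x, w))))

# U_x V_b + V_b U_x = U_{{x,b}, x}
uv_lhs = lin((1, U(x, V(b, w))), (1, V(b, U(x, w))))
uv_rhs = U2(V(x, b), x, w)

# linearized Fundamental: U_x U_z + U_z U_x + U_{x,z}^2 = U_{{x,z}} + U_{x^2,z^2}
lf_lhs = lin((1, U(x, U(z, w))), (1, U(z, U(x, w))),
             (1, U2(x, z, U2(x, z, w))))
lf_rhs = lin((1, U(V(x, z), w)), (1, U2(mul(x, x), mul(z, z), w)))
```

Such checks in the special model are, of course, exactly what Macdonald's theorem lets us promote to all Jordan algebras.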
Exercise 4.5A*. Ignore the linear absorber and show directly that the quadratic absorber is an inner ideal in $J$: if $z \in qa(B)$, $r \in J$, $\hat{x}, \hat{s} \in \tilde{J}$, then $U_z\hat{x}$ absorbs double $V_{r,\hat{s}}$'s by Fundamental Lie III.5.7(FV), and absorbs $U_r$ by $U_{\{r,z\}} + U_{U_r z,\,z} = U_rU_z + U_zU_r + (U_{\{r,z\},z} - U_zV_r)V_r$.

Exercise 4.5B*. Show $V_{J,\hat{J}}V_B \subseteq V_{\hat{B}}V_{J,\hat{J}}$ and $U_JV_B \subseteq V_{\hat{B}}U_J$.
The new notion of absorber brings with it a new notion of spectrum, the absorber spectrum, consisting of those $\lambda$ for which $U_{\lambda\tilde{1}-z}(J)$ is not merely different from $J$ (as in the ordinary spectrum), or distinguishable from $J$ by means of some $f$ (as in the $f$-spectrum), but so far from $J$ that its absorber is zero.

Definition 4.6 (Absorber Spectrum). (1) The $\Omega$-absorber spectrum $\mathrm{AbsSpec}_{\Omega,J}(z)$ of an element $z$ is the set of scalars giving rise to an absorberless inner ideal:

$$\mathrm{AbsSpec}_{\Omega,J}(z) = \{\lambda \in \Omega \mid qa(U_{\lambda\tilde{1}-z}(J)) = 0\}.$$

(2) By Quadratic Absorber 4.5(3-4) an absorberless inner ideal strictly satisfies all $f \in isp(X)^3$ (even all $f \in isp(X)$ if $J$ is nondegenerate), so $qa(U_{\lambda\tilde{1}-z}(J)) = 0$ implies $U_{\lambda\tilde{1}-z}(J)$ satisfies all such $f$ strictly, so by Spectra 3.1(4)

$$\mathrm{AbsSpec}_{\Omega,J}(z) \subseteq f\text{-}\mathrm{Spec}_{\Omega,J}(z) \subseteq \mathrm{Spec}_{\Omega,J}(z)$$

for all $f \in isp(X)^3$, even for all $f \in isp(X)$ if $J$ is nondegenerate.

Exercise 4.6. Show directly from the definitions that $\mathrm{AbsSpec}_{\Omega,J}(z) \subseteq \mathrm{Spec}_{\Omega,J}(z)$.
3. Absorber Nilness

We now embark on a long, delicate argument to establish the crucial property of the quadratic absorber $Q = qa(B)$: that the ideal $\mathcal{I}_J(Q)$ it generates is nil modulo $Q$, in the sense that for every element $z$ of the ideal some power $z^n \in Q$. Notice that if the quadratic absorber is equal to the core (as in the associative case, and for Peirce inner ideals) then it already is an ideal, $Q = \mathcal{I}_J(Q)$, and the result is vacuous. The
import of the result is that the absorber $Q \subseteq \mathcal{I}$ is sufficiently close to being an ideal $\mathcal{I} = \mathcal{I}_J(Q)$ that it behaves like an ideal for many purposes.

Theorem 4.7 (Absorber Nilness). The ideal $\mathcal{I}_J(qa(B))$ in $J$ generated by the quadratic absorber $qa(B)$ of an inner ideal $B$ is nil mod $qa(B)$, hence nil mod $B$.

Proof. Throughout the proof we let $Q := qa(B)$ and $\mathcal{I} := \mathcal{I}_J(Q)$, and $\mathcal{T}$ be the set of all $T = U_{x_1}\cdots U_{x_r}$ for $x_i \in \tilde{J}$ of length $r \geq 0$. We will break the proof into a series of steps.

Step 1: $\mathcal{I}$ is spanned by all monomials $w = T(z)$ for $z \in Q$, $T \in \mathcal{T}$.
Indeed, this span is a linear subspace which contains $Q$, and is closed under arbitrary algebra multiples since $2L_x = V_x = U_{x,1} = U_{x+1} - U_x - U_1$ is a sum of $U$-operators. Because they are structural transformations, products of $U$'s are much easier to work with than products of $L$'s.

Step 2: If $x \in J$ is properly nilpotent mod $qa(B)$, then it is also properly nilpotent modulo any higher absorber:

(1) $\quad x^{(m,y)} \in Q \implies x^{(3^n m,\,y)} \in qa^n(Q) \qquad (y \in \tilde{J}).$

This follows by applying to $z = x^{(m,y)}$ the following general result:

(2) $\quad z \in Q \implies z^{(3^n,\,y)} \in qa^n(Q)$ for all $y \in \tilde{J}.$

For this we induct on $n$, $n = 0$ being trivial; for the induction step, $w = z^{(3^n,y)} \in qa^n(Q) =: qa(C) \implies z^{(3^{n+1},y)} = w^{(3,y)} = U_wU_yw \in U_{qa(C)}C$ [since $w \in qa(C)$ absorbs $U$'s into $C$] $\subseteq qa^2(C)$ [by Absorber Boosting 4.5(3) applied to $C$] $= qa^{n+1}(Q)$.

Step 3: Each monomial $w = T(z)$ is properly nilpotent mod $Q$, with index depending only on the length $r$ of $T$:

(3) $\quad w^{(3^r,\,y)} \in Q \qquad (r = \mathrm{length}(T)).$

Recall the length of $T$ is the number of factors $U_{x_i}$, so by definition it is absorbed by a suitably high absorber:

(4) $\quad T\,qa^r(Q) \subseteq Q \qquad (r = \mathrm{length}(T)).$

From Structural Shifting 1.9(2) we have that $w^{(3^r,y)} = T(z)^{(3^r,y)} = T\big(z^{(3^r,\,T^*(y))}\big) \in T(qa^r(Q))$ [by (2) above] $\subseteq Q$ [from (4)], which establishes (3).
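The homotope powers used here are transparent in the associative model, where $x^{(m,y)} = xyxy\cdots x$ ($m$ copies of $x$ interleaved with $y$'s), so that $w^{(3,y)} = U_wU_yw$; the matrices below are random illustrative data:

```python
import random

random.seed(2)
n = 3

def rand_mat():
    return [[random.randint(-3, 3) for _ in range(n)] for _ in range(n)]

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def U(a, b):          # U_a b = aba
    return mul(mul(a, b), a)

def hom_power(x, m, y):
    # x^(m,y): the m-th power of x in the homotope J^(y);
    # associatively this is x y x y ... x (m x's, m-1 y's)
    out = x
    for _ in range(m - 1):
        out = mul(mul(out, y), x)
    return out

w, y = rand_mat(), rand_mat()
cube = hom_power(w, 3, y)     # w^(3,y) = w y w y w
viaU = U(w, U(y, w))          # = U_w U_y w
```

This is the cubing step that Step 2 iterates to climb from $qa^n$ to $qa^{n+1}$.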
Step 4: Modular Interlude. That wasn't painful, was it? Now it's one thing to prove that monomials are nilpotent mod $Q$, but quite another to prove that finite sums of such monomials remain nilpotent: as we raise the sum to higher and higher powers, the number of terms in the expansion proliferates hopelessly. If you want to get dispirited, just sit down and try to show from scratch that the sum of two monomial elements is nilpotent. But Zel'manov saw a way out of this quagmire, finessing the difficulty by showing the finite sums remain quasi-invertible mod $Q$ in all extensions, and then using Amitsur's Polynomial Trick to deduce nilness mod $Q$.

Definition 4.8 (Invertible Modulo an Inner Ideal). If $C$ is an inner ideal in $J$, we say two elements $x, y$ are equivalent mod $C$, written $x \equiv_C y$, if $x - y \in C$. We say an element $u \in \tilde{J}$ is invertible mod $C$ if there is an element $v \in \tilde{J}$ such that $U_u v \equiv_C \tilde{1}$. We say $z$ is quasi-invertible mod $C$ if $\tilde{1} - z$ is invertible mod $C$ in $\tilde{J}$ (where $C$ remains inner), in which case the element $v$ necessarily has the form $\tilde{1} - w$ for some $w \in J$: $U_{\tilde{1}-z}(\tilde{1} - w) \equiv_C \tilde{1}$.

Lemma 4.9 (Invertible Modulo an Absorber). Let $qa(B)$ be the quadratic absorber of an inner ideal $B$ in $J$. Then we have the following general principles:

(1) Multiplication of Equivalences, at the price of one degree of absorption:

$$x \equiv_{qa(B)} y \implies U_x a \equiv_B U_y a, \quad U_a x \equiv_B U_a y \qquad (a \in \tilde{J}).$$

(2) Higher Invertibility: $u$ invertible mod $qa(B)$ $\implies$ $u$ invertible mod $qa^n(B)$ for any $n$.

(3) Cancellation: $u$ invertible mod $qa(B)$, $U_u x \equiv_{qa^2(B)} U_u y$ $\implies$ $x \equiv_B y$.

(4) Invertibility of Factors: as with ordinary inverses, $U_x y$ invertible mod $qa(B)$ $\implies$ $x, y$ invertible mod $qa(B)$.

(5) Inverse Equivalence: invertibility passes to sufficiently equivalent elements,

$$u \text{ invertible mod } qa(B),\ u' \equiv_{qa^2(B)} u \implies u' \text{ invertible mod } qa(B).$$
(6) Nilpotence implies quasi-invertibility modulo an absorber: $z$ nilpotent mod $qa(B)$ $\implies$ $z$ q.i. mod $qa(B)$.

Proof. (1) If $x - y \in qa(B)$ then $U_x a - U_y a = (U_{(x-y)+y} - U_y)a = (U_{x-y} + U_{x-y,\,y})a \in U_{qa(B)}(\tilde{J}) + \{qa(B), \tilde{J}, \tilde{J}\} \subseteq B$, and $U_a x - U_a y \in U_{\tilde{J}}\,qa(B) \subseteq B$.

(2) It suffices to show that if $u$ is invertible mod $qa(C) = qa^n(B)$ for some $n \geq 1$, then it is also invertible mod the next higher absorber $qa^2(C) = qa^{n+1}(B)$. By invertibility mod $qa(C)$ we have $v$ so that $U_u v = \tilde{1} - c$ for $c \in qa(C)$; therefore $U_u(U_vU_uU_{\tilde{1}+c}\tilde{1}) = U_{U_uv}U_{\tilde{1}+c}\tilde{1} = U_{\tilde{1}-c}U_{\tilde{1}+c}\tilde{1} = U_{\tilde{1}-c^2}\tilde{1}$ [by Macdonald] $= \tilde{1} - c'$ for $c' = 2c^2 - c^4 = U_c(\tilde{2} - c^2) \in U_{qa(C)}(\tilde{C}) \subseteq qa^2(C)$ by Absorber Boosting 4.5(3).

(3) Set $d := x - y$; then $U_u d \in qa^2(B)$ and $U_u v \equiv_{qa(B)} \tilde{1}$ together imply $x - y = U_{\tilde{1}}d \equiv_B U_{U_uv}d$ [using (1)] $= U_uU_vU_u\,d \in U_uU_v(qa^2(B))$ [by hypothesis] $\subseteq B$ [since $qa^2(B)$ absorbs $U_{\tilde{J}}U_{\tilde{J}}$ by Higher Absorber Definition 4.3(3)].

(4) If $U_x y$ is invertible mod $qa(B)$ then by Higher Invertibility (2) it is also invertible mod $qa^4(B)$: $U_{U_xy}v \equiv_{qa^4(B)} \tilde{1}$; then $\tilde{1} \equiv_{qa^4(B)} U_x(U_yU_xv) = U_x(U_yw)$ for $w := U_xv$ shows that $x$ is invertible mod $qa^4(B)$ too, whence $U_x\tilde{1} = U_{\tilde{1}}(U_x\tilde{1}) \equiv_{qa^3(B)} U_{U_xU_yw}(U_x\tilde{1})$ [by Multiplication (1)] $= U_xU_y(U_wU_yU_x(U_x\tilde{1}))$ [by the Fundamental Formula] implies $\tilde{1} \equiv_{qa(B)} U_y(U_wU_yU_x(U_x\tilde{1}))$ by Cancellation (3) of $U_x$, and $y$ is invertible too mod $qa(B)$.

(5) If $U_u v \equiv_{qa(B)} \tilde{1}$ then $U_{u'}v \equiv_{qa(B)} U_uv$ [by Multiplication (1)] $\equiv_{qa(B)} \tilde{1}$, so $u'$ is invertible mod $qa(B)$ too.

(6) If $z^n \in qa(B)$ then $U_{\tilde{1}-z}\big((\tilde{1} + z + \dots + z^{n-1})^2\big) = (\tilde{1} - z^n)^2 \equiv_{qa(B)} \tilde{1}$ [since $z^n$, and hence $z^{2n} = U_{z^n}\tilde{1}$, lies in the inner ideal $qa(B)$], so $z$ is q.i. mod $qa(B)$.

We have now gathered enough tools concerning modular quasi-invertibility to resume the proof of the Theorem.

Step 5: For every monomial $w$ we can transform $\tilde{1} - w$ to be congruent to $\tilde{1}$: we will find a polynomial $p(t)$ with constant term 1 so that

(5) $\quad U_{p(w)}(\tilde{1} - w) \equiv_{qa^2(Q)} \tilde{1}.$
Indeed, from Step 3 $w$ is properly nilpotent mod $Q$, hence mod any higher absorber by Step 2, so there is an $m$ so that $w^m \in qa^2(Q)$. Then for all $k \geq 2m+1$ we have $w^k \in U_{w^m}(\tilde{J}) \subseteq U_{qa^2(Q)}(\tilde{J}) \subseteq qa^2(Q)$ [by innerness]. Now BECAUSE WE HAVE A SCALAR $\frac12$, WE CAN EXTRACT SQUARE ROOTS: the binomial series $(1-t)^{-1/2} = \sum_{i=0}^\infty \beta_i t^i$ has coefficients $\beta_0 = 1$, $\beta_i = (-1)^i\binom{-1/2}{i}$ [by the Binomial Theorem] $= \binom{2i-1}{i}\big(\frac12\big)^{2i-1}$ [by virtuoso fiddling, heard in Ex. 4.7 below] $\in \mathbb{Z}[\frac12] \subseteq \Phi$. The partial sum $p(t) = \sum_{i=0}^{2m} \beta_i t^i$ is a polynomial of degree $2m$ with constant term 1, satisfying

$$p(t)^2 = 1 + t + \dots + t^{2m} + \gamma_{2m+1}t^{2m+1} + \dots + \gamma_{4m}t^{4m},$$

because $p(t)^2$ is a polynomial of degree $4m$ which as formal series is congruent mod $t^{2m+1}$ to $\big(\sum_{i=0}^\infty \beta_i t^i\big)^2 = (1-t)^{-1} = \sum_{i=0}^\infty t^i$, so it must coincide with that power series up through degree $2m$. Thus if we evaluate this on $w$ in the hull $\tilde{J}$ we get $U_{p(w)}(\tilde{1} - w) = p(w)^2(\tilde{1} - w)$ [by Macdonald] $= (\tilde{1} + w + \dots + w^{2m} + \gamma_{2m+1}w^{2m+1} + \dots + \gamma_{4m}w^{4m})(\tilde{1} - w) = (\tilde{1} - w^{2m+1}) + \sum_{k=2m+1}^{4m} \gamma_k(w^k - w^{k+1}) \equiv_{qa^2(Q)} \tilde{1}$ [since we noted $w^k \in qa^2(Q)$ for all $k \geq 2m+1$], establishing (5).

Step 6: The elements of $\mathcal{I}$ are q.i. modulo $Q$. In view of Step 1 this means every finite sum $v_n = w_1 + \dots + w_n$ of monomials is q.i. mod $Q$. We of course prove this by induction on $n$, the case $n = 1$ being settled by Step 3 and Nilpotent-Implies-QI 4.9(6). Assume the result for sums of $n$ monomials, and consider a sum $v_{n+1} = w_1 + \dots + w_n + w_{n+1} = v_n + w$ of $n+1$ monomials. We must show $\tilde{1} - v_{n+1}$ is q.i. mod $Q$. We squash it down to a shorter sum (which is therefore q.i. by induction), using the multiplier $p$ of Step 5 to "eliminate" the last term: if $p = p(w)$ as in (5) then

(6)
U_p(1̃ − v_{n+1}) ≡_{qa²(Q)} 1̃ − v_n′
since U_p(1̃ − v_{n+1}) = U_p((1̃ − w) − v_n) ≡_{qa²(Q)} 1̃ − v_n′ [by (5)], where v_n′ := U_p v_n is again a sum of n monomials w_i′ := U_p w_i. Now BY INDUCTION v_n′ is q.i. and 1̃ − v_n′ is invertible mod Q, so by Inverse Equivalence 4.9(5) [here's where we need congruence mod qa²(Q)] U_p(1̃ − v_{n+1}) is invertible mod Q too, so by Invertibility of Factor 4.9(4) 1̃ − v_{n+1} is too, completing our induction.

Now we come to Amitsur's magic wand, which turns quasi-invertibility into nilpotence.

Step 7: Modular Amitsur Polynomial Trick: If x ∈ J has tx q.i. mod Q[t] in J[t], then x is nilpotent mod Q = qa(B).
Indeed, tx q.i. means 1̃ − tx invertible mod Q[t], so by Higher Invertibility 4.9(2) we can find ṽ with U_{1̃−tx} ṽ ≡_{qa³(B)[t]} 1̃, hence by Multiplication of Equivalences 4.9(1) 1̃ − tx = U_{1̃}(1̃ − tx) ≡_{qa²(B)[t]} U_{U_{1̃−tx}ṽ}(1̃ − tx) = U_{1̃−tx} U_ṽ U_{1̃−tx}(1̃ − tx) = U_{1̃−tx} ỹ. Writing such ỹ as ỹ = Σ_{i=0}^N t^i y_i for coefficients y_i ∈ J̃, we will show that this ỹ is congruent to the geometric series:

(7)
y_i ≡_Q x^i.
Since the polynomial ỹ has y_i = 0 for all i > N, this will imply 0 ≡_Q x^i for all i > N, and x will be nilpotent mod Q, and the Trick will have succeeded. We first claim it will be sufficient if c_i′ := y_i − x^i ∈ M_x qa(Q) for M_x the algebra of multiplication operators involving only the element x. At first glance the arbitrarily long strings of multiplications by x in M_x look frightening, since we are down to our last qa and our absorption is reduced by 1 each time we multiply, by Multiplying Equivalences 4.9(1). Luckily, the crucial fact is that no matter how long a string of multiplications by x we have, it may always be shortened to sums of monomials U_{p(x)} ∈ U_{Φ[x]} of length 1, noting that this span absorbs any further multiplications by x in view of V_{x^k} U_{p(x)} = U_{x^k p(x), p(x)} by the Operator Power-Associativity Rules III.5.6(2), where U_{r(x),p(x)} = U_{r(x)+p(x)} − U_{r(x)} − U_{p(x)}, so M_x = U_{Φ[x]}. Therefore M_x qa(Q) = U_{Φ[x]} qa²(B) ⊆ qa(B) = Q since qa² absorbs U_J into qa, and c_i′ will fall in Q and y_i ≡_Q x^i as in (7).

To establish c_i′ ∈ M_x qa(Q), we start from the definition of the y_i as coefficients of ỹ, where 1̃ − tx is equivalent mod qa²(B)[t] = qa(Q)[t] to U_{1̃−tx} ỹ = (1_{J[t]} − tV_x + t²U_x)(Σ_{i=0}^N t^i y_i) = Σ_{j=0}^{N+2} t^j (y_j − V_x y_{j−1} + U_x y_{j−2}). Identifying coefficients of like powers of t, we obtain elements c_k ∈ qa(Q) such that 1̃ + c_0 = y_0, −x + c_1 = y_1 − V_x y_0, 0 + c_j = y_j − V_x y_{j−1} + U_x y_{j−2} for all j ≥ 2. Solving this recursively, we get y_0 = 1̃ + c_0′ for c_0′ := c_0 ∈ qa(Q) ⊆ M_x(qa(Q)); then y_1 = V_x(1̃ + c_0′) − x + c_1 = x + c_1′ for c_1′ := V_x c_0′ + c_1 ∈ M_x(qa(Q)), and if the assertion is true for consecutive j, j+1 then y_{j+2} = V_x(y_{j+1}) − U_x(y_j) + c_{j+2} = V_x(x^{j+1} + c_{j+1}′) − U_x(x^j + c_j′) + c_{j+2} = (2x^{j+2} + V_x(c_{j+1}′)) − (x^{j+2} + U_x(c_j′)) + c_{j+2} = x^{j+2} + c_{j+2}′, where the error term c_{j+2}′ := V_x(c_{j+1}′) − U_x(c_j′) + c_{j+2} ∈ M_x(M_x(qa(Q))) ⊆ M_x(qa(Q)). This completes the recursive construction of the c_i′ ∈ M_x qa(Q), yielding the congruences (7) and establishing the Amitsur Trick.
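The coefficient recursion just solved can be sanity-checked in a toy model: in a commutative associative algebra, U_{1̃−tx} acts as multiplication by (1 − tx)², V_x as multiplication by 2x, and U_x as multiplication by x², and the truncated geometric series Σ t^i x^i is (up to a tail term) the preimage of 1̃ − tx. A minimal Python sketch with polynomials in t represented as coefficient lists (all helper names are ours, not the text's):

```python
def poly_mul(p, q):
    """Multiply polynomials in t given as coefficient lists (index = degree)."""
    out = [0] * (len(p) + len(q) - 1)
    for a, ca in enumerate(p):
        for b, cb in enumerate(q):
            out[a + b] += ca * cb
    return out

x, N = 3, 8  # a sample scalar value standing in for x, and a truncation degree

# U_{1-tx} is multiplication by (1-tx)^2 = 1 - 2x t + x^2 t^2 in this model
U = [1, -2 * x, x * x]
geom = [x ** i for i in range(N + 1)]  # truncated geometric series sum t^i x^i

# (1-tx)^2 * sum_{i<=N} t^i x^i = (1-tx) * (1 - t^{N+1} x^{N+1}),
# i.e. the geometric series is carried back to 1-tx up to a high-degree tail
lhs = poly_mul(U, geom)
rhs = poly_mul([1, -x], [1] + [0] * N + [-(x ** (N + 1))])
assert lhs == rhs

# The recursion y_{j+2} = V_x y_{j+1} - U_x y_j is solved by y_j = x^j:
for j in range(6):
    assert x ** (j + 2) == 2 * x * x ** (j + 1) - x ** 2 * x ** j
```

This only checks the commutative associative shadow of the computation; the point of the proof above is that the same coefficient identities hold mod qa(Q) in an arbitrary Jordan algebra.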
Step 8: If I_J(Q) is quasi-invertible mod Q for all Q = qa(B), for all inner ideals B in all Jordan algebras J, then I_J(Q) is nil mod Q for all B and J.

Indeed, B̃ = B[t] remains an inner ideal in the Jordan algebra J̃ = J[t] = J ⊗ Φ[t] of polynomials in the scalar indeterminate t with coefficients in J, and Q̃ := qa(B̃) = qa(B)[t] = Q[t] [a polynomial z̃ = Σ_i t^i z_i belongs to Q̃ = qa(B̃) iff it absorbs V_{J̃,J̃}, U_{J̃} into B[t] (Φ[t] is automatically absorbed into B[t]), and by equating coefficients of t^i this happens iff each coefficient z_i absorbs V_{J,J}, U_J into B, i.e. belongs to Q = qa(B)], so I_{J̃}(Q̃) = I_J(Q)[t]. Thus if x ∈ I_J(Q) then x̃ = tx ∈ I_{J̃}(Q̃) must remain q.i. modulo Q̃ by Step 6. But this implies x is actually nilpotent modulo Q by Amitsur's Polynomial Trick. Thus every x ∈ I = I_J(Q) is nilpotent mod Q, and Absorber Nilness is established.

This is the crucial result which will make the absorber vanish and create an exceptional heart in primitive algebras.

Exercise 4.9* Fiddle with binomial coefficients to show (−1)^i (−1/2 choose i) = (2i−1 choose i)(1/2)^{2i−1}.
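The two combinatorial facts used in Step 5 — the closed form for the coefficients of (1 − t)^{−1/2} in Exercise 4.9, and the truncation property p(t)² ≡ 1 + t + ⋯ + t^{2m} mod t^{2m+1} — can be checked numerically. A small Python sketch over exact rationals (the helper names are ours, not the text's):

```python
from fractions import Fraction
from math import comb, factorial

def binom_general(a, i):
    """Generalized binomial coefficient C(a, i) = a(a-1)...(a-i+1)/i!."""
    num = Fraction(1)
    for k in range(i):
        num *= (a - k)
    return num / factorial(i)

def alpha(i):
    """Coefficient of t^i in the binomial series (1 - t)^(-1/2)."""
    return (-1) ** i * binom_general(Fraction(-1, 2), i)

# Closed form from Exercise 4.9: alpha_i = C(2i-1, i) * (1/2)^(2i-1)
for i in range(1, 12):
    assert alpha(i) == comb(2 * i - 1, i) * Fraction(1, 2) ** (2 * i - 1)

def poly_mul(p, q):
    """Multiply polynomials given as coefficient lists (index = degree)."""
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for a, ca in enumerate(p):
        for b, cb in enumerate(q):
            out[a + b] += ca * cb
    return out

# The degree-2m partial sum p(t) satisfies p(t)^2 = 1 + t + ... + t^(2m) + higher
# terms, since p(t)^2 agrees with (1-t)^(-1) = sum t^i up through degree 2m.
m = 4
p = [alpha(i) for i in range(2 * m + 1)]
square = poly_mul(p, p)
assert all(square[i] == 1 for i in range(2 * m + 1))
```

Note that every alpha_i lands in Z[1/2], as the closed form makes visible — this is exactly where the scalar ½ is needed.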
4. Problems for Chapter 4

Problem 1. Let B be an inner ideal in J. (1) Show {J, ℓ^n(B)} ⊆ ℓ^{n−1}(B) for all n. (2) If ℓ^{n−1}(B) is an ideal in B, show ℓ^n(B) is an ideal iff {B, J, ℓ^n(B)} ⊆ ℓ^{n−1}(B) iff {J, B, ℓ^n(B)} ⊆ ℓ^{n−1}(B) iff {B, ℓ^n(B), J} ⊆ ℓ^{n−1}(B). (3) Show that if C is an ideal of B and an inner ideal of J, then qa(C) is again an ideal of B and an inner ideal of J. (4) Conclude qa^n(B) is always an ideal of B.

Problem 2. (1) We say z is properly quasi-invertible mod C if 1^{(y)} − z is invertible mod C^{(y)} in J^{(y)} (where C remains inner) for each y ∈ J: U^{(y)}_{1^{(y)}−z}(1^{(y)} − w^{(y)}) ≡_C 1^{(y)} for each y, some w^{(y)} ∈ J. Show that z p.n. mod qa(B) ⇒ z p.q.i. mod qa(B). (2) Show x ≡_{qa(B)} y ⇒ B_{x,a} b ≡_B B_{y,a} b.

Problem 3. Repeat the argument of the Absorber Nilness Theorem to show I_J(qa(B)) is properly nil mod qa(B). (Step 1) Modify the definition of monomial T(z) for T = T_1 ⋯ T_s to include Bergman operators B_{x_i,y_i} (x_i, y_i ∈ J̃) among the T_i; instead of the length of T we work with the width of T, defined to be the sum of the widths of its constituent T_i, where U_{x_i} has width 1 and B_{x_i,y_i} width 2. (Step 3) Show that 4.7(4) holds for these more general T if we replace length by width. (Step 4) Show for every y we can transform every 1^{(y)} − w for monomial w to be congruent to 1^{(y)}: we can find a polynomial p(t) with constant term 1 so that U^{(y)}_{p(w)}(1^{(y)} − w) ≡_{qa²(Q)} 1^{(y)}. (Step 5) Show U^{(y)}_{p(w)}(1^{(y)} − v_{n+1}) ≡_{qa²(Q)} 1^{(y)} − v_n′, where v_n′ is a sum of n new monomials w_i′ = U^{(y)}_{p(w)} w_i = B_{s,y} w_i, monomials BY THE INCLUSION OF BERGMANN B's IN OUR MONOMIALS (w_i′ has width 2 greater than the original w_i) [recall that p had constant term 1, so we can write p = 1^{(y)} − s for s = s(w, y) ∈ J]. (Step 6) Show from Amitsur's Polynomial Trick that if tx remains p.q.i. modulo qa(B)[t] in J[t], then x is p.n. modulo qa(B).
CHAPTER 5
Primitivity

The next new concepts are those of modular inner ideals and primitivity, which again have an illustrious associative pedigree but require a careful formulation to prosper in the Jordan setting.
1. Modularity

In his work with the radical, Zel'manov had already introduced the notions of modular inner ideal and primitive Jordan algebra. In the associative case a left ideal L is modular if it has a modulus c, an element which acts like a right unit ("modulus" in the older literature) for A modulo L: ac ≡ a modulo L for each a ∈ A, or globally A(1̂ − c) ⊆ L. If A is unital then all left ideals are modular with modulus c = 1. Such a c remains a modulus for any larger L′ ⊇ L, any translate c + b by b ∈ L is another modulus, and as soon as L contains one of its moduli then it must be all of A. The concept of modularity was invented for the Jacobson radical in non-unital algebras: in the unital case Rad(A) is the intersection of all maximal left ideals, in the non-unital case it is the intersection of all maximal modular left ideals. To stress that the moduli are meant for nonunital situations, we consistently use 1̂ to indicate the external nature of the unit. (If an algebra already has a unit, we denote it simply by 1.)

In Jordan algebras we don't have left or right, we have inner and outer. The analogue of a left ideal L is an inner ideal B, and the analogue of a right unit is an outer unit for J modulo B: U_{1̂−c}(J) ⊆ B. This turns out not to be quite enough to get a satisfactory theory.

Definition 5.1 (Modular). (1) An inner ideal B in a Jordan algebra J is modular with modulus c if

(i) U_{1̂−c}(J) ⊆ B;  (ii) c − c² ∈ B;  (iii) {1̂ − c, Ĵ, B} ⊆ B.

The last condition can be written intrinsically (without hats) as

(iiia) {1̂ − c, J, B} ⊆ B;  (iiib) {c, B} ⊆ B.
(2) A modulus c remains a modulus for a larger B′ ⊇ B only if {1̂ − c, Ĵ, B′} ⊆ B′, which may not happen for all enlargements B′, but does happen for "ideal enlargements" by the Ideal Enlargement Principle: c remains a modulus for any B′ = B + I for I ◁ J. Indeed, note that (1)(i)–(ii) certainly hold in any enlargement, and (iii) holds in any ideal enlargement since {1̂ − c, Ĵ, B} ⊆ B by (iii) for B and {1̂ − c, Ĵ, I} ⊆ I by definition of ideal.

Exercise 5.1* (1) Check that if J is unital then indeed all inner ideals are modular with modulus 1. (2) Show that all inner B in J remain inner in Ĵ with modulus 1̂, but never modulus c. (3) Show that an inner ideal B in J has modulus c ∈ J iff its "c-hull" B̂_c := Φ(1̂ − c) + B is inner in Ĵ with modulus 1̂ and modulus c, in which case B̂_c ∩ J = B. (4) Show that if B′ is an inner ideal in Ĵ with a modulus c ∈ J, then its "contraction" B := J ∩ B′ is an inner ideal in J with modulus c, and B̂_c = B′. (5) Conclude that hull and contraction are inverse bijections between c-modular inner ideals in J and Ĵ, in particular B = J ⟺ B̂_c = Ĵ ⟺ c ∈ B. (6) Conclude that the modular inner ideals are precisely the traces (intersections) on J of those inner ideals of the immortal hull which have a mortal modulus c ∈ J in addition to their immortal modulus 1̂.

It is important that we can adjust the modulus to adapt to circumstances.

Lemma 5.2 (Modulus Shifting). (1) If c is a modulus for an inner ideal B in a Jordan algebra J, so is any translate or power of c: c + b, c^n are moduli for any b ∈ B, n ≥ 1. Indeed, powers are merely translates: c − c^n ∈ B for all n ≥ 1. (2) The Modulus Exclusion Principle says that a proper inner ideal cannot contain its modulus: B has a modulus c ∈ B iff B = J.

proof. (1) For translation shifting from c to c′ = c + b, we must verify conditions 5.1(i)′–(iii)′ for c′. (i′) holds since U_{1̂−c′} = U_{1̂−c−b} = U_{1̂−c} − U_{1̂−c,b} + U_b, where each piece maps J into B by Modularity (i), (iii), and innerness of B; (ii′) holds since c′ − (c′)² = (c − c²) + (b − {c, b} − b²), where the first term is in B by Modularity (ii) and the second term by Modularity (iiib) and the fact that inner ideals are subalgebras; (iii′) holds since U_{1̂−c′,B} = U_{1̂−c,B} − U_{b,B} maps Ĵ into B, since the first term does by Modularity (iii) and the second by innerness of B.
For power shifting from c to c^n we use recursion on n, n = 1 being trivial, n = 2 being (ii), and for the recursion step to n + 2 we note (c^{n+2} − c) − 2(c^{n+1} − c) + (c^n − c) = c^{n+2} − 2c^{n+1} + c^n = U_{1̂−c} c^n ∈ B by Modularity (i); since c^{n+1} − c and c^n − c lie in B by the recursion hypothesis, we see c^{n+2} − c does too, completing the recursion step.

(2) follows since if c ∈ B then by (1) the translate c′ = c − c = 0 would be a modulus, which by (i) clearly implies J = B.

Exercise 5.2 Verify that for a left ideal L of an associative algebra A with associative modulus c (1) any translate or power of c is again a modulus; (2) the inner ideal B = L in J = A⁺ is modular with Jordan modulus c.
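The power-shifting computation can be verified in the special (associative) model, where U_{1̂−c} a = (1̂−c)a(1̂−c). A minimal Python sketch representing elements of Φ[c] as coefficient lists (the helper names are ours, not the text's):

```python
def poly_mul(p, q):
    """Multiply polynomials in c given as coefficient lists (index = degree)."""
    out = [0] * (len(p) + len(q) - 1)
    for a, ca in enumerate(p):
        for b, cb in enumerate(q):
            out[a + b] += ca * cb
    return out

def poly_add(p, q):
    """Add coefficient lists of possibly different lengths."""
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0)
            for i in range(n)]

def scale(k, p):
    return [k * coeff for coeff in p]

def c_pow(n):
    return [0] * n + [1]  # the monomial c^n

n = 5
# U_{1-c} c^n = (1-c) c^n (1-c) = c^n - 2 c^{n+1} + c^{n+2}
u = poly_mul(poly_mul([1, -1], c_pow(n)), [1, -1])
assert u == [0] * n + [1, -2, 1]

# Recursion step: (c^{n+2} - c) = U_{1-c} c^n + 2(c^{n+1} - c) - (c^n - c),
# so if c^{n+1} - c and c^n - c already lie in B, then c^{n+2} - c does too.
lhs = poly_add(c_pow(n + 2), scale(-1, c_pow(1)))
rhs = poly_add(poly_add(u, scale(2, poly_add(c_pow(n + 1), scale(-1, c_pow(1))))),
               scale(-1, poly_add(c_pow(n), scale(-1, c_pow(1)))))
assert lhs == rhs
```

Again this is only the associative shadow; the Jordan proof above uses exactly the same linear combination, with U_{1̂−c} c^n ∈ B supplied by Modularity (i).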
An important source of modular inner ideals is structural transformations.

Example 5.3 (Structural Inner Ideal). (1) If T is a structural transformation on J, with T* as in 1.2, which are both "congruent to 1̂ mod J" as in 1.4(3),

(1̂ − T)(Ĵ) + (1̂ − T*)(Ĵ) ⊆ J,

then the structural inner ideal T(J) is a modular inner ideal: T(J) is inner with modulus c = 1̂ − T(1̂) ∈ J.

(2) In particular, B_{x,y}(J) is inner with modulus c = {x, y} − U_x y², and U_{1̂−x}(J) is inner with modulus c = 2x − x².

proof. (1) Acting separately on 1̂ and J, the congruence condition in (1) reduces to

(1′) T(1̂) = 1̂ − c, T*(1̂) = 1̂ − c* for c, c* ∈ J; T(J) + T*(J) ⊆ J.

The linear subspace B := T(J) is an inner ideal (cf. Structural Innerness III.18.6) since U_{Tx}(Ĵ) = T U_x T*(Ĵ) [by structurality] ⊆ T(J) = B. To check that c is a modulus: for Modularity 5.1(i) we have U_{1̂−c}(J) = U_{T(1̂)}(J) = T U_{1̂} T*(J) [by structurality] ⊆ T(J) = B; for Modularity 5.1(ii) we have c − c² = (1̂ − c) − (1̂ − c)² = T(1̂) − T(1̂)² [by (1′)] = T(1̂) − U_{T(1̂)}(1̂) = T(1̂ − T*(1̂)) [by structurality] = T(c*) ∈ T(J) [by (1′)] = B; and for Modularity 5.1(iii) we have {1̂ − c, Ĵ, B} = {T(1̂), Ĵ, T(J)} [by (1′)] = T U_{1̂,J}(T*(Ĵ)) [by structurality] ⊆ T(J) = B.

Applying (1) to T = B_{x,y} and T = U_{1̂−x} = B_{x,1̂} gives (2).
Exercise 5.3 Show T = 1̂ − S is structural with T* = 1̂ − S* on Ĵ iff S, S* satisfy U_{x,Sx} − (S U_x + U_x S*) = U_{S(x)} − S U_x S*, and show T(J) ⊆ J iff S(J) ⊆ J, and dually for T*, S*.
2. Primitivity

An associative algebra A is primitive if it has a faithful irreducible representation, or in more concrete terms if there exists a left ideal L such that A/L is a faithful irreducible left A-module. Irreducibility means L is maximal modular, while faithfulness means L has zero core (the maximal ideal of A contained in L, which is just its right absorber {z ∈ A | z A ⊆ L}); this core condition (that no nonzero ideal I is contained in L) means I + L > L, hence in the presence of maximality means I + L = A, and L complements all nonzero ideals. Once a modular L′ has this property, it can always be enlarged to a maximal modular left ideal L which is even more complementary. In the Jordan case A/L is not going to provide a representation anyway, so there is no need to work hard to get the maximal L (the "irreducible" representation); any complementary L′ will do.

Definition 5.4 (Primitivity). A Jordan algebra is primitive if it has a primitizer P, a proper modular inner ideal P ≠ J which satisfies the Complementation Principle:

I + P = J for all nonzero ideals I of J.

Another way to express this is the Modulus Containment Principle:

every nonzero ideal I contains a modulus for P.

Indeed, if I + P = J then we can write the modulus for P as c = i + p, in which case the translate i = c − p remains a modulus and lies in I; conversely, if I contains a modulus c for P then by Ideal Enlargement 5.1(2) it remains a modulus for the inner ideal I + P, which then by Modulus Exclusion 5.2(2) must be all of J.

The terminology "semiprimitive" for Jacobson-radical-less algebras means "subdirect product of primitives", so we certainly want to be reassured that totally-primitive algebras are always semiprimitive!

Proposition 5.5 (Primitive). (1) Any primitive Jordan algebra is semiprimitive, hence nondegenerate: J primitive ⇒ Rad(J) = 0 ⇒ J nondegenerate. (2) In particular, primitive algebras have no nil ideals; indeed, they have no ideals nil modulo P: if an ideal I is nil mod P then I = 0.
(3) Although quadratic absorber and core do not coincide for general inner ideals, the core, absorber, and s-identities all vanish on the primitizer: core(P) = qa(P) = isp(P) = 0.

proof. (1) If I = Rad(J) ≠ 0 then I contains a modulus c for P by the Modulus Containment Principle 5.4; then U_{1̂−c}(J) ⊆ P < J by Modularity 5.1(i), yet 1̂ − c is invertible in Ĵ since the radical element c is quasi-invertible, forcing U_{1̂−c}(J) = J, a contradiction. Once J is semiprimitive, Rad(J) = 0, it is nondegenerate by Hereditary Radical 1.11(5).

(2) If I were nonzero it would contain a modulus c for P by Modulus Containment 5.4, hence by nilness modulo P some power c^n would fall in P, yet remain a modulus for P by Modulus Shifting 5.2(1), contrary to Modulus Exclusion 5.2(2).

(3) We introduced Containment 5.4 as a surrogate for corelessness; note if I = core(P) ⊆ P were nonzero then Complementation 5.4 would imply P = P + I = J, contrary to its assumed properness, so I must vanish. The vanishing of qa(P) is much deeper: the ideal I = I_J(qa(P)) is usually not contained in P, but it is nil mod P by Absorber Nilness 4.7, so from (2) we see I = 0 and hence its generator qa(P) = 0 too. Once the absorber vanishes, Quadratic Absorber 4.5(4) says isp(P) = 0 by nondegeneracy (1).

Recall that if M is a maximal modular left ideal in an associative algebra A, then removing the core K = Core(M) produces a primitive algebra A/K with faithful irreducible representation on the left module A/M. In the Jordan case we don't have representations on modules, but we still have maximal modular inner ideals and cores. The core of an inner ideal B (or any subspace) is the maximal 2-sided ideal contained in B (the sum of all such ideals). Primitizers always have zero core, and once again removing the core from a maximal modular inner ideal is enough to create primitivity.

Proposition 5.6 (Coring).
If B is maximal among all proper c-modular inner ideals, then removing its core creates a primitive algebra: J/Core(B) is primitive with primitizer B/Core(B).

proof. We have K = Core(B) ⊆ B < J, so the image B̄ := B/K < J/K =: J̄ remains a proper inner ideal in J̄ with modulus c̄, by taking images of Modularity 5.1(i)–(iii), but now it also has the Complementation Property 5.4 for a primitizer. Indeed, if Ī is nonzero in J̄ then its pre-image is an ideal I > K in J which is not contained in the core, hence not entirely contained in B; then the ideal enlargement I + B > B is still a c-modular inner ideal by Ideal Enlargement 5.1(2), so by MAXIMALITY OF B it must not be proper, so I + B = J and therefore Ī + B̄ = J̄ as required for Complementation 5.4.
3. Semiprimitivity

We are finally ready to connect the Jacobson radical to primitivity, showing that the radical vanishes iff the algebra is a subdirect product of primitive algebras, thereby justifying our use of the term "semiprimitive" for such algebras. Recall that the Jacobson radical of an associative algebra A is the intersection of all maximal modular left ideals M, and also the intersection of their cores K = Core(M). In the Jordan case we have a similar core characterization.

Theorem 5.7 (Semiprimitivity). (1) The Jacobson radical is the intersection of the cores of all maximal modular inner ideals: Rad(J) = ∩{Core(B) | for some c, B is maximal among all c-modular inner ideals}. (2) J is semiprimitive iff it is a subdirect product J ↪ ∏_α J_α of primitive Jordan algebras J_α = J/K_α, i.e. iff the co-primitive ideals K_α ◁ J separate points, ∩ K_α = 0.

proof. (1) Rad(J) ⊆ ∩ K since for each K = Core(B) we have J̄ = J/K primitive by Coring 5.6, and therefore semiprimitive by the Primitive Proposition 5.5(1). Thus the image of Rad(J), as a q.i. ideal in the semiprimitive J/K, must vanish, and Rad(J) ⊆ K. The converse ∩ K ⊆ Rad(J) is more delicate: if z ∉ Rad(J) we must construct a maximal modular B (depending on z) with z ∉ Core(B). Here the Elemental Characterization 1.10(2) of the radical as the p.q.i. elements comes to our rescue: z ∉ Rad(J) ⇒ z not p.q.i. ⇒ some (z, y) not q.i., B₀ = B_{z,y}(J) < J [by non-surjectivity Criterion 1.8(4iii)] with modulus c = {z, y} − U_z y² [by B-Innerness 5.3(2)]. Properness B < J of a c-modular inner ideal is equivalent [by Modulus Exclusion 5.2(2)] to c ∉ B, so we can apply Zorn's Lemma to find a maximal c-modular (proper) inner ideal B containing B₀, and we claim z ∉ Core(B): if z were in the ideal Core(B) ⊆ B, so would be c = {z, y} − U_z y², a contradiction.
(2) J semiprimitive ⟺ ∩ K = Rad(J) = 0 [by definition and (1)] ⟺ the co-primitive ideals K [by Coring 5.6] separate points, and in this case J is a subdirect product of the primitive J̄ = J/K. If J ↪ ∏_α J_α is a subdirect product of SOME primitive algebras J_α = J/K_α (perhaps unrelated to maximal modular inner ideals), then by Primitive 5.5(1) the radical image π_α(Rad(J)) in each primitive J_α is zero, so Rad(J) ⊆ ∩ Ker(π_α) = ∩ K_α = 0 by definition of subdirect product, and J is semiprimitive.

We remark that also in the Jordan case the radical is the intersection of all maximal modular inner ideals, not just their cores (see Problem 2).
4. Imbedding Nondegenerates in Semiprimitives

There are standard methods, familiar from associative theory, for shrinking the semiprimitive radical into the nondegenerate radical. We will show that, by a multi-step process passing through various scalar extensions and direct products, we can imbed any nondegenerate algebra into a semiprimitive algebra, a subdirect product of primitive algebras. In the end we want to apply this result to prime algebras, and assuming primeness from the start saves a few steps, so we will only imbed prime nondegenerate algebras (the general case is left to Problem 6). Each imbedding step creates a larger algebra where more of the radical vanishes, starting from an algebra where the nondegenerate radical vanishes and eventually reaching an algebra where the entire Jacobson radical vanishes. Furthermore, at each step in the process the strict identities are preserved, so that the final semiprimitive algebra satisfies exactly the same strict identities as the original, and each primitive factor of the subdirect product inherits these identities (but might satisfy additional ones).

Recall that a polynomial is said to vanish strictly on J if it vanishes on all scalar extensions of J. This happens iff it vanishes on the "generic" extension J_{Φ[T]} = J[T] of polynomials (with coefficients from J) in a countable set of indeterminates T. A polynomial which vanishes on J but not on all extensions only vanishes "fortuitously" on J, due to a lack of sufficient scalar power; only the polynomials which vanish strictly are "really" satisfied by J. For example, the identity x² = x defining Boolean algebras does not hold strictly; indeed, the only scalars that Boolean algebras can tolerate are those in Z₂: Boolean algebras have a fleeting existence over the scalars Z₂, then disappear.

Theorem 5.8 (Semiprimitive Imbedding). Every prime nondegenerate Jordan algebra J can be imbedded in a semiprimitive Jordan algebra J̃ over a big algebraically closed field Ω, in such a way that J and J̃ satisfy exactly the same strict identities. In particular, J is i-exceptional iff J̃ is. Here we may take J̃ = ∏_α J_α to be a direct (not merely subdirect) product of primitive algebras J_α, where the J_α are also algebras over the big algebraically closed field Ω.
proof.
As usual, we break this long proof into small steps.
Step 1: Imbed J in an algebra J₁ over a field Φ₀ where no element of J is trivial in J₁. By primeness of J its centroid Γ is (by the Centroid Theorem III.1.13) an integral domain acting faithfully on J. Thus the usual construction of the algebra of fractions Γ⁻¹J leads (just as in the "scalar case" constructing the field of fractions of an integral domain) to a Jordan algebra over the field of fractions Φ₀ = Γ⁻¹Γ. The fact that the action of Γ is faithful guarantees that J is imbedded in J₁ := Γ⁻¹J, and we claim

(1) there is no nonzero z ∈ J which is trivial in J₁.

Certainly by nondegeneracy of J none of its elements can be trivial in this larger algebra, U_z J₁ ⊇ U_z J ≠ 0. Thus we can pass from a prime nondegenerate algebra to an algebra over a field where J avoids any possible degeneracy in J₁. Primeness is used only to get to a field quickly. In fact, J₁ remains prime nondegenerate (see Problem 7), but from this point on we discard primeness and use only nondegeneracy.

Step 2: Imbed J₁ in an Φ₀-algebra J₂ where no element of J is properly nilpotent of bounded index (p.n.b.) in J₂. Let J₂ := J₁[T] = J₁ ⊗_{Φ₀} Φ₀[T] = J̃ be the algebra of polynomials in an infinite set of scalar indeterminates T. In fact, in a pinch you can get by with just one indeterminate (see Problem 5). We claim

(2) there is no nonzero z ∈ J such that z^{(n,J̃)} = 0 for some n = n(z).

Indeed, if some nth power vanished for all ỹ ∈ J̃ then all mth powers for m ≥ n would vanish, since z^{(n+k,ỹ)} = {z^{(n,ỹ)}, ỹ, z^{(k,ỹ)}} = 0. The p.n.b. index of z is the smallest n so that z^{(m,J̃)} = 0 for all m ≥ n. We claim no nonzero z can have proper index n > 1. Suppose z^{(m,J̃)} = 0 for all m ≥ n > 1, but that some z^{(n−1,ỹ)} ≠ 0. Because T is infinite we can choose a t ∈ T which does not appear in the polynomial ỹ. For any x ∈ J set w̃ := ỹ + tx. Since n > 1 we have m := 2n − 2 = n + (n − 2) ≥ n, and therefore by definition of the p.n.b. index we have z^{(2n−2,w̃)} = z^{(m,w̃)} = 0. Then the coefficients of all powers of t in the expansion of z^{(2n−2,w̃)} = 0 must vanish, in particular that of t itself.
The coefficient of t¹ in z^{(2n−2,w̃)} = (U_z^{(w̃)})^{n−2} z^{(2,w̃)} = (U_z U_w̃)^{n−2} U_z w̃ consists of all terms with a single factor tx and the rest of the factors z and ỹ, namely

0 = (U_z U_ỹ)^{n−2} U_z x + Σ_{k=1}^{n−2} (U_z U_ỹ)^{n−2−k} U_z U_{ỹ,x} (U_z U_ỹ)^{k−1} U_z ỹ
  = U_{z^{(n−1,ỹ)}} x + Σ_{k=1}^{n−2} {z^{(n+k−1,ỹ)}, x, z^{(n−k−1,ỹ)}}
  = U_{z^{(n−1,ỹ)}} x,

since z^{(n+k−1,ỹ)} = 0 for k ≥ 1 by hypothesis of p.n.b. bound n. Here we have used Macdonald to write the sum in terms of ỹ-homotope powers, since the kth term can be written in associative algebras as an alternating product

{z, ỹ, z, ỹ, …, ỹ, z, x, z, ỹ, z, …, ỹ, z} = {z^{(n+k−1,ỹ)}, x, z^{(n−k−1,ỹ)}},

where n+k−1 factors z alternate with factors ỹ to the left of x, and n−k−1 factors z alternate with factors ỹ to the right. If we order the variables in T, and the lexicographically leading term of 0 ≠ z^{(n−1,ỹ)} ∈ J̃ is 0 ≠ z′ ∈ J, then the lexicographically leading term of U_{z^{(n−1,ỹ)}} x is U_{z′} x. Thus we have U_{z′} x = 0 for all x ∈ J, so 0 ≠ z′ is trivial in J, contradicting NONDEGENERACY of J. Thus n > 1 is impossible, as we claimed. Thus the only possible index is n = 1, but then z = z^{(1,ỹ)} = 0 anyway. This establishes (2). Thus we can pass to an algebra over a field where J avoids any proper-nilpotence-of-bounded-index in J₂.

Step 3: Imbed J₂ in an Φ₀-algebra J₃ so that no element of J is properly nilpotent in J₃. An algebra J always imbeds as constant sequences z ↦ (z, z, …) in the sequence algebra Seq(J) = ∏₁^∞ J. Let J₃ := Seq(J₂) consist of all sequences from J₂, where J₂ is identified with the constant sequences. We claim

(3)
there is no nonzero z ∈ J which is properly nil in J₃.
Indeed, by Step 2 there is no global bound on the index of nilpotence of z in the various J₂^{(y)}'s for y ∈ J₂: for each n there is a y_n ∈ J₂ with z^{(n,y_n)} ≠ 0; then the element ỹ = (y₁, y₂, y₃, …) and the copy z̃ = (z, z, z, …) live in the sequence algebra J₃, but for any n we have
z̃^{(n,ỹ)} = (z^{(n,y₁)}, z^{(n,y₂)}, …, z^{(n,y_n)}, …) ≠ (0, 0, …, 0, …), since it differs from zero in the nth place, so z̃ is not nil in the homotope J₃^{(ỹ)}, i.e. z is not properly nilpotent. Thus we can pass to an algebra over a field where J avoids all proper nilpotence whatsoever in J₃.

Step 4: Imbed J₃ in an algebra J₄ over a big algebraically closed field Ω so that no element of J is properly quasi-invertible in J₄. If J₄ := Ω ⊗_{Φ₀} J₃ for Ω algebraically closed with |Ω| > dim_{Φ₀} J₃ = dim_Ω J₄,¹ then we claim

(4) no nonzero z ∈ J is properly quasi-invertible in J₄,

because any such z would be properly nilpotent in J₄ (hence even more properly in J₃), since by Amitsur's Big Resolvent Trick 3.3(2) the Jacobson radical of a Jordan algebra over a big field is properly nil, and by Step 3 there aren't any such elements in J. Thus we can pass to an algebra over a big algebraically closed field where J avoids all proper quasi-invertibility in J₄.

Step 5: Imbed J₄ in a semiprimitive algebra J₅ which satisfies exactly the same strict identities as J. Once we have passed to an algebra J₄ where J avoids all proper quasi-invertibility, we can surgically remove the proper quasi-invertibility to create a semiprimitive algebra, yet without disturbing the original algebra J: (4) and the Elemental Characterization 1.10(2) guarantee J ∩ Rad(J₄) = 0, so J remains imbedded in the semiprimitive Ω-algebra J₅ := J₄/Rad(J₄) (recall Radical Surgery 1.12). Moreover, the scalar extension J₁ inherits all strict identities from J, the scalar extension J₂ inherits all strict identities from J₁, the direct product J₃ inherits all identities from J₂, the scalar extension J₄ inherits all strict identities from J₃, and the quotient J₅ inherits all identities from J₄. Thus J₅ inherits all strict identities from J, and conversely the subalgebra J ⊆ J₅ inherits all identities from J₅, so they have exactly the same strict identities.

At last we have shrunken the Jacobson radical away to zero, and the Ω-algebra J̃ := J₅ now fulfills all the requirements of our theorem: it is semiprimitive and has the same strict identities as J. By Semiprimitivity 5.7(2), semiprimitive Ω-algebras are subdirect products J̃ ↪ ∏_α J̃_α for primitive Ω-algebras J̃_α = J̃/K̃_α, for Ω-ideals K̃_α with ∩ K̃_α = 0. Here the primitive J̃_α inherit all strict identities in J̃ from J₅, hence from the original J, and the algebraically closed field Ω remains big for J and even bigger for each J̃_α: |Ω| > dim_Ω J₄ ≥ dim_Ω J₅ = dim_Ω J̃ ≥ dim_Ω J̃_α.

¹Recall that dimension stays the same under scalar extension: if the x_i are a basis for J₃ over Φ₀ then the 1 ⊗ x_i are a basis for Ω ⊗_{Φ₀} J₃ over Ω.

Exercise 5.8* In quadratic Jordan algebras z^{(n,ỹ)} = 0 does NOT imply z^{(m,ỹ)} = 0 for all m ≥ n. Show instead (in a pacific manner, with no bullets, only U's) that z^{(n,ỹ)} = 0 implies z^{(m,ỹ)} = 0 for all m ≥ 2n.
5. Problems for Chapter 5

Problem 1. (1) Show that if J is unital, so Ĵ = Φ1̂ ⊞ J, then all structural transformations T on J extend to T̂ := 1_Φ ⊕ T, T̂* := 1_Φ ⊕ T*, which are congruent to 1̂ mod J via c = e − T(e). [Since all inner ideals of J are modular with modulus e, by Shifting 5.2 it is not surprising that c = e − T(e) is a modulus for T(J)!] (2) Show that a structural transformation T is congruent to 1̂ mod J iff S := 1̂ − T, S* := 1̂ − T* satisfy S(J) + S*(J) ⊆ J, S(1̂) := c, S*(1̂) := c* ∈ J. (3) Show that the set S of structural transformations congruent to 1̂ mod J forms a semigroup: 1_J ∈ S with c = c* = 0, and if T₁, T₂ ∈ S and S_i(1̂) = c_i then T₁T₂ ∈ S with S₁₂(1̂) = c₁ + T₁(c₂) = c₁ + c₂ − S₁(c₂). (4) Conclude that if T_i ∈ S and S_i(1̂) = c_i then T₁ ⋯ T_n ∈ S with S₁…n(1̂) = c₁ + T₁(c₂) + T₁T₂(c₃) + ⋯ + T₁ ⋯ T_{n−1}(c_n) = Σ_i c_i − Σ_{i<j} S_i(c_j) + ⋯.

Problem 5. (2) Suppose n > 1, z̃^{(m,J̃)} = 0 but z̃^{(n−1,ỹ)} ≠ 0, and set s = t^f, f = 4nd; show 0 = z̃^{(2n,ỹ+sx)} = Σ_{k=0}^{2n−1} s^k p_k(z̃, ỹ, x), where each s^k p_k(z̃, ỹ, x) is a sum of powers t^j whose exponents lie in the range fk ≤ j < f(k+1). Since the ranges are non-overlapping, identify coefficients in the range f ≤ j < 2f to conclude U_{z̃^{(n−1,ỹ)}} x = 0. (3) Show nondegeneracy of J̃ leads to a contradiction z̃^{(n−1,ỹ)} = 0, so again the only possibility is n = 1, z̃ = 0.

Problem 6. Strengthen the Semiprimitive Imbedding Theorem 5.8 to show that any nondegenerate Jordan algebra J (prime or not) is imbedded in a semiprimitive algebra which satisfies exactly the same strict identities as J does. (1) If J₂ := J[T] for a countable set of indeterminates T, show J ∩ PNil_bd(J₂) ⊆ M(J). (2) If PNil_bd(J) denotes the set of all p.n.b. elements of J, and J₃ := Seq(J₂), show J₂ ∩ PNil(J₃) ⊆ PNil_bd(J₂). (3) If J₄ := J₃[t], show J₃ ∩ PQI(J₄) ⊆ PNil(J₃). (4) Conclude that J is imbedded in J̃ = J₄/Rad(J₄) having exactly the same strict identities as J.

Problem 7.
(1) If J is prime in the quadratic sense of having no nonzero ideals I, K which are quadratically orthogonal, U_I(K) = 0, show that the quadratic centroid Γ_q(J) := {T ∈ End(J) | T U_x = U_x T, T V_x = V_x T, U_{T(x)} = T² U_x for all x ∈ J} is a domain acting faithfully. (2) Show that in the presence of ½ the quadratic centroid coincides with the ordinary centroid of J as a linear Jordan algebra (cf. 1.11). (3) Show that the algebra of fractions J₁ = Γ⁻¹J of a prime nondegenerate algebra J by its centroid (appearing in Step 1 of the proof of 5.8) is again prime nondegenerate.
CHAPTER 6
The Primitive Heart

The final concept is that of the heart. It is difficult to have a heart: for nice algebras, there must be a simple ideal at the algebra's core, which makes the whole algebra but a heartbeat away from simplicity. Zel'manov made the amazing anatomical discovery that i-exceptional algebras always have hearts (like Tin Woodsmen), consisting of the values taken on by all s-identities. This is the key to the classification of primitive exceptional algebras over big fields (after which it is just a mopping-up operation to classify prime exceptional algebras in general).
1. Hearts and Spectra

Just as in the associative case, the heart is the minimal nonzero ideal.

Definition 6.1 (Heart). The heart of a Jordan algebra is its smallest nonzero ideal, if such exists: 0 ≠ ♡(J) = ∩{I ◁ J | I ≠ 0}.

Of course, most algebras are heartless, and many hearts are trivial (for A = ΦE₁₁ + ΦE₁₂ ⊆ M₂(Φ) over a field Φ, the heart ♡ = ΦE₁₂ is trivial). But if there happens to be a semiprimitive heart, bounding the spectra of its elements forces simplicity of the heart.

Proposition 6.2 (Hearty). Hearts have the following properties: (1) If J has a heart ♡(J), then J is indecomposable, J ≠ J₁ ⊞ J₂. (2) If ♡(J) has a unit element, then J is all heart, ♡(J) = J. (3) J is simple iff it is all heart and nontrivial: ♡(J) = J nontrivial. (4) J is simple with capacity if ♡(J) has a capacity, in particular if J is semiprimitive and the elements of ♡(J) have bounded spectra over a big field.

proof. (1) If J = J₁ ⊞ J₂ we would have nonzero ideals J₁, J₂ with J₁ ∩ J₂ = 0, resulting in heartlessness.
(2) If ♥ has a unit e then ♥ ⊆ J2(e), and J2(e) + J1(e) = Ue(J) + U_{e,1̂−e}(J) ⊆ ♥ since ♥ is an ideal, which implies ♥ = J2(e), J1(e) = 0, J = J2(e) ⊕ J0(e); so by (1) J0(e) = 0 and J = J2(e) has unit e. (3) Here ⇒ is clear, and ⇐ holds since any ideal I ≠ 0 has I ⊇ ♥ = J. (4) If ♥ has capacity then by Definition III.20.1 it has a unit, so by (2) ♥ = J is all heart with a nontrivial idempotent, hence by (3) is simple. The last statement follows since if J is semiprimitive over a big field the ideal ♥ inherits semiprimitivity by Ideal Inheritance 1.11(4), and the field is still just as big for ♥, so by Bounded Spectrum 3.5(3) the ideal ♥ has a capacity.

For elements of the heart, the absorber spectrum and ordinary spectrum are practically the same.

Lemma 6.3 (Hearty Spectrum). If z is an element of the heart ♥(J) of a Jordan algebra J over a field Φ, then its absorber spectrum almost coincides with its spectrum: SpecΦ,J(z) ⊆ AbsSpecΦ,J(z) ∪ {0} ⊆ SpecΦ,J(z) ∪ {0}.

proof. We already recalled AbsSpec(z) ⊆ Spec(z) in Absorber Spectrum 4.6(2), so the last inclusion is clear. For the first, consider any λ ≠ 0 in Spec(z); then J > B = U_{λ1̂−z}(J) = U_{λ(1̂−w)}(J) = U_{1̂−w}(J) for w = λ⁻¹z [it is important here that Φ is a field]. B is an inner ideal with modulus c = 2w − w² ∈ ♥(J) [since z, w ∈ ♥(J)] by Structural Inner Example 5.3(2). To show λ ∈ AbsSpec(z) we must show q(B) = 0. But J > B implies B contains no modulus by Modular Exclusion 5.2(2), hence no power of a modulus by Modular Shifting 5.2(1), hence I = I_J(q(B)) contains no modulus for B either [since it is nil mod B by Absorber Nilness 4.7]; in particular I doesn't contain c ∈ ♥(J), so I doesn't contain ♥(J), so by definition of the heart I = 0 and q(B) = 0.
2. Primitive Hearts

Zel'manov opened up a primitive i-exceptional algebra and found a very natural heart.

Theorem 6.4 (Primitive Exceptional Heart). A primitive i-exceptional Jordan algebra has heart ♥(J) = isp(J), consisting of all values on J of all s-identities.

proof. isp(J) is an ideal, and it is a nonzero ideal since J is i-exceptional, i.e. not i-special, and therefore does not satisfy all s-identities. We need to show that each nonzero ideal I contains isp(J).
Now I ⊇ isp(J) is equivalent to 0 = (isp(J) + I)/I = isp(J/I) in J/I. But from the Complementation Property 5.3(1) of the primitizer P of J we have J/I = (I + P)/I ≅ P/(P ∩ I) and isp(P/(P ∩ I)) = (isp(P) + P ∩ I)/(P ∩ I) = 0 from isp(P) = 0 by Primitive Proposition 5.5(3).

The key to the entire classification of i-exceptional algebras turns out to be the case of primitive algebras over big fields.

Theorem 6.5 (Big Primitive Exceptional). A primitive i-exceptional Jordan algebra over a big algebraically closed field Ω is a simple split Albert algebra Alb(Ω). A primitive Jordan algebra over a big algebraically closed field is either i-special or a split Albert algebra.

proof. Since every algebra is either i-special or i-exceptional, it suffices to prove the first assertion. We break the proof for a primitive i-exceptional algebra J into a few dainty steps.

(Step 1) The heart ♥ = isp(J) ≠ 0 exists by the Primitive Exceptional Heart Theorem 6.4, so there exists an f ∈ isp(X) of some finite degree N which does not vanish strictly on J, hence by f-Spectral Bound 3.6(2) there is a uniform bound 2N on f-spectra of elements of J.

(Step 2) Since the absorber spectrum is contained in the f-spectrum for all nonvanishing f ∈ isp(X) by Absorber Spectrum 4.6(2) [since J primitive implies nondegenerate by Primitive Proposition 5.5(1)], there is a bound 2N on the size of absorber spectra of elements of J.

(Step 3) Since the spectrum and absorber spectrum for a hearty element differ in size by at most 1 by the Hearty Spectrum Lemma 6.3, there is a bound 2N + 1 on ordinary spectra of hearty elements.

(Step 4) Once the heart has a global bound on spectra over a big field, by the Hearty Proposition 6.2(4) ♥ has capacity and J = ♥ is simple with capacity.

(Step 5) Once J has simple capacity, by the Classical Classification Theorem III.23.2 the only i-exceptional simple algebra it can be is an Albert algebra H3(O) of Type (A): it is not a division algebra of Type (D) by Division Evaporation 3.4 over a big field.
But over an algebraically closed field O must be split, hence J is a split Albert algebra.

It's not surprising that students of Jordan algebras can often be found humming to themselves the song "You've got to have heart, All you really need is heart . . . "
Third Phase: Logical Conclusions
In this final phase we go through in detail the argument sketched in II.8.6, bringing in ultrafilters to see that by purely logical considerations our classification of i-exceptional primitive algebras over big fields is sufficient to classify i-exceptional prime nondegenerate algebras over arbitrary scalars. A nondegenerate algebra imbeds in a subdirect product of primitive J's over big algebraically closed fields, and by our classification we know these are i-special or split Albert algebras. If the original algebra was i-exceptional, it imbeds in a direct product of Albert algebras Alb(Ωx), which is itself an Albert algebra Alb(Ω) over a ring of scalars Ω = ∏ Ωx which is very far from being a field. But if the original algebra was also prime, an ultraproduct argument tightens this up to say that the i-exceptional prime nondegenerate algebra actually imbeds as a form in an Albert algebra Alb(F) over a field F.
CHAPTER 7
Filters and Ultrafilters

We have actually done all the structural work for classifying simple and prime exceptional algebras. The rest of the book will show how the (seemingly restrictive) classification over big algebraically closed fields can be spruced up to apply in complete generality.
1. Filters in General

We begin with the concept of a filter. Ultimately we will only be concerned with ultrafilters on the set of indices for a direct product, but we will start from basics with the general concepts.

Definition 7.1 (Filter). A nonempty collection F ⊆ P(X) of subsets of a given set X is called a filter on X if it is
(F1) closed under intersections: Y1, Y2 ∈ F ⇒ Y1 ∩ Y2 ∈ F;
(F2) closed under enlargement: Z ⊇ Y ∈ F ⇒ Z ∈ F;
(F3) proper: ∅ ∉ F.
Here (F1) is equivalent to closure under finite intersections, (F1)′ Y1, …, Yn ∈ F ⇒ Y1 ∩ … ∩ Yn ∈ F; (F2) is equivalent to closure under unions, (F2)′ Y ∈ F, Z ⊆ X ⇒ Y ∪ Z ∈ F, and in particular implies (F2)″ X ∈ F. (F3) is equivalent, in view of (F2), to (F3)′ F ≠ P(X). Filters are "dual ideals" in the lattice P(X): just as an ideal I ◁ A has y1, y2 ∈ I ⇒ y1 + y2 ∈ I and y ∈ I, z ∈ A ⇒ yz ∈ I, F satisfies the corresponding properties (F1), (F2) with +, · replaced by ∩, ∪.

Example 7.2 (Principal). The principal filter F(X0) on X determined by a subset X0 ⊆ X consists of all subsets Y ⊇ X0 containing X0.

Proposition 7.3 (Restriction and Intersection). The restriction filter F|Y determined by a subset Y ∈ F consists of all subsets of F contained in Y: F|Y = {Z ∈ F | Z ⊆ Y} = F ∩ P(Y).
This coincides with the intersection filter F ∩ Y consisting of all intersections of elements of F with Y: F ∩ Y = {Z ∩ Y | Z ∈ F}.

proof. F|Y effortlessly inherits (F1), (F2), (F3) from F, hence is a filter on Y; it is equal to F ∩ Y since Z ∈ F|Y ⇒ Z = Z ∩ Y ∈ F ∩ Y, and Z ∩ Y ∈ F ∩ Y for Z ∈ F ⇒ Z ∩ Y ∈ F [by (F1) for Z, Y ∈ F] ⇒ Z ∩ Y ∈ F|Y.

Exercise 7.3. Let X0 ⊆ X be a subset, and F a filter on X. (1) Show F|X0 automatically satisfies (F1), (F2), (F3), but is nonempty iff X0 ∈ F. (2) Show F ∩ X0 automatically satisfies (F1), (F2), and is nonempty, and satisfies (F3) iff X ∖ X0 ∉ F.
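Since filters on a finite set are small enough to enumerate, the axioms (F1)-(F3) of Definition 7.1 and the principal filter of Example 7.2 can be checked mechanically. A minimal sketch in Python (the helper names are ours, not the book's):

```python
from itertools import combinations

def powerset(X):
    """All subsets of X, as frozensets."""
    X = list(X)
    for r in range(len(X) + 1):
        for c in combinations(X, r):
            yield frozenset(c)

def is_filter(F, X):
    """Check the filter axioms of Definition 7.1 for a collection F of subsets of X."""
    F = set(F)
    if not F or frozenset() in F:          # nonempty, and (F3) proper
        return False
    for Y1 in F:                           # (F1) closed under intersections
        for Y2 in F:
            if Y1 & Y2 not in F:
                return False
    for Y in F:                            # (F2) closed under enlargement
        for Z in powerset(X):
            if Y <= Z and Z not in F:
                return False
    return True

def principal_filter(X0, X):
    """F(X0) of Example 7.2: all subsets of X containing X0."""
    X0 = frozenset(X0)
    return {Z for Z in powerset(X) if X0 <= Z}
```

On X = {1, 2, 3}, principal_filter({1}, X) has the four members {1}, {1,2}, {1,3}, {1,2,3} and passes all three axioms; taking X0 = ∅ gives the improper collection P(X), which fails (F3).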
Proposition 7.4 (Enlargement). Every nonempty collection F′ of nonempty subsets of X which is directed downwards (Y1, Y2 ∈ F′ ⇒ Y1 ∩ Y2 ⊇ Y3 for some Y3 ∈ F′) generates on X an enlargement filter F̄′ = {Z | Z ⊇ Y for some Y ∈ F′}.

proof. (F2) is trivial [anything larger than something larger than Y ∈ F′ is even larger than that same Y], (F3) is easy [if ∅ wasn't in the collection before, you won't get it by taking enlargements: ∅ ∉ F′ ⇒ ∅ ∉ F̄′], while (F1) uses downward directedness [Zi ⊇ Yi ∈ F′ ⇒ Z1 ∩ Z2 ⊇ Y1 ∩ Y2 ⊇ Y3 ∈ F′].
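Proposition 7.4 is also easy to exercise on a small example. The following hedged Python sketch (our own helper names) builds the enlargement filter of a downward-directed collection and lets the filter axioms be confirmed directly:

```python
from itertools import combinations

def powerset(X):
    """All subsets of X, as frozensets."""
    X = list(X)
    for r in range(len(X) + 1):
        for c in combinations(X, r):
            yield frozenset(c)

def enlargement_filter(F0, X):
    """All Z contained in X which contain some Y of the downward-directed
    collection F0 (Proposition 7.4)."""
    F0 = [frozenset(Y) for Y in F0]
    return {Z for Z in powerset(X) if any(Y <= Z for Y in F0)}

# A downward-directed collection on X = {1,2,3,4}: any two members
# intersect in a set containing the member {1}.
X = frozenset({1, 2, 3, 4})
F0 = [{1, 2}, {1, 3}, {1}]
F = enlargement_filter(F0, X)
```

Here the enlargement filter coincides with the principal filter F({1}): every member contains the point 1, there are 2³ = 8 members, and closure under intersections holds because the intersection of two supersets of {1} is again a superset of {1} ∈ F0.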
2. Filters from Primes

In algebra, the most important way to construct downwardly directed collections F′ (hence filters F̄′) is through prime subalgebras of direct products. Recall from the Direct Sum Definition II.2.2 that the direct product ∏_{x∈X} Ax of algebraic systems Ax consists of all "X-tuples" a = ∏x ax of elements ax ∈ Ax, or more usefully all functions a on X whose value a(x) := ax at any x lies in Ax, under the pointwise operations on functions.

Definition 7.5 (Support). (1) If A = ∏_{x∈X} Ax is a direct product of linear algebraic systems (additive abelian groups with additional structure), for a ∈ A let the zero set and the support set of a be Zero(a) = {x ∈ X | a(x) = 0}, Supp(a) = {x ∈ X | a(x) ≠ 0}, so Zero(a) = X ⟺ Supp(a) = ∅ ⟺ a = 0. (2) If I_A(a) denotes the ideal in A generated by a we have b ∈ I_A(a) ⇒ Zero(b) ⊇ Zero(a), Supp(b) ⊆ Supp(a)
since, if πx(a) = a(x) denotes the projection of A onto the x-th coordinate Ax: x ∈ Zero(a) ⟺ πx(a) = 0 ⟺ a ∈ Ker(πx) ⇒ b ∈ I_A(a) ⊆ Ker(πx) ⇒ πx(b) = 0 ⇒ x ∈ Zero(b).

Example 7.6 (Prime). An algebraic system is prime if the product of two nonzero ideals is again nonzero. For linear algebras the product is taken to be the usual bilinear product prod(I1, I2) := I1I2, but for Jordan algebras the correct product is the quadratic product prod(I1, I2) := U_{I1}(I2). In either case, two nonzero ideals have nonzero intersection I1 ∩ I2 ⊇ prod(I1, I2) ≠ 0. If A0 ≠ 0 is a prime nonzero subsystem of the direct product A = ∏_{x∈X} Ax, then Supp(A0) = {Supp(a0) | 0 ≠ a0 ∈ A0} is a nonempty collection [since A0 ≠ 0] of nonempty subsets [by Support 7.5(1), since a0 ≠ 0] which is directed downwards: if a1, a2 ≠ 0 in A0 then by primeness I_A(a1) ∩ I_A(a2) ≠ 0 contains some a3 ≠ 0, hence Supp(a3) ⊆ Supp(a1) ∩ Supp(a2) by Support 7.5(2). Therefore by Enlargement 7.4 we always have the support filter of A0, the enlargement filter S̄upp(A0) = {Z | Z ⊇ Supp(a0) for some 0 ≠ a0 ∈ A0}.

Recall that we are trying to analyze prime Jordan algebras A0. We will eventually imbed these in such a direct product A = ∏_{x∈X} Ax (which is very far from being prime, since any two of its factors Ax, Ay are orthogonal), and will use the support sets of A0 to generate a filter as above, and then return to primeness via an ultraproduct. We now turn to these mysterious beasts.
3. Ultimate Filters

An ultrafilter is just a filter which is as big as it can be (and still remain a filter). We will use filters to chop down a direct product; to get the result as tight as possible, we must make the filter as large as possible. Ultrafilters are so large that their power is almost magical.

Definition 7.7 (Ultrafilter). An ultrafilter is a maximal filter.

Example 7.8 (Principal Ultrafilter). Any element x0 ∈ X determines a principal ultrafilter F(x0) consisting of all subsets containing the point x0.

It is useful to think of an ultrafilter as the collection F(x∞) of all neighborhoods of an ideal point x∞ of some logical "closure" or "completion" X̄ of X. The case of a principal ultrafilter is precisely the case where this ideal point actually exists inside X: x∞ = x0. We will see that these are the uninteresting ultrafilters.
Just as every proper ideal in a unital algebra can be imbedded in a maximal ideal,

Theorem 7.9 (Imbedding). Every filter is contained in an ultrafilter.

proof. We apply Zorn's Lemma to the collection of filters containing a given filter F0. Notice that the union F of a chain (or even upwardly directed set) of filters {Fi} is again a filter: (F1) holds by directedness [if Y1, Y2 ∈ F then Y1 ∈ Fi, Y2 ∈ Fj for some i, j ⇒ Y1, Y2 ∈ Fk for some k by directedness, so Y1 ∩ Y2 ∈ Fk (by (F1) for Fk), hence Y1 ∩ Y2 ∈ F]; (F2) is obvious; and (F3) holds because properness can be phrased in terms of avoiding a specific element ∅. Thus the hypotheses of Zorn's Lemma are met, and it guarantees the existence of a maximal filter, i.e. an ultrafilter, containing F0.

Outside of the rather trivial principal ultrafilters, all ultrafilters arise by imbedding via Zorn's Lemma, and so have no concrete description at all!! Indeed, one can show that the requirement that the countable set of natural numbers ℕ have a non-principal ultrafilter on it is a set-theoretic assumption stronger than the Countable Axiom of Choice!

Prime ideals in a commutative associative ring are characterized by the property y1y2 ∈ I ⇒ y1 ∈ I or y2 ∈ I. Ultrafilters have an analogous characterization.

Theorem 7.10 (Ultra Characterization). The following conditions on a filter F are equivalent: (UF1) F is an ultrafilter; (UF2) Y1 ∪ Y2 ∈ F ⇒ Y1 ∈ F or Y2 ∈ F; (UF3) Y1 ∪ … ∪ Yn ∈ F ⇒ some Yi ∈ F; (UF4) for all Y ⊆ X, either Y ∈ F or Y′ ∈ F (where Y′ = X ∖ Y denotes the complement of Y in the ambient set X).

proof. (UF1) ⇒ (UF2): if Y1 ∪ Y2 ∈ F but Y1, Y2 ∉ F we claim that G = {Z | Z ⊇ W ∩ Y1 for some W ∈ F} > F is a strictly larger filter (which will contradict the maximality (UF1)). Certainly G ⊇ F: any W ∈ F contains W ∩ Y1 and so falls in G. The containment is strict since Y1 ∉ F by hypothesis, but Y1 ∈ G because Y1 = (Y1 ∪ Y2) ∩ Y1 = W ∩ Y1 for W = Y1 ∪ Y2 ∈ F by hypothesis. Thus G is strictly larger.
Now we check G is a filter: (F2) is trivial, (F1) holds since Zi ⊇ Wi ∩ Y1 for Wi ∈ F ⇒ Z1 ∩ Z2 ⊇ W1 ∩ W2 ∩ Y1 for W1 ∩ W2 ∈ F [by (F1) for F], and (F3) must hold since otherwise for some W ∈ F we would have W ∩ Y1 = ∅ ⇒ W ⊆ Y1′ ⇒ Y1′ ∈ F [by (F2) for F]
⇒ Y1′ ∩ Y2 = Y1′ ∩ (Y1 ∪ Y2) ∈ F [by (F1) for F, since Y1 ∪ Y2 ∈ F by hypothesis] ⇒ Y2 ∈ F [by (F2) for F], contrary to hypothesis.
(UF2) ⟺ (UF3): ⇐ is clear and ⇒ follows by an easy induction.
(UF2) ⇒ (UF4): apply (UF2) with Y1 = Y, Y2 = Y′, since Y ∪ Y′ = X is always in F by (F2″).
(UF4) ⇒ (UF1): if F were properly contained in a filter G there would exist Z ∈ G with Z ∉ F, hence Z′ ∈ F ⊆ G by (UF4), and so ∅ = Z ∩ Z′ ∈ G by (F1) for G, contradicting (F3) for G.

Exercise 7.10. Show that a collection of subsets of X is an ultrafilter on X iff it satisfies (F1), (UF4), (F3) (i.e. (F2) is superseded by (UF4)).
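On a finite index set the Ultra Characterization can be verified by brute force. The Python sketch below (our own naming) tests (F1), (F3) and (UF4), which by Exercise 7.10 already force a filter, and then enumerates every ultrafilter on a 3-element set, confirming that each one is principal, as Problem 7.2* below predicts for ultrafilters containing a finite set:

```python
from itertools import combinations

def powerset(X):
    """All subsets of X, as frozensets."""
    X = list(X)
    for r in range(len(X) + 1):
        for c in combinations(X, r):
            yield frozenset(c)

def is_ultrafilter(F, X):
    """(F1), (F3) and the complementation property (UF4); cf. Exercise 7.10."""
    F = set(F)
    if not F or frozenset() in F:                       # nonempty and (F3)
        return False
    if any(A & B not in F for A in F for B in F):       # (F1)
        return False
    X = frozenset(X)
    # (UF4): for every Y, exactly one of Y and its complement lies in F
    return all((Y in F) != (X - Y in F) for Y in powerset(X))

def all_ultrafilters(X):
    """Brute-force search over all subcollections of P(X)."""
    P = list(powerset(X))
    found = []
    for r in range(1, len(P) + 1):
        for F in combinations(P, r):
            if is_ultrafilter(F, X):
                found.append(frozenset(F))
    return found
```

On X = {1, 2, 3} exactly three ultrafilters turn up, the principal ones F(1), F(2), F(3), each with four members; non-principal ultrafilters live only on infinite sets, and there only via Zorn's Lemma.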
Example 7.11 (Ultra Restriction). The restriction F|Y of an ultrafilter F on a set X to a set Y ∈ F is an ultrafilter on Y.

proof. F|Y is a filter on Y by 7.3; we verify F|Y has the complementation property (UF4) of an ultrafilter: Z ⊆ Y, Z ∉ F|Y ⇒ Z ∉ F ⇒ Z′ ∈ F [by (UF4) for F] ⇒ Y ∖ Z = Y ∩ Z′ ∈ F ⇒ Y ∖ Z ∈ F|Y.
4. Problems for Chapter 7

Problem 1*. (1) Show that the collection of all subsets of ℕ with finite complement, or of ℝ⁺ with bounded complement, forms a filter F′. Show that F′ can be described as the enlargement filter (as in Prop. 7.4) of the collection of intervals (N, ∞). If F is an ultrafilter containing F′, show F cannot be principal. (2) Let X be the set ℤ of all integers or the set ℝ of all real numbers. Show similarly that the collection of all subsets of X with finite complement forms a filter F′; if F is an ultrafilter which contains F′, show F is not principal. Show that F′ is contained in the filter F1 of all subsets with bounded complement.

Problem 2*. Show that if an ultrafilter contains a finite subset Y, it must be a principal ultrafilter.

Problem 3*. Let F be the set of neighborhoods of a point x0 of a topological space X (the subsets Y ⊇ U ∋ x0 containing an open set U around x0); show that F is a filter.
CHAPTER 8
Ultraproducts

The chief use made of ultrafilters in algebra is in the construction of ultraproducts, which are quotients of direct products. We will show that every prime subalgebra of a direct product imbeds in an ultraproduct. Ultraproducts are logical "models" of the original algebra, retaining all its elementary algebraic properties.
1. Ultraproducts

We begin with the basic facts about ultraproducts in general.

Definition 8.1 (Ultraproduct). (1) Let A = ∏ Ax be a direct product of algebraic systems indexed by a set X, and let F be a filter on X. The congruence ≡F determined by F is: a ≡F b iff a agrees with b on some Y ∈ F (a(x) = b(x) for all x ∈ Y); equivalently (in view of the enlargement property (F2) of filters), their agreement set belongs to F: Agree(a, b) := {x ∈ X | a(x) = b(x)} ∈ F. The filtered product A/F = (∏ Ax)/F is the quotient A/≡F, under the induced operations. Intuitively, this consists of "germs" of functions (as in the theory of varieties or manifolds), where we identify two functions if they agree "locally" on some "neighborhood" Y. (2) When the Ax are linear algebraic systems with underlying additive abelian groups, then Agree(a, b) = Zero(a − b), and the congruence can be replaced by the filter ideal I(F) = {a ∈ A | a ≡F 0} = {a ∈ A | Zero(a) ∈ F} (where a ≡F b iff a − b ≡F 0 iff a − b ∈ I(F)), so the filtered product becomes the quotient system (∏ Ax)/F = (∏ Ax)/I(F).

An ultraproduct is just a product filtered by an ultrafilter. In the ultraproduct we are identifying two functions if they agree "on a neighborhood of the ideal point x∞", and the resulting quotient can be thought of as an "ideal factor Ax∞". Note that if F is the principal ultrafilter F({x0}) then the ultraproduct (∏ Ax)/F = Ax0
is precisely the x0-th factor. From this point of view, the principal ultrafilters are worthless, producing no new ultraproducts.

Theorem 8.2 (Restriction). Let A = ∏_{x∈X} Ax be a direct product of linear algebraic systems. (1) If F ⊆ G are filters on X, then I(F) ⊆ I(G) induces a canonical projection A/F → A/G. (2) If X0 ∈ F then for the direct product A0 = ∏_{x∈X0} Ax and the restriction filter F0 = F|X0 on X0, we have a natural isomorphism A/F ≅ A0/F0, so that for any X0 ∈ F we can discard all factors Ax for x ∉ X0. (3) If A0 is a prime subalgebra of A, then A0 remains imbedded in the filtered product A/F for any filter F ⊇ F(A0) containing the support filter of A0. (4) In particular, any prime subalgebra A0 of A imbeds in an ultraproduct A/F for F ⊇ F(A0).

proof. (1) is clear from the Ultraproduct Definition 8.1(2). (2) We have a canonical inclusion in: A0 ↪ A by in(a0)(x) = a0(x) if x ∈ X0, in(a0)(x) = 0 if x ∉ X0. This induces an epimorphism f = π ∘ in: A0 → A/F (π the canonical projection A → A/F), since if a ∈ A then its restriction a0 ∈ A0 to X0 has in(a0) = a on X0 ∈ F: in(a0) ≡F a, f(a0) = π(in(a0)) = π(a). The kernel of f consists of all a0 with in(a0) = 0 on some Y ∈ F, i.e. a0 = 0 on Y ∩ X0 ∈ F ∩ X0 = F|X0 [by Restriction Filter 7.3] = F0, i.e. a0 ∈ I(F0), so f induces an isomorphism A0/F0 = A0/I(F0) → A/F. (3) No nonzero a0 is killed by the imbedding, because if 0 ≠ a0 ∈ A0 has a0 ≡F 0 then Zero(a0) ∈ F by Definition 8.1(2) of the congruence, yet Supp(a0) ∈ F0 ⊆ F by hypothesis on F, so ∅ = Zero(a0) ∩ Supp(a0) ∈ F by (F1) for F, which would contradict (F3). (4) follows since every filter F0 imbeds in an ultrafilter F, and we apply (3).

Though we will not take the long detour necessary to prove it, we want to at least state the result which guarantees ultraproducts are tight models.

Theorem 8.3 (Basic Ultraproduct Fact). Any elementary property true of all factors Ax is inherited by any ultraproduct (∏ Ax)/F.
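The congruence ≡F of Definition 8.1 is concrete enough to compute with when the index set is finite and the filter is principal. A small Python illustration (the names are ours): with F = F({x0}), two tuples are identified exactly when they agree at x0, so the filtered product collapses to the single factor Ax0, as remarked above.

```python
from itertools import combinations

def agree_set(a, b):
    """Agree(a,b) = {x | a(x) = b(x)} for tuples indexed by 0..n-1."""
    return frozenset(x for x in range(len(a)) if a[x] == b[x])

def principal_uf(x0, n):
    """The principal ultrafilter F({x0}) on {0,...,n-1}: all subsets containing x0."""
    rest = [i for i in range(n) if i != x0]
    F = set()
    for r in range(len(rest) + 1):
        for c in combinations(rest, r):
            F.add(frozenset(c) | {x0})
    return F

def congruent(a, b, F):
    """a is congruent to b mod F iff Agree(a,b) belongs to the filter F."""
    return agree_set(a, b) in F
```

With F = principal_uf(1, 3): the tuples a = (5, 7, 0) and b = (9, 7, 8) agree on {1} ∈ F, hence are identified, while c = (5, 8, 0) agrees with a only on {0, 2} ∉ F and stays distinct; the class of a tuple is determined by its value at index 1 alone.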
Elementary is here a technical term from mathematical logic. Roughly, it means a property describable (using universal quantifiers) in terms of a finite number of elements of the system. For example, algebraic closure of a field Ω requires that each nonconstant polynomial have a
root in the field, and this is elementary [for each fixed n ≥ 1 and fixed α0, …, αn in Ω there exists α ∈ Ω with Σ_{i=0}^{n} αi α^i = 0]. However, simplicity of an algebra makes a requirement on sets of elements (ideals), or on existence of a finite number n of elements without any bound on n [for each fixed a ≠ 0 and b in A there exists an n and a set c1, …, cn, d1, …, dn of 2n elements with b = Σ_{i=1}^{n} ci a di]. The trouble with such a condition is that as x ranges over X the numbers n(x) may tend to infinity, so that there is no finite set of elements ci(x), di(x) in the direct product with b(x) = Σ_{i=1}^{n} ci(x) a(x) di(x) for all x.
2. Examples

Rather than give a precise definition of "elementary", we will go through some examples in detail, including the few that we need.

Example 8.4 (Identities). The property of having a multiplicative unit or of satisfying some identical relation (such as the commutative law, anti-commutative law, associative law, Jacobi identity, Jordan identity, left or right alternative law, etc.) is inherited by direct products and homomorphic images, so certainly is inherited by the ultraproduct (∏ Ax)/F.

Example 8.5 (Division Algebras). Any ultraproduct of division algebras is a division algebra. In particular, any ultraproduct of fields is a field. Moreover, any ultraproduct of algebraically closed fields is again an algebraically closed field.

proof. The division algebra condition is that every element a ≠ 0 has a multiplicative inverse b (ab = ba = 1 in associative or alternative algebras, ab = 1 and a²b = a in Jordan algebras, or Ua b = a, Ua b² = 1 in quadratic Jordan algebras). Here the direct product ∏ Ax most definitely does NOT inherit this condition: there are lots of nonzero functions having many zero values, whereas an invertible element of the direct product must have all its values invertible (hence nonzero). However, if a function a ∈ A is nonzero in the ultraproduct A/I(F) then most of its values are nonzero: a ∉ I(F) ⇒ Zero(a) ∉ F by Ultraproduct Definition 8.1(2), hence the complement Y = Supp(a) must be in F by ultrafilter property (UF4). Since a is nonzero on Y, for each y ∈ Y the element a(y) ≠ 0 has an inverse b(y) in the division algebra Ay, and we can define an element b of the direct product by b(x) = b(y) if x = y ∈ Y and b(x) = 0 if x ∉ Y; then the requisite relations (ab = 1 or whatever) hold at each y ∈ Y, therefore hold globally in the quotient A/F (ab ≡F 1 or whatever), and b is the desired inverse of a.
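The inverse constructed in this proof can be made concrete with the field ℤ/7 standing in for each division algebra Ax (our choice, purely for illustration): invert each nonzero coordinate, put 0 elsewhere, and the product ab equals 1 precisely on Supp(a), hence in any ultraproduct whose ultrafilter contains Supp(a).

```python
P = 7  # each factor is the field Z/7, a stand-in division algebra

def quasi_inverse(a):
    """The element b of the proof of Example 8.5:
    b(x) = a(x)^(-1) on Supp(a), and b(x) = 0 elsewhere."""
    # pow(v, P-2, P) is v^(-1) mod P by Fermat's little theorem
    return tuple(pow(v % P, P - 2, P) if v % P else 0 for v in a)

def support(a):
    """Supp(a) = {x | a(x) != 0} in Z/7."""
    return frozenset(x for x, v in enumerate(a) if v % P)
```

For a = (0, 3, 5) we get b = (0, 5, 3) (since 3·5 = 15 ≡ 1 mod 7), and the coordinatewise product is (0, 1, 1): it equals 1 exactly on Supp(a) = {1, 2}, so a·b ≡F 1 for any ultrafilter F containing {1, 2}.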
Now consider algebraic closure. A field Ω is algebraically closed if every monic polynomial of degree ≥ 1 has a root in Ω (this guarantees all non-constant polynomials split entirely into linear factors over Ω). If
Ω = ∏_{x∈X} Ωx /F is an ultraproduct of algebraically closed fields Ωx, we know Ω is itself a field; to show it is algebraically closed, we must show that any monic polynomial µ(t) = α0 + α1 t + α2 t² + … + t^n ∈ Ω[t] of degree n ≥ 1 over Ω has a root α ∈ Ω. This is easy to do, since it is already true at the level of the direct product (the filter F is superfluous). Choose pre-images ai ∈ ∏ Ωx of the αi ∈ Ω, and consider the polynomial p(t) = a0 + a1 t + a2 t² + … + t^n ∈ (∏ Ωx)[t] over the direct product. For each fixed x the x-coordinate or value p(t)(x) = a0(x) + a1(x)t + a2(x)t² + … + t^n ∈ Ωx[t] is a monic polynomial of degree n ≥ 1, so by the hypothesis that Ωx is algebraically closed it has a root b(x) ∈ Ωx. These b(x) define an element b of the direct product which is a root of p(t): p(b) = a0 + a1 b + a2 b² + … + b^n = 0 in
∏ Ωx, since p(b)(x) = a0(x) + a1(x)b(x) + a2(x)b(x)² + … + b(x)^n = 0 in Ωx for each x. Therefore the image of b in Ω is a root of µ(t).¹

Example 8.6 (Quadratic Forms). Any ultraproduct of (resp. nondegenerate) quadratic forms over fields is again a (resp. nondegenerate) quadratic form over a field.

proof. If Qx are quadratic forms on vector spaces Vx over fields Ωx, set V := ∏ Vx, Ω := ∏ Ωx, Q := ∏ Qx; then it is easy to check that Q is a quadratic form on V over the ring Ω of scalars. If F is an ultrafilter on X set V′ := V/F, Ω′ := Ω/F; by the previous Example 8.5, Ω′ is a field, and it is routine to check that V′ remains a vector space over Ω′. We claim Q induces a quadratic form Q′ = Q/F on V′ over Ω′ via Q′(a′) := Q(a)′ (where ′ denotes coset mod I(F) as in the Ultraproduct Definition 8.1). It suffices if this is a well-defined map, since it automatically inherits the identities which characterize a quadratic form; but if a ≡F b in V then for some Y ∈ F and all y ∈ Y we have a(y) = b(y) in Vy, Q(a)(y) = Qy(a(y)) = Qy(b(y)) = Q(b)(y) in Ωy, hence Q(a) ≡F Q(b) in Ω, and Q′(a′) = Q′(b′) in Ω′ = Ω/F. It remains only to check that Q′ is nondegenerate as a quadratic form if each Qx is. If z′ = zF ≠ 0′ in V′ then Zero(z) ∉ F ⇒ Y = Supp(z) ∈ F by (UF4) as always; since each Qx is nondegenerate, once zy ≠ 0 there exists ay with Qy(zy, ay) ≠ 0 (here we are using the fact that ½ ∈ Ω; for general Ω the argument is a bit longer). The element a ∈ V defined by a(x) = ay if x = y ∈ Y, and
¹Note that at each x there are n choices for b(x) in Ωx, so there are usually infinitely many roots b in the direct product. But since the ultrafilter creates a field, magically it must reduce this plethora of roots to exactly n, counting multiplicities!
a(x) = 0 if x ∉ Y, has Q(z, a)(y) ≠ 0 for all y ∈ Y ∈ F, so Q′(z′, a′) = Q′(zF, aF) = Q(z, a)F ≠ 0′ in Ω′ = Ω/F, and Q′ is nondegenerate.

Exercise 8.6*. (1) Argue a bit longer in the nondegeneracy argument, to show that over arbitrary fields (allowing characteristic 2) an ultraproduct Q′ = Q/F on V′ over Ω′ of nondegenerate quadratic forms remains nondegenerate. As above, show that if z′ ≠ 0′ in V′ then Supp(z) = Y = Y1 ∪ Y2 ∈ F for Y1 := {y ∈ Y | there exists a(y) with Qy(z(y), a(y)) ≠ 0}, Y2 := {y ∈ Y | Qy(z(y)) ≠ 0}, and use the properties of ultrafilters to show either Q′(z′) ≠ 0′ or Q′(z′, a′) ≠ 0′ for some a′, hence z′ ∉ Rad(Q′). (2) Show more generally that Rad(Q′) = (∏ Rad(Qx))/F.
A similar result holds for cubic factors Jord(N, c) constructed by the Freudenthal-Springer-Tits Construction III.4.2,4,8,9 from Jordan cubic forms N with adjoints # and basepoints c.

Example 8.7 (Split Albert Algebras). Any ultraproduct of split Albert algebras over (resp. algebraically closed) fields is a split Albert algebra over a (resp. algebraically closed) field.

proof. Recall the Split Albert Algebra Alb(Φ) over any scalar ring Φ of III.4.6. If we set Ax := Alb(Ωx), A := ∏ Ax, Ω := ∏ Ωx, then A := ∏ Alb(Ωx) = Alb(∏ Ωx) = Alb(Ω), and for any ultrafilter F on X we have A/F = Alb(Ω/F) [the ideal I_A(F) in the direct product A as in Ultraproduct Definition 8.1(2) consists of all hermitian 3×3 matrices with entries in the corresponding ideal I(F) of the direct product Ω, which by abuse of language we may write as I_A(F) = Alb(I(F)), and we have Alb(Ω)/Alb(I(F)) = Alb(Ω/I(F))]. By the Division Algebra Example 8.5, the latter is a field of the required type. We remark that the same argument will work for any functor from rings of scalars to any category, as long as the functor commutes with direct products and quotients.

Example 8.8 (Quadratic Factors). Any ultraproduct of (resp. nondegenerate) Jordan quadratic factors over fields is a (resp. nondegenerate) quadratic factor over a field.

proof. If Ax = Jord(Qx, cx) is a Jordan algebra determined by a quadratic form Qx with basepoint cx over a field Ωx, the Jordan structure is determined by having cx as unit and Ua b = Qx(a, b̄)a − Qx(a)b̄ for b̄ = Qx(b, cx)cx − b. Set A = ∏ Ax, Ω = ∏ Ωx, Q = ∏ Qx, c = ∏ cx; then we noted in the Quadratic Form Example 8.6 that Q is a quadratic form on A over the ring Ω of scalars, and it is easy to check that c is a basepoint which determines the pointwise Jordan structure of A: A = Jord(Q, c).
If F is an ultrafilter on X, by the Quadratic Form Example Q′ := Q/F is a (resp. nondegenerate) quadratic form on A′ := A/F over the field Ω′ := Ω/F. It is easy to check that c′ := cF is a basepoint for Q′ determining the quotient Jordan structure of A′: A′ = Jord(Q′, c′). Thus the ultraproduct A/F is just the quadratic factor Jord(Q′, c′).
We noted in the case of simplicity that not all algebraic properties carry over to ultraproducts: properties of the form "for each element a there is an n = n(a) so that … holds" usually fail in the direct product and the ultraproduct, because in an infinitely long string (a1, a2, …) there may be no upper bound to the n(ai). Another easy example is provided by a countably infinite ultrapower A^X/F = (∏_{x∈X} A)/F (ultraproduct based on the direct power, where all the factors are the same, instead of the direct product of different factors).

Example 8.9 (Nil Algebras). A countable ultrapower A^ℕ/F of a nil algebra A is a nil algebra if and only if either (1) A is nil of bounded index, or (2) the ultrafilter F is principal.

proof. If the algebra is nil of bounded index as in (1), it satisfies a polynomial identity a^n = 0 for some fixed n independent of a, hence the direct product and its quotient ultrapower do too. If the ultrafilter is principal, F = F(x0) as in (2), then the ultraproduct is just the x0-th factor A = Ax0, which is certainly nil. All of this works for any ultrapower A^X/F. The harder part is the converse; here we require X = ℕ to be countably infinite. Assume (1) fails, but A^ℕ/F is nil; we will show the ultrafilter is principal, equivalently (cf. Problem 7.2) that F contains some finite set Y. Since A does not have bounded index, there are elements ak ∈ A with ak^k ≠ 0. Let a = (a1, a2, …) ∈ A^ℕ result from stringing these ak's together. If the image π(a) of a is nilpotent in A^ℕ/F, π(a)^n = 0 for some fixed n, then back in A^ℕ we have a(k)^n = 0 on some subset Y ∈ F, i.e. ak^n = 0 and therefore n > k for all k ∈ Y. But then Y ⊆ {1, 2, …, n − 1} is the desired finite set in the filter F.
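The element a = (a1, a2, …) with ak^k ≠ 0 in this proof can be exhibited with nilpotent shift matrices: the (k+1)×(k+1) shift Nk satisfies Nk^k ≠ 0 but Nk^(k+1) = 0, so each coordinate is nil while no single exponent n kills the whole string. A hedged Python sketch (our own matrix helpers, not anything from the book):

```python
def shift(k):
    """(k+1)x(k+1) shift matrix: ones on the superdiagonal,
    nilpotent of index exactly k+1."""
    n = k + 1
    return [[1 if j == i + 1 else 0 for j in range(n)] for i in range(n)]

def matmul(A, B):
    """Product of two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][t] * B[t][j] for t in range(n)) for j in range(n)]
            for i in range(n)]

def matpow(A, e):
    """A^e by repeated multiplication, starting from the identity."""
    n = len(A)
    R = [[int(i == j) for j in range(n)] for i in range(n)]
    for _ in range(e):
        R = matmul(R, A)
    return R

def is_zero(A):
    return all(v == 0 for row in A for v in row)
```

For each k, shift(k) to the k-th power is nonzero while its (k+1)-st power vanishes: every coordinate of a = (shift(1), shift(2), …) is nilpotent, but a^n ≠ 0 in the direct power for every fixed n, exactly the failure of elementariness described above.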
3. Problems for Chapter 8

Question 1. How is the dimension of the ultraproduct related to the dimensions of the individual factors Ax? Is an ultraproduct of composition algebras again a composition algebra? What about split composition algebras? Quaternion algebras? Octonion algebras? Split octonion algebras?
CHAPTER 9
The Final Argument

We are finally ready to analyze the structure of prime algebras. The game plan is (1) to go from prime nondegenerate algebras to semiprimitive algebras over big fields by an imbedding process, (2) pass via subdirect products from semiprimitive to primitive algebras over big fields, where the true classification takes place, then (3) form an ultraproduct to get back down to a "model" of the original prime algebra. Note that we go directly from primitive to prime without passing simple, so that the structure of simple algebras follows from the more general structure of prime algebras. Even if we started with a simple algebra, the simplicity would be destroyed in the passage from nondegenerate to semiprimitive (even in the associative theory there are simple radical rings). We have completed the first two steps of our game plan in the Semiprimitive Imbedding Theorem 5.8. What remains is the ultraproduct step.
1. Dichotomy

Because ultraproducts provide tight algebraic models of the factors, if the factors all come in only a finite number of algebraic flavors, then the ultraproduct too must have exactly one of those flavors.

Theorem 9.1 (Finite Dichotomy). If each factor Ax in an ultraproduct belongs to one of a finite number of types {T1, …, Tn}, then the ultraproduct is isomorphic to a homogeneous ultraproduct of a single type Ti for some i = 1, 2, …, n,

    A = (∏_{x∈X} Ax)/F ≅ Ai := (∏_{x∈Xi} Ax)/(F|Xi)

where Xi = {x ∈ X | Ax has type Ti}.

proof. By hypothesis X = X1 ∪ … ∪ Xn is a finite union (for any x the factor Ax has some type Ti, so x ∈ Xi), so by property (UF3) of Ultra Characterization 7.10 some Xi ∈ F, and then by Restriction 8.2(2) (∏_{x∈X} Ax)/F ≅ (∏_{x∈Xi} Ax)/(F|Xi), where by Ultra Restriction
7.11 F|Xi is an ultrafilter on the homogeneous direct product ∏_{x∈Xi} Ax.
2. The Prime Dichotomy

There are only two options for a primitive Jordan algebra over a big field: satisfy all s-identities and be an i-special algebra, or be an Albert algebra. Applying this dichotomy to a prime nondegenerate Jordan algebra yields the main theorem of Part IV.

Theorem 9.2 (Prime Dichotomy Theorem). (1) Every prime nondegenerate Jordan Φ-algebra with ½ ∈ Φ is either i-special or a form of a split Albert algebra. (2) Every simple Jordan algebra of characteristic ≠ 2 is either i-special or a 27-dimensional Albert algebra Jord(N, c) over its center.

proof. (1) We have already noted that the first two steps in this argument, going from prime to semiprimitive to primitive, have been taken: by Semiprimitive Imbedding 5.8 a prime nondegenerate J imbeds in a direct product J̃ = ∏ Jα of primitive algebras Jα over one big algebraically closed field Ω. Further, we clearly understand the resulting factors: by the Big Primitive Exceptional Theorem 6.5 these factors are either i-special or split Albert algebras Alb(Ω) over their centers. So far we have an enlargement of J with a precise structure. Now we must recapture the original J without losing our grip on the structure. To tighten, we need an ultrafilter, and this is where for the first time primeness is truly essential. By the Prime Example 7.6 any prime subalgebra of a direct product has a support filter F(J) generated by the supports Supp(a) = {α | a(α) ≠ 0} of its nonzero elements, and by the Restriction Theorem 8.2(4) J remains imbedded in some ultraproduct J̃F determined by an ultrafilter F ⊇ F(J). By Finite Dichotomy 9.1 we can replace this ultraproduct by a homogeneous ultraproduct where all the factors are i-special or all the factors are split Albert algebras. In the first case the algebra J inherits i-speciality as a subalgebra of a quotient J̃F of a direct product J̃ = ∏ Jα of i-special factors.
In the second case the algebra J is a subalgebra of an ultraproduct of split Albert factors, which by Split Albert Example 8.7 is itself an Albert algebra J̄ = Alb(Ω̄) over a big algebraically closed field Ω̄. We claim that in fact J′ := Ω̄J is all of the 27-dimensional algebra J̄, so J is indeed a form of a split Albert algebra as claimed in the theorem. Otherwise J′ has dimension < 27 over Ω̄, as does the semisimple algebra J″ = J′/Rad(J′) and all of its simple summands, so by the
finite-dimensional theory these summands must be SPECIAL, and J″ is special too. But J ∩ Rad(J′) = 0 by the Radical Avoidance Lemma
1.14 [J is nondegenerate BY HYPOTHESIS, and the finite-dimensional J′ certainly has the d.c.c. on all inner ideals (indeed, on all subspaces!)], so J remains imbedded in J″ and is special too, contrary to the assumed i-exceptionality of J.

(2) This establishes the theorem for prime algebras. If J is simple i-exceptional we will show it is already 27-dimensional over its centroid, which is a field Φ, and therefore again by the finite-dimensional theory it will follow that J = Jord(N, c). Now J is also prime and nondegenerate, so applying the prime case gives Ω̄J = Alb(Ω̄), with a natural epimorphism J_Ω̄ = Ω̄ ⊗_Φ J → Ω̄J = Alb(Ω̄). IN CHARACTERISTIC ≠ 2 the scalar extension J_Ω̄ of the central-simple LINEAR Jordan algebra J remains simple over Ω̄ by Strict Simplicity III.1.13, so this epimorphism must be an isomorphism, and dim_Φ(J) = dim_Ω̄(J_Ω̄) = dim_Ω̄(Alb(Ω̄)) = 27. This completes our proof of this powerful theorem.

As an immediate corollary we obtain the main result of this Part IV, the nonexistence of i-exceptional Jordan algebras which might provide a home for a non-Copenhagen quantum mechanics.

Theorem 9.3 (Zel'manov's Exceptional Theorem). (1) The only i-exceptional prime nondegenerate Jordan Φ-algebras with ½ ∈ Φ are forms of split Albert algebras. (2) The only i-exceptional simple nondegenerate Jordan algebras of characteristic ≠ 2 are 27-dimensional Albert algebras over their centers.

To show that a prime nondegenerate Jordan algebra is either special or Albert, one has to work hard to show that the i-special algebras are in fact special. At the present time the only way to do this is to actually classify all of them and notice that they all turn out to be special; perhaps someday there will be a direct proof that nondegenerate i-special algebras are in fact special. We remark that a highly nontrivial proof (which we have chosen not to include) shows that simple Jordan algebras are automatically nondegenerate.
This is not true for prime algebras: there exist prime degenerate Jordan algebras, known as Pchelintsev Monsters.
3. Problems for Chapter 9

Problem 1. From Dichotomy 9.1 we know that an ultraproduct of fields Φ_x of characteristics 0, p_1, …, p_n must be a field of characteristic 0 or p = p_i for some i = 1, …, n. Explain what happens in an ultraproduct of fields Φ_x of infinitely many distinct characteristics: by Division Example 8.5 this must be a field of some characteristic, but where does the characteristic come from? Consider a very specific case, ∏_p Z_p/F for an ultrafilter F on the set of prime numbers.

Problem 2. The Albert factors J_α = Alb(Ω_α) over a big algebraically closed field Ω
in the proof of 9.2 must actually themselves be central-simple: Ω_α = Ω, Alb(Ω_α) = Alb(Ω). (1) If Ω′ is an extension of a big field Ω (i.e. |Ω| > dim_Ω(Ω′)), show that Ω′ is algebraic over Ω. Conclude that if Ω is algebraically closed as well as big, then Ω′ = Ω: big algebraically closed fields admit no extensions in which they remain big. (2) Show that bigness passes to the algebraic closure: if Ω is big for J then its algebraic closure Ω̄ remains big for J̄ := Ω̄J. (3) Show that if Ω is a big algebraically closed field relative to an algebra whose centroid is a field (e.g. a simple algebra), then the algebra is already central-simple over Ω: Γ(J) = Ω.
Part V

Appendices
These appendices serve as eddies, where we can leisurely establish important results whose technical proofs would disrupt the narrative flow of the main body of the text. We have made free use of some of these results, especially Macdonald's Theorem, in our treatment, but their proofs are long, combinatorial, or computational, and do not contribute ideas and methods of proof which are important for the mainstream of our story. To emphasize that these are digressions from the main path, and should be consulted only after gaining a global picture, we present them here in an uninviting small type. A hypertext version of this book would have links at this point to the appendices, which could only be opened after the main body of text had been perused at least once.
APPENDIX A
Cohn's Special Theorems

In this chapter we obtain some purely associative results due to P. M. Cohn in 1954, relating the symmetric elements and the Jordan elements in a free associative algebra, providing a criterion for the homomorphic image of a special algebra to be special.
1. Free Gadgets

Let FA[X] denote the free unital associative algebra on the set X: the free Φ-module spanned by all monomials x_1 … x_n for all n ≥ 0 (the empty product for n = 0 serving as unit) and all x_i ∈ X (not necessarily distinct), with product determined by linearity and "concatenation" (x_1 … x_n)(x_{n+1} … x_{n+m}) = x_1 … x_n x_{n+1} … x_{n+m}. We think of the elements of the free algebra as free or generic associative polynomials p(x_1, …, x_n) in the variables x ∈ X. Let ι be the canonical set-theoretic imbedding of X in FA[X]. The free algebra has the universal property that any set-theoretic mapping φ of X into a unital associative algebra A factors uniquely through the canonical imbedding ι via a homomorphism FA[φ]: FA[X] → A of unital associative algebras; we say that φ extends uniquely to a unital associative algebra homomorphism FA[φ]. [Universal Diagram] This extension should be thought of as an evaluation map p(x_1, …, x_n) → p(a_1, …, a_n), obtained by substituting the a_i = φ(x_i) ∈ A for the variables x_i. Free associative polynomials were born to be evaluated on associative Φ-algebras, in the same way that ordinary commutative polynomials in Φ[X] were born to be evaluated on commutative Φ-algebras.

The free algebra FA[X] also has a unique reversal involution fixing the generators x ∈ X: (x_1 x_2 … x_n)* = x_n … x_2 x_1. For convenience we will usually denote the action ρ(p) simply by p*, though when we refer to him by name we will always call the involution ρ. The *-algebra (FA[X], *) has the universal property that any set-theoretic mapping of X into the hermitian part H(A, *) of a unital associative *-algebra (A, *) extends uniquely to a *-homomorphism (FA[X], *) → (A, *) of unital *-algebras factoring through the canonical *-imbedding X → H(FA[X], *). The reversible elements ρ(p) = p in FA[X] form a Jordan algebra H(FA[X], *) containing X; the Jordan subalgebra generated by X is called the free special Jordan algebra FSJ[X].
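The free algebra, the reversal involution, and the closure of reversible elements under Jordan products are all concrete enough to compute with. Below is a minimal sketch over Q (the helper names `add`, `mul`, `rev`, `jordan` are mine, not the text's), checking that a typical Jordan product of generators is reversible:

```python
from fractions import Fraction

# A free associative polynomial over Q is a dict {word: coeff},
# where a word is a tuple of generators, e.g. ('x', 'y', 'x').

def add(p, q):
    r = dict(p)
    for w, c in q.items():
        r[w] = r.get(w, Fraction(0)) + c
    return {w: c for w, c in r.items() if c}

def mul(p, q):                      # concatenation product, extended bilinearly
    r = {}
    for w1, c1 in p.items():
        for w2, c2 in q.items():
            r[w1 + w2] = r.get(w1 + w2, Fraction(0)) + c1 * c2
    return {w: c for w, c in r.items() if c}

def rev(p):                         # reversal involution: (x1 x2 ... xn)* = xn ... x2 x1
    return {w[::-1]: c for w, c in p.items()}

def jordan(p, q):                   # Jordan product p∘q = ½(pq + qp)
    return {w: Fraction(c, 2) for w, c in add(mul(p, q), mul(q, p)).items()}

x = {('x',): Fraction(1)}
y = {('y',): Fraction(1)}

# Generators are reversible, and Jordan products of reversible elements
# are again reversible, so p below lies in H(FA[X],*):
p = jordan(x, jordan(x, y))
assert rev(p) == p
```

The same representation evaluates any special Jordan polynomial by substituting matrices for the generator words.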
We will call an element of FSJ[X] a free special or generic special Jordan polynomial on X. FSJ[X] has the universal property that any set-theoretic mapping φ of X into a special unital Jordan algebra J extends uniquely to a homomorphism FSJ[φ]: FSJ[X] → J of unital Jordan algebras factoring through the canonical imbedding X → FSJ[X]. [Universal Diagram]
The map FSJ[φ] is again evaluation p(x_1, …, x_n) → p(a_1, …, a_n) at a_i = φ(x_i) for all Jordan polynomials p in the x's. Again, special Jordan polynomials were born to be evaluated (though only on special Jordan algebras). We obtain a free associative or free special Jordan functor F (F = FA or FSJ) from the category of sets to the category of unital associative or special Jordan algebras, sending X to F[X] and a map f: X → Y to the homomorphism F[f]: F[X] → F[Y] induced by X → Y → F[Y]. In particular, for X ⊆ Y the natural set inclusion X → Y induces a canonical homomorphism F[X] → F[Y], and it is clear this is a monomorphism, so we will always identify FA[X] with the subalgebra of FA[Y] generated by X. Under this associative homomorphism Jordan products go into Jordan products, inducing an identification of FSJ[X] with the Jordan subalgebra of FSJ[Y] generated by X.
2. Cohn Symmetry

Clearly FSJ[X] ⊆ H(FA[X], *), and the question is how close these are. As long as Φ contains a scalar ½, this is easy to answer: on the one hand, the tetrads {x_1, x_2, x_3, x_4} := x_1x_2x_3x_4 + x_4x_3x_2x_1 are reversible elements BUT NOT JORDAN PRODUCTS in FSJ[X]. Indeed, in the exterior algebra Λ[X] the subspace spanned by X (which, by convention, means all Φ-linear combinations of generators, consisting of wedges of length 1) forms a special Jordan subalgebra with trivial products x ∧ x = x ∧ y ∧ x = x_1 ∧ x_2 + x_2 ∧ x_1 = x_1 ∧ x_2 ∧ x_3 + x_3 ∧ x_2 ∧ x_1 = 0, but not containing tetrads since x_1 ∧ x_2 ∧ x_3 ∧ x_4 + x_4 ∧ x_3 ∧ x_2 ∧ x_1 = 2 x_1 ∧ x_2 ∧ x_3 ∧ x_4 ∉ ΦX. On the other hand, as soon as we adjoin these tetrads to the generators X we can generate all reversible elements.

Theorem A.1 (Cohn Symmetry). The Jordan algebra H(FA[X], *) of reversible elements of the free associative algebra over Φ ∋ ½ is precisely the Jordan subalgebra generated by FSJ[X] together with all increasing tetrads {x_1, x_2, x_3, x_4} for distinct x_1 < x_2 < x_3 < x_4 (in some ordering of X).

proof. Let B denote the subalgebra generated by X and the increasing tetrads. Since ½ ∈ Φ, the reversible elements are all "traces" a + ρ(a), hence are spanned by the traces of the monomials, which are just the n-tads {x_1, x_2, …, x_n} := x_1x_2⋯x_n + x_n⋯x_2x_1 for x_i ∈ X. [WARNING: for n = 0, 1 we fudge and decree the 0-tad is 1 and the 1-tad is {x_1} = x_1 (not x_1 + ρ(x_1) = 2x_1!!)] We will prove that all n-tads are ≡ 0 modulo B by induction on n. The cases n = 0, 1, 2, 3 are trivial (they are Jordan products of x's). For n = 4 the tetrads can all be replaced by tetrads with distinct x's arranged in increasing order, since {x_1, x_2, x_3, x_4} is an alternating function of its arguments mod B (a tetrad with two adjacent elements equal reduces to a Jordan triad in FSJ[X] ⊆ B, e.g. {x_1, x, x, x_4} = {x_1, x², x_4}).
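The exterior-algebra counterexample can be machine-checked. A minimal sketch (helper names are mine; we work over Z and verify the doubled products xy + yx = 0 rather than divide by 2):

```python
from functools import reduce

# A wedge monomial is a tuple of distinct generator indices; wedging
# sorts the indices and tracks the sign of the sorting permutation.

def wedge_word(w1, w2):
    w = list(w1 + w2)
    if len(set(w)) < len(w):
        return None, 0                  # repeated generator: wedge is 0
    sign = 1
    for i in range(len(w)):             # bubble sort, counting transpositions
        for j in range(len(w) - 1):
            if w[j] > w[j + 1]:
                w[j], w[j + 1] = w[j + 1], w[j]
                sign = -sign
    return tuple(w), sign

def wedge(p, q):                        # bilinear extension to {word: coeff}
    r = {}
    for w1, c1 in p.items():
        for w2, c2 in q.items():
            w, s = wedge_word(w1, w2)
            if s:
                r[w] = r.get(w, 0) + s * c1 * c2
    return {w: c for w, c in r.items() if c}

def add(p, q):
    r = dict(p)
    for w, c in q.items():
        r[w] = r.get(w, 0) + c
    return {w: c for w, c in r.items() if c}

def word(*idx):                         # the product x_{i1} ∧ ... ∧ x_{in}
    return reduce(wedge, ({(i,): 1} for i in idx))

x1, x2, x3, x4 = ({(i,): 1} for i in range(1, 5))

# Jordan products of generators are trivial: x1 x2 + x2 x1 = 0 ...
assert add(wedge(x1, x2), wedge(x2, x1)) == {}
# ... but the tetrad survives: x1x2x3x4 + x4x3x2x1 = 2 x1∧x2∧x3∧x4 ≠ 0.
tetrad = add(word(1, 2, 3, 4), word(4, 3, 2, 1))
assert tetrad == {(1, 2, 3, 4): 2}
```

The reversal of four anticommuting generators is the even permutation (14)(23), which is why the two terms reinforce instead of cancel.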
Now assume n > 4 and the result has been proven for all m < n, and consider an n-tad x_I := {x_1, x_2, …, x_n} determined by the n-tuple I = (1, 2, …, n). By induction we have mod B that 0 ≡ {x_1, {x_2, x_3, …, x_n}} = {x_1, x_2, …, x_n} + {x_2, …, x_n, x_1} = x_I + x_{σ(I)}, where σ is the n-cycle (12…n). Thus (1) x_{σ(I)} ≡ −x_I mod B. Applying this repeatedly gives x_I = x_{σⁿ(I)} ≡ (−1)ⁿ x_I, so that when n is ODD we have 2x_I ≡ 0, hence x_I ≡ 0 mod B in the presence of ½.
From now on assume that n is EVEN, so the n-cycle σ is an odd permutation, and (1) becomes (2) x_{σ(I)} ≡ (−1)^σ x_I mod B (where (−1)^π denotes the sign of a permutation π). We have the same for the transposition τ = (12): by induction we have 0 ≡ {x_1, x_2, {x_3, x_4, …, x_n}} = {x_1, x_2, x_3, …, x_n} + {x_3, x_4, …, x_n, x_2, x_1} = x_I + x_{σ²τ(I)} ≡ x_I + (−1)^{σ²} x_{τ(I)}, and (3) x_{τ(I)} ≡ (−1)^τ x_I mod B. Putting (2), (3) together gives (4) x_{π(I)} ≡ (−1)^π x_I mod B for all permutations π in S_n, since the transposition τ and the n-cycle σ generate S_n.

Now we bring in the tetrads. Since B contains the tetrads and is closed under Jordan products, we have by induction 0 ≡ {{x_1, x_2, x_3, x_4}, {x_5, …, x_n}} = {x_1, x_2, x_3, x_4, x_5, …, x_n} + {x_4, x_3, x_2, x_1, x_5, …, x_n} + {x_5, …, x_n, x_1, x_2, x_3, x_4} + {x_5, …, x_n, x_4, x_3, x_2, x_1} = x_I + x_{(14)(23)(I)} + x_{σ⁴(I)} + x_{σ⁴(14)(23)(I)} ≡ x_I + (−1)^{(14)(23)} x_I + (−1)^{σ⁴} x_I + (−1)^{σ⁴}(−1)^{(14)(23)} x_I (by (4), with (14)(23) and σ⁴ both even) = 4x_I, so again in the presence of ½ we can conclude x_I ≡ 0. This completes the induction that all n-tads fall in B.

When |X| ≤ 3, there are no tetrads with distinct variables, so as a corollary we obtain

Theorem A.2 (Cohn 3-Symmetry). We have equality H(FA[X], *) = FSJ[X] when |X| ≤ 3: any reversible associative expression in at most 3 variables is a Jordan product.
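The bracket expansions that drive the induction — step (1) for the n-cycle and the analogous step for the transposition — can be verified mechanically in the free associative algebra. A sketch with hypothetical helper names, for n = 6:

```python
# Free associative polynomials as {word: coeff}, words as tuples.

def add(p, q):
    r = dict(p)
    for w, c in q.items():
        r[w] = r.get(w, 0) + c
    return {w: c for w, c in r.items() if c}

def mul(p, q):                      # concatenation, extended bilinearly
    r = {}
    for w1, c1 in p.items():
        for w2, c2 in q.items():
            r[w1 + w2] = r.get(w1 + w2, 0) + c1 * c2
    return r

def ntad(w):                        # {x1,...,xn} := x1...xn + xn...x1
    return add({w: 1}, {w[::-1]: 1}) if len(w) > 1 else {w: 1}

def brace2(p, q):                   # {p, q} := pq + qp
    return add(mul(p, q), mul(q, p))

def brace3(p, q, r):                # {p, q, r} := pqr + rqp
    return add(mul(mul(p, q), r), mul(mul(r, q), p))

w = tuple('uvwxyz')                 # six distinct generators, n = 6
gen = lambda s: {(s,): 1}

# {x1, {x2,...,xn}} = {x1,...,xn} + {x2,...,xn,x1}   (the n-cycle step)
assert brace2(gen('u'), ntad(w[1:])) == add(ntad(w), ntad(w[1:] + w[:1]))
# {x1, x2, {x3,...,xn}} = {x1,...,xn} + {x3,...,xn,x2,x1}   (the transposition step)
assert brace3(gen('u'), gen('v'), ntad(w[2:])) == \
       add(ntad(w), ntad(w[2:] + ('v', 'u')))
```

Working with the doubled products pq + qp keeps everything over the integers; the ½'s of the proof are only needed to cancel the resulting factors of 2.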
3. Cohn Speciality

Now we turn to the question of which images of the free special Jordan algebra remain special. Surprisingly, speciality is not inherited by all images: the class of special algebras does not form a variety defined by identities; only the larger class of homomorphic images of special algebras (the identity-special or i-special algebras) can be defined by identities (the s-identities, those that vanish on all special algebras).

Theorem A.3 (Cohn Speciality Criterion). Any homomorphic image of a special algebra is isomorphic to FSJ[X]/K for some set of generators X and some ideal K ◁ FSJ[X]. Such an image will be special iff the kernel K is "closed", in the sense that the associative ideal K̄ it generates still intersects FSJ[X] precisely in the original kernel:
(1) K̄ ∩ FSJ[X] = K   (K̄ := I_{FA[X]}(K)).
Put in seemingly more general terms,
(2) FSJ[X]/K is special iff K = I ∩ FSJ[X] for some I ◁ FA[X].

proof. Every special Jordan algebra J is a homomorphic image of some FSJ[X]: by the universal property of the free special Jordan algebra there is a homomorphism of FSJ[X] to J for any set X of generators of J as Φ-algebra [at a pinch, X = J will do], which is an epimorphism since the image contains the entire generating set X. Thus every image of a special algebra is also an image
of some FSJ[X] and thus is isomorphic to FSJ[X]/K for K the kernel of the epimorphism. We now investigate speciality of such an FSJ[X]/K. The seeming generality of (2) is illusory: the trace conditions in (1) and (2) are equivalent, since as soon as K = I ∩ FSJ[X] is the trace on FSJ[X] of some ideal it is immediately the trace of its closure: K ⊆ I ◁ FA[X] ⟹ K ⊆ K̄ ⊆ I, hence K ⊆ FSJ[X] ∩ K̄ ⊆ FSJ[X] ∩ I = K forces K = FSJ[X] ∩ K̄. Thus it suffices to prove (2).

For the direction (⇐), as soon as K is the trace of some associative ideal as in (2), the quotient is immediately seen to be special by the 3rd Fundamental Homomorphism Theorem: FSJ[X]/K = FSJ[X]/(I ∩ FSJ[X]) ≅ (FSJ[X] + I)/I ⊆ FA[X]/I is imbedded in the associative algebra FA[X]/I. For the converse direction (⇒), we suppose the quotient is special, so there exists a faithful specialization σ: FSJ[X]/K → A⁺. The map φ: X → FSJ[X] → FSJ[X]/K → A⁺ is a map of sets, hence induces an associative homomorphism φ_A: FA[X] → A by the associative universal property; moreover, since φ_A|FSJ[X] and σ ∘ π (π the canonical projection) are both Jordan homomorphisms on FSJ[X] which agree with φ on the generators X, they must agree everywhere by the uniqueness in the Jordan universal property: φ_A|FSJ[X] = σ ∘ π. Thus ker(φ_A) ∩ FSJ[X] = ker(φ_A|FSJ[X]) = ker(σ ∘ π) = K exhibits K as the trace of an associative ideal.

When X consists of at most 2 variables, all homomorphic images are special.

Theorem A.4 (Cohn 2-Speciality). If |X| ≤ 2, then every homomorphic image of FSJ[X] is special: every i-special algebra on at most two generators is actually special, indeed it is isomorphic to H(A, *) for an associative algebra A with involution *.

proof. We may work entirely with unital algebras, since a non-unital J is isomorphic to H(A, *) iff the unital hull Ĵ = Φ1 ⊕ J is isomorphic to H(Â, *) = Φ1 ⊕ H(A, *). For convenience, we work only with the case of 2 generators X = {x, y} (for 1 generator, just replace all y's by x's).
We may represent our 2-generator unital algebra as (1) J = FSJ[x, y]/K for a Jordan ideal K ◁ FSJ[x, y] = H(FA[x, y], *) (using Cohn 2-Symmetry A.2); its associative closure K̄ is spanned by all pkq where p, q are associative monomials in x, y and k ∈ K, and is thus automatically a *-ideal invariant under the reversal involution (remember that k ∈ FSJ[x, y] is reversible, and it is a general fact that if the generating set X is closed under an involution, X* ⊆ X, then the ideal I_A(X) generated by X in A is automatically a *-ideal invariant under the involution). By the Cohn Speciality Criterion A.3 the homomorphic image (1) is special iff (2) K̄ ∩ FSJ[x, y] = K. By Cohn 2-Symmetry A.2 the trace K̄ ∩ FSJ[x, y] = K̄ ∩ H(FA[x, y], *) consists precisely of all reversible elements of K̄, and due to the presence of ½ the reversible elements are all "traces" u + u* of elements u of K̄, which are spanned by all m(k) := pkq + q*kp*. We claim each individual m(k) lies in K. Now in the free associative algebra FA[x, y, z] on 3 generators the element m(z) := pzq + q*zp* is reversible, so by Cohn 3-Symmetry A.2 it is a Jordan product which is homogeneous of degree 1 in z, thus of the form m(z) = M_{x,y}(z) for some Jordan multiplication operator M_{x,y} in x, y. Under the associative homomorphism FA[x, y, z] → FA[x, y]
induced by x, y, z → x, y, k this Jordan polynomial is sent to m(k) = M_{x,y}(k), so the latter is a Jordan multiplication acting on k ∈ K and hence falls back in the ideal K, establishing (2) and speciality: J ⊆ A := FA[x, y]/K̄. But we are not content with mere speciality. Since K̄ is invariant under *, the associative algebra A inherits a reversal involution * from that on FA[x, y], whose symmetric elements are (again thanks to the presence of ½) precisely all traces a + a* = π(u) + π(u)* (π the canonical projection of FA[x, y] on A) = π(u* + u) = π(s) for s ∈ H(FA[x, y], *) = FSJ[x, y] (by Cohn 2-Symmetry A.2), so H(A, *) = π(FSJ[x, y]) = (FSJ[x, y] + K̄)/K̄ ≅ J. Thus J arises as the full set of symmetric elements of an associative algebra with involution.

This result fails as soon as we reach 3 variables. We can give our first example of a homomorphic image of a special algebra which is not special, merely i-special.

Example A.5 (i-Special-but-not-Special). The quotient FSJ[x, y, z]/K is not special when K is the Jordan ideal generated by k = x² − y².

proof. By Cohn's Criterion we must show K̄ ∩ FSJ[X] > K. The reversible tetrad k̄ := {k, x, y, z} = {x², x, y, z} − {y², x, y, z} falls in FSJ[x, y, z] by Cohn 3-Symmetry, and it certainly lies in the associative ideal K̄ generated by K, but we claim it does not lie in K itself. Indeed, the elements of K are precisely all images M(k) of M(t) for Jordan multiplications M by x, y, z acting on FSJ[x, y, z, t]. Since an M(t) of degree i, j, ℓ in x, y, z maps under t → x² − y² to an element with terms of degrees i+2, j, ℓ and i, j+2, ℓ in x, y, z, it only contributes to the element k̄, with its terms of degrees 3, 1, 1 and 1, 3, 1, when i = j = ℓ = 1. Thus we may assume M = M_{1,1,1} is homogeneous of degree 1 in each of x, y, z.
As elements of H[x, y, z, t] (which, as we've noted, in 4 variables is slightly larger than FSJ[x, y, z, t]), we can certainly write such an M(t) as a linear combination of the 12 possible tetrads of degree 1 in each of x, y, z, t:

M(t) = M_{1,1,1}(t) = α₁{txyz} + α₂{tyxz} + α₃{xtyz} + α₄{xytz} + α₅{yxtz} + α₆{ytxz} + α₇{txzy} + α₈{xtzy} + α₉{tyzx} + α₁₀{ytzx} + α₁₁{xyzt} + α₁₂{yxzt}.

By assumption this maps to

{x³yz} − {y²xyz} = k̄ = M(k)
  = α₁({x³yz} − {y²xyz}) + α₂({x²yxz} − {y³xz}) + α₃({x³yz} − {xy³z}) + α₄({xyx²z} − {xy³z}) + α₅({yx³z} − {yxy²z}) + α₆({yx³z} − {y³xz}) + α₇({x³zy} − {y²xzy}) + α₈({x³zy} − {xy²zy}) + α₉({x²yzx} − {y³zx}) + α₁₀({yx²zx} − {y³zx}) + α₁₁({xyzx²} − {xyzy²}) + α₁₂({yxzx²} − {yxzy²})
  = (α₁ + α₃){x³yz} − α₁{y²xyz} − (α₂ + α₆){y³xz} + (α₅ + α₆){yx³z} − (α₃ + α₄){xy³z} + (α₈ + α₇){x³zy} − (α₁₀ + α₉){y³zx} + α₂{x²yxz} + α₉{x²yzx} − α₇{y²xzy} + α₄{xyx²z} − α₅{yxy²z} − α₈{xy²zy} + α₁₀{yx²zx} + α₁₁{xyzx²} + α₁₂{yxzx²} − α₁₁{xyzy²} − α₁₂{yxzy²}.

Identifying coefficients of the 18 independent 5-tads on both sides of the equation gives α₁ + α₃ = 1 = α₁ and α₂ + α₆ = α₃ + α₄ = 0, while all 9 single-coefficient α_i for i = 2, 9, 7, 4, 5, 8, 10, 11, 12 vanish, which in turn implies from the double-coefficient terms that the 2 remaining α_i for i = 3, 6 vanish, leaving only α₁ = 1 nonzero. But
then M(t) = {t, x, y, z} would be a tetrad, contradicting the fact that M(t) is a Jordan product.

Remark A.6. This argument fails if we take k = x² instead of k = x² − y²: then k̄ := {x, y, z, k} = {xyzx²} = (½V_{{x,y,z}} − U_{x,z}V_y + V_{x,y}V_z)(x²) is a Jordan product back in K. Of course, this by itself doesn't prove FSJ[x, y, z]/I(x²) is special; I don't know the answer to that one.
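The coefficient identification in the proof of Example A.5 can be machine-checked: expand each of the 12 tetrads at t = x² − y², set the result equal to k̄, and solve the linear system over Q. A sketch (helper names are mine) confirming that α₁ = 1, all other α_i = 0, is the unique solution:

```python
from fractions import Fraction

TETRADS = ['txyz', 'tyxz', 'xtyz', 'xytz', 'yxtz', 'ytxz',
           'txzy', 'xtzy', 'tyzx', 'ytzx', 'xyzt', 'yxzt']

def tad(w):                          # n-tad {w} = w + reverse(w)
    p = {}
    for word in (w, w[::-1]):
        p[word] = p.get(word, 0) + 1
    return p

def image(w):                        # tetrad {w} evaluated at t = x² − y²
    out = {}
    for val, sgn in (('xx', 1), ('yy', -1)):
        for word, c in tad(w.replace('t', val)).items():
            out[word] = out.get(word, 0) + sgn * c
    return {word: c for word, c in out.items() if c}

cols = [image(w) for w in TETRADS]
target = image('txyz')               # = {x³yz} − {y²xyz}, the element k̄

words = sorted(set().union(*cols, target))
A = [[Fraction(col.get(w, 0)) for col in cols] for w in words]
b = [Fraction(target.get(w, 0)) for w in words]

# Gauss-Jordan elimination on the overdetermined system A·α = b.
rows, pivots, r = len(A), [], 0
for c in range(12):
    piv = next((i for i in range(r, rows) if A[i][c]), None)
    if piv is None:
        continue
    A[r], A[piv], b[r], b[piv] = A[piv], A[r], b[piv], b[r]
    inv = 1 / A[r][c]
    A[r] = [v * inv for v in A[r]]
    b[r] *= inv
    for i in range(rows):
        if i != r and A[i][c]:
            f = A[i][c]
            A[i] = [vi - f * vr for vi, vr in zip(A[i], A[r])]
            b[i] -= f * b[r]
    pivots.append((r, c))
    r += 1

assert len(pivots) == 12             # full column rank: solution is unique
assert all(v == 0 for v in b[r:])    # and the system is consistent
alpha = [Fraction(0)] * 12
for row, c in pivots:
    alpha[c] = b[row]
assert alpha == [1] + [0] * 11       # only α₁ = 1 survives
```

The full-rank check is the substantive part: it is the machine version of "the 18 5-tads are independent".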
APPENDIX B
Macdonald's Theorem

In this appendix we will descend into the combinatorics of the free Jordan algebra on 3 generators in order to obtain Macdonald's Principle. All algebras here will be unital.
1. The Free Jordan Algebra

We have heretofore been silent about the free (unital) Jordan Φ-algebra on a set of generators X, consisting of a unital Jordan Φ-algebra FJ[X]¹ with a given mapping ι: X → FJ[X] satisfying the universal property: any mapping φ: X → J of the set X into a unital Jordan Φ-algebra J extends uniquely to a homomorphism (or factors uniquely through ι via) φ̃: FJ[X] → J of unital Jordan Φ-algebras. [Universal Diagram] The condition that the extension be unique just means that the free algebra is generated (as unital Φ-algebra) by the set X. The explicit construction indicated below shows that ι is injective (we can also see this by noting the map X → ⊕_{x∈X} Φ1_x (each Φ1_x a faithful copy of Φ) via x → 1_x is injective, and if it factors through ι then ι must also be injective). We will always assume that X is contained in FJ[X]. Like all free gadgets, the free algebra is determined up to isomorphism by its universal property: an algebra that acts like a free algebra with respect to X is a free algebra. Although by this approach we cannot speak of the free algebra, there is a canonical construction which we always keep in the back of our minds as "the" free algebra: FJ[X] := FL[X]/K, where FL[X] = Φ[M[X]] is the free unital nonassociative Φ-algebra generated by X and K is the ideal of "Jordan relations". Here the free monad M[X] on the set X is a nonassociative monoid (set with unit and closed under a binary product); it has the universal property that any set-theoretic mapping φ: X → M into a monad M extends uniquely to a homomorphism φ̃: M[X] → M of monads.
This is a graded set which can be constructed recursively as follows: the only monomial of degree 0 is the unit 1, the monomials of degree 1 are precisely the elements x of X, and if the monomials of degree < n have been constructed, the monomials of degree n are precisely all m = (pq) (an object consisting of an ordered pair of monomials p, q surrounded by parentheses) for p, q monomials of degrees ∂p, ∂q > 0 with ∂p + ∂q = n. For example, the elements of degree 2, 3 are all (xy), ((xy)z), (x(yz)). The mapping (p, q) → (pq)
¹Note we don't bother to indicate the base scalars Φ in the notation; a more precise notation would be FJ_Φ[X], but we will never be this picky, relying on the reader's by-now-finely-honed common sense.
for p, q ≠ 1, together with (1, p) → p, (p, 1) → p, gives a "totally nonassociative" product on M[X] with unit 1. The free linear algebra FL[X] = Φ[M[X]] is the monad algebra generated by the free monad M[X], i.e. the free Φ-module based on the monomials m ∈ M, consisting of all linear combinations a = Σ_m α_m m with bilinear multiplication ab = (Σ_m α_m m)(Σ_n β_n n) := Σ_{m,n} α_m β_n (mn), with canonical inclusion x → x of X → M[X] → Φ[M[X]]. This has the universal property that every set-theoretic map φ: X → A into an arbitrary unital Φ-algebra extends uniquely (factors through ι) to a homomorphism φ̃: FL[X] → A of unital Φ-algebras. [Universal Diagram] Namely, φ̃ is the unique linear map whose values on monomials are defined recursively: on degree 0 by φ̃(1) := 1_A, on degree 1 by φ̃(x) := φ(x) for the generators x ∈ X, and if defined up to degree n then φ̃((pq)) := φ̃(p)φ̃(q). The ideal K of relations which must be divided out is the ideal generated by all (ab) − (ba), (((aa)b)a) − ((aa)(ba)) for a, b ∈ FL[X], precisely the elements that must vanish in the quotient in order for the commutative law and Jordan identity III.1.9 (JAx1–2) to hold. Thus we have a representation of the free Jordan algebra in the form FJ[X] = FL[X]/K.

Exercise B.0: Show that the ideal K is generated in terms of monomials m, n, p, q ∈ M[X] by all (1) (mn) − (nm), (2) (((mm)n)m) − ((mm)(nm)), (3) (((mm)n)p) + (((mp)n)m) + (((pm)n)m) − ((mm)(np)) − ((mp)(nm)) − ((pm)(nm)), (4) (((mq)n)p) + (((qm)n)p) + (((mp)n)q) + (((qp)n)m) + (((pq)n)m) + (((pm)n)q) − ((mq)(np)) − ((qm)(np)) − ((qp)(nm)) − ((mp)(nq)) − ((pq)(nm)) − ((pm)(nq)).

Exercise B.0: (1) Show that FL[X] carries a reversal involution (involutory isomorphism from FL[X] to its opposite FL[X]^op) uniquely determined by (x)* = x for all x ∈ X. Show that the Jordan kernel K is invariant under the involution, (K)* ⊆ K, so that FJ[X] inherits a reversal involution. Why is this involution never mentioned in the literature?
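The recursive construction of the free monad is easy to animate. The sketch below (the helper name `monomials` is mine) enumerates the monomials of each degree on a single generator; as a standard side remark not made in the text, the number of degree-n monomials is the Catalan number C_{n−1} (1, 1, 2, 5, 14, …), since such monomials are exactly the binary bracketings of n letters:

```python
# Monomials of the free monad M[X] on X = {'x'}: degree 1 is the
# generator itself, and degree n consists of all pairs (p q) with
# deg p + deg q = n and deg p, deg q > 0.

def monomials(n):
    if n == 1:
        return ['x']
    out = []
    for k in range(1, n):
        for p in monomials(k):
            for q in monomials(n - k):
                out.append('(' + p + q + ')')
    return out

# Degrees 2 and 3, exactly as listed in the text: (xx), ((xx)x), (x(xx)).
assert monomials(2) == ['(xx)']
assert sorted(monomials(3)) == ['((xx)x)', '(x(xx))']
```

For several generators one simply seeds degree 1 with all of X; the recursion is unchanged.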
Irrespective of how we represent the free algebra, we have a standard result that if X ⊆ Y then FJ[X] can be canonically identified with the Jordan subalgebra of FJ[Y] generated by X: the canonical epimorphism of FJ[X] onto this subalgebra, induced by X → Y → FJ[Y], is an isomorphism (it is injective because it has as left inverse the homomorphism FJ[Y] → FJ[X] induced by Y → X ∪ {0} via x → x (x ∈ X), y → 0 (y ∈ Y \ X)). In particular, we will always identify FJ[x, y] with the elements of FJ[x, y, z] of degree 0 in z. A Jordan polynomial or Jordan product f(x, y, z) in 3 variables is just an element of the free Jordan algebra FJ[x, y, z]. Like any polynomial, it determines a mapping (a, b, c) → f(a, b, c) of J³ → J for any Jordan algebra J by evaluating² the variables x, y, z at specific elements a, b, c. This, of course, is just the universal property: the map φ: (x, y, z) → (a, b, c) induces a homomorphism
²This standard process in mathematics is called specializing the indeterminates x, y, z to the values a, b, c, going from general or "generic" values to particular or "special" values. But here the Jordan polynomial can be "specialized" to values in any Jordan algebra, not just special Jordan algebras in the technical sense, so to avoid confusion with "special" in the Jordan sense we will speak of evaluating f at a, b, c.
φ̃: FJ[x, y, z] → J, and f(a, b, c) is just φ̃(f(x, y, z)). Such a polynomial vanishes on J if f(a, b, c) = 0 for all a, b, c ∈ J, i.e. all evaluations in J produce the value 0. We have a canonical specialization FJ[x, y, z] → FSJ[x, y, z] (in the Jordan sense of homomorphism into a special algebra) onto the free special Jordan algebra (inside the free associative FA[x, y, z]) fixing x, y, z; the kernel of this homomorphism is the ideal of s-identities, those Jordan polynomials in three variables that vanish on (x, y, z) in FSJ[X], hence, by the universal property of the free special algebra, on any (a, b, c) in any special Jordan algebra J ⊆ A⁺.

2. Identities

Remember that we have used The Macdonald for simply everything we have done so far, so we have to go back to the very beginning to prove The Macdonald itself. The Jordan identity (JAx2) [x², y, x] = 0 in operator form says L_x commutes with L_{x²}; its linearization [x², y, z] + 2[x∘z, y, x] = 0 acting on z becomes L_{x²∘y} − L_{x²}L_y + 2(L_xL_y − L_{x∘y})L_x = 0, leading (cf. Linearization Proposition III.1.10 (JAx2′), (JAx2″)) to the Basic Identities

(B.1.1) [L_x, L_{x²}] = [V_x, U_x] = 0;
(B.1.2) L_{x²∘y} = −2L_xL_yL_x + L_{x²}L_y + 2L_{x∘y}L_x = −2L_xL_yL_x + L_yL_{x²} + 2L_xL_{x∘y};
(B.1.3) L_{(x∘z)∘y} = −L_xL_yL_z − L_zL_yL_x + L_yL_{z∘x} + L_zL_{x∘y} + L_xL_{y∘z} = −L_xL_yL_z − L_zL_yL_x + L_{z∘x}L_y + L_{x∘y}L_z + L_{y∘z}L_x;
(B.1.4) U_x := 2L_x² − L_{x²};  U_{x,y} = 2(L_xL_y + L_yL_x − L_{x∘y});
(B.1.5) {x, y, y} = {x, y²};  V_{x,y} = V_xV_y − U_{x,y};  V_x := 2L_x.

(The second equality in (2), (3) follows from linearizing L_xL_{x²} = L_{x²}L_x. For the first part of (5), note by (4) that {x, y, y} = U_{x,y}y = 2(x∘y² + y∘(x∘y) − (x∘y)∘y) = 2x∘y² = {x, y²}. Linearizing y → y, z gives {x, y, z} + {x, z, y} = {x, {y, z}}; interpreting this as an operator on z yields V_{x,y} + U_{x,y} = V_xV_y, as required for the second part of (5).)
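The Basic Identities hold in every Jordan algebra, so in particular they can be spot-checked numerically in the special Jordan algebra of n×n real matrices under x∘y = ½(xy + yx). A sketch (helper names are mine) checking (B.1.1) and (B.1.2) on random matrices:

```python
import numpy as np

rng = np.random.default_rng(0)

def sym(n=4):
    m = rng.standard_normal((n, n))
    return m + m.T

def L(x):                           # multiplication operator L_x : a -> x∘a
    return lambda a: (x @ a + a @ x) / 2

def o(x, y):                        # Jordan product x∘y = ½(xy + yx)
    return (x @ y + y @ x) / 2

x, y, a = sym(), sym(), sym()
x2 = x @ x                          # x∘x = x²

# (B.1.1): [L_x, L_{x²}] = 0
assert np.allclose(L(x)(L(x2)(a)), L(x2)(L(x)(a)))

# (B.1.2): L_{x²∘y} = −2 L_x L_y L_x + L_{x²} L_y + 2 L_{x∘y} L_x
lhs = L(o(x2, y))(a)
rhs = -2 * L(x)(L(y)(L(x)(a))) + L(x2)(L(y)(a)) + 2 * L(o(x, y))(L(x)(a))
assert np.allclose(lhs, rhs)
```

Such a check can of course never replace a proof, but it is a cheap safeguard against sign errors when transcribing operator identities.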
The reader will notice that we have abandoned our practice of giving mnemonic nicknames to results and formulas; this appendix in particular is filled with technical results only a lemma could love, and we will keep our acquaintance with them as brief as possible and fasten our attention on the final goal. From Basic (B.1.3) we can immediately establish operator-commutativity and associativity of powers.

Lemma B.2 (Power Associativity). (1) If the powers of an element x in a unital Jordan algebra are defined recursively by x⁰ := 1, x¹ := x, and x^{n+1} := x∘xⁿ, then all the multiplication operators L_{xⁿ} are polynomials in the commuting operators L_x, L_{x²}, and therefore they all commute with each other. (2) The element x generates a commutative associative subalgebra Φ[x]: we have power-associativity xⁿ∘x^m = x^{n+m}.

proof. We prove (1) by recursion on n, the cases n = 0, 1, 2 being trivial. Assuming it for powers < n+2, we have for the (n+2)nd power (setting y = x, z = xⁿ in Basic (B.1.3)) that L_{x^{n+2}} = L_{(x∘xⁿ)∘x} = 2L_{x^{n+1}}L_x + L_{x²}L_{xⁿ} − L_{xⁿ}L_xL_x − L_xL_xL_{xⁿ}, where by recursion L_{xⁿ}, L_{x^{n+1}} are polynomials in L_x, L_{x²}, and hence L_{x^{n+2}} is too.
(2) then follows easily by induction on n + m; the result is trivial for n + m = 0, 1, 2, and assuming it for degrees < n + m, with m ≥ 2, then xⁿ∘x^m = L_{xⁿ}(L_x x^{m−1}) = L_x(L_{xⁿ} x^{m−1}) [by commutativity (1)] = L_x(x^{n+m−1}) [by the induction hypothesis on (2)] = x^{n+m} [by definition of the power].

In fact, (B.2)(1) is just the special case X = {x} of the following result about generation of multiplication operators, which again flows out of Basic (B.1.3).

Theorem B.3 (Generation). If X generates a subalgebra B of a unital Jordan algebra J, then the multiplication algebra M_B(J) of B on J (generated by all multiplications L_b for elements b ∈ B) is generated by all the operators L_x, L_{x²}, L_{x∘y} (or, equivalently, by all the V_x, U_x, U_{x,y}) for x, y ∈ X.³

proof. In view of Basic (B.1.4), U_x, U_{x,y} are equivalent mod V_x, V_y to L_{x²}, L_{x∘y}, so it suffices to prove the L-version. For this, it suffices to generate L_b for all monomials b of degree ≥ 3 in the elements from X. Such a b may be written as b = (p∘q)∘r for monomials p, q, r, which by Basic (B.1.3) can be broken down into operators L_s of lower degree, so by repeating we can break all L_b down into a sum of products of L_c's for c of degree 1 or 2, which are x, x², or x∘y for generators x, y ∈ X.

We need the following general identities and identities for operator-commuting elements.

Lemma B.4 (Operator Commuting). Let x, y be arbitrary elements of a Jordan algebra. Then we have the Commuting Formulas
(B.4.1) U_{x²,x} = V_xU_x = U_xV_x;
(B.4.2) U_{x²,y} = V_xU_{x,y} − U_xV_y = U_{x,y}V_x − V_yU_x;
(B.4.3) 2U_{x∘y,x} = V_yU_x + U_xV_y.
More generally, if x′ is an element which operator-commutes with x (all the operators L_x, L_{x′}, L_{x²}, L_{x∘x′} commute), then we have
(B.4.4) U_{x′∘x,x} = V_{x′}U_x = U_xV_{x′};
(B.4.5) U_{x∘x′} = U_xU_{x′};
(B.4.6) U_{x²∘x′,y} = V_xU_{x∘x′,y} − U_xU_{x′,y} = U_{x∘x′,y}V_x − U_{x′,y}U_x;
(B.4.7) U_{x²∘x′,y} = V_{x∘x′}U_{x,y} − U_xV_{x′,y}.

proof. (1) follows from (2) with y = x, in view of Basic (B.1.1).
The first equality in (2) holds since U_{x²,y} − V_xU_{x,y} + U_xV_y = 2(L_{x²}L_y + L_yL_{x²} − L_{x²∘y}) − 2L_x · 2(L_xL_y + L_yL_x − L_{x∘y}) + (2L_x² − L_{x²}) · 2L_y [using Basic (B.1.4)] = 2(L_yL_{x²} − L_{x²∘y} − 2L_xL_yL_x + 2L_xL_{x∘y}) = 0 [by Basic (B.1.2)]. The second equality in (2) holds by linearizing x → x, y in Basic (B.1.1), V_xU_x = U_xV_x. (3) is equivalent to (2), since their sum is just the linearization x → x, x, y of (1), U_{x²,x} = V_xU_x. Notice that (1)–(3) are symmetric under reversal of products. The first equality in (4) follows by linearizing x → x, x′ in (B.4.2) and setting y = x to get 2U_{x∘x′,x} = V_xU_{x′,x} + V_{x′}U_{x,x} − U_{x,x′}V_x = V_{x′}(2U_x), by the assumed commutativity of L_x with L_{x′}, L_{x∘x′} (hence of V_x with U_{x,x′}, in view of Basic (B.1.4)), then cancelling 2's. The second equality holds symmetrically.

³The multiplication algebra of B on J is sometimes denoted M_J(B) or M(B)|_J
or just M(B) (if J is understood), but we will use a subscript for B since that is where the multipliers live in the operators L_b, V_b, U_b, U_{b,c}.
Applying this thrice yields (5): 2U_xU_{x′} = (V_x² − V_{x²})U_{x′} [by Basic (B.1.4)] = V_xU_{x∘x′,x′} − U_{x²∘x′,x′} [by (4) twice] = U_{x∘(x∘x′),x′} + U_{x∘x′,x∘x′} − U_{x²∘x′,x′} [using linearized (4)] = U_{x∘x′,x∘x′} [since x∘(x∘x′) = x²∘x′ by commutativity of L_x, L_{x′} acting on x] = 2U_{x∘x′}, and we again divide by 2. The difference of the sides in the first equality in (6) is 2(V_xU_{x∘x′,y} − U_{x²∘x′,y} − U_xU_{x′,y}) = V_x(V_xU_{x′,y} + V_{x′}U_{x,y} − U_{x,x′}V_y) − (V_{x²}U_{x′,y} + V_{x′}U_{x²,y} − U_{x²,x′}V_y) − (V_x² − V_{x²})U_{x′,y} [by linearizing x → x, x′ and also x → x², x′ in (2), and using Basic (B.1.4)] = (U_{x²,x′} − V_xU_{x,x′})V_y + V_{x′}(V_xU_{x,y} − U_{x²,y}) [since V_x ↔ V_{x′}] = −U_xV_{x′}V_y + V_{x′}U_xV_y [by (2) with y → x′, and by (2) itself] = 0 [by V_{x′} ↔ U_x]. The second equality in (6) follows symmetrically. (7) is equivalent to the first equality in (6), since the sum of their right sides minus their left sides is V_{x∘x′}U_{x,y} + V_xU_{x∘x′,y} − U_x(V_{x′,y} + U_{x′,y}) − 2U_{x²∘x′,y} = V_{x∘x′}U_{x,y} + V_xU_{x∘x′,y} − U_{x,x∘x′}V_y − 2U_{x∘(x∘x′),y} [by Basic (B.1.5) and (4), and noting that x∘(x∘x′) = x²∘x′ by the assumed commutativity of L_x, L_{x′} acting on x] = 0 [as the linearization x → x, x∘x′ of (2)].
Exercise B.4. Derive the first equality in (B.4.4) directly from (B.1.3).
Lemma B.5 (Macdonald Tools). For any elements x, y in a unital Jordan algebra and k, i ≥ 1 with m = min{k, i}, we have the operator identities:
(B.5.1) U_{x^k} U_{x^i} = U_{x^{k+i}};
(B.5.2) U_{x^k,y} U_{x^i} = U_{x^{k+i},y} V_{x^i} − U_{x^{k+2i},y};
(B.5.3) V_{x^k} U_{x^i,y} − U_{x^{k+i},y} = U_{x^m} Δ_{k,i}, for Δ_{k,i} := U_{x^{i−k},y} if i ≥ k (m = k), Δ_{k,i} := V_{x^{k−i},y} if k ≥ i (m = i);
(B.5.4) V_{x^k} V_{x^i} − V_{x^{k+i}} = U_{x^m} U_{x^{k−m},x^{i−m}} = U_{x^k,x^i}.
proof. By Power-associativity (B.2)(1) any x^n, x^m operator-commute with x^n x^m = x^{n+m}, and we apply the results of Commuting (B.4). (1) is a special case of (B.4.5) [replacing x → x^k, x′ → x^i, xx′ → x^{k+i}], (2) is a special case of the second part of (B.4.6) [with x → x^i, x′ → x^k, xx′ → x^{k+i}, x^2 x′ → x^{k+2i}], while (3) follows from the first part of (B.4.6) [with x → x^k, x′ → x^{i−k}, xx′ → x^i, x^2 x′ → x^{k+i}] when i ≥ k, and from (B.4.7) [with x → x^i, x′ → x^{k−i}, xx′ → x^k, x^2 x′ → x^{k+i}] when k ≥ i. Since U_{x^n,1} = U_{1,x^n} = V_{x^n,1} = V_{x^n}, (4) is just the special case y = 1 of (3) [using linearized (B.4.5) with x → x^m; x′ → x^{k−m}, x^{i−m}; xx′ → x^k, x^i for the last equality].
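The Macdonald Tools can likewise be tested numerically in the special Jordan algebra Mat_n(R)+ (a sketch assuming numpy; the operators V, U, U_{a,b} are realized by their associative formulas):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
x, y, z = (rng.standard_normal((n, n)) for _ in range(3))

# Jordan multiplication operators of the special Jordan algebra Mat_n(R)+
V  = lambda a: (lambda w: a @ w + w @ a)             # V_a(w) = {a,w}
U  = lambda a: (lambda w: a @ w @ a)                 # U_a(w) = awa
U2 = lambda a, b: (lambda w: a @ w @ b + b @ w @ a)  # U_{a,b}(w)
xp = lambda k: np.linalg.matrix_power(x, k)

k, i = 2, 3
# (B.5.2): U_{x^k,y} U_{x^i} = U_{x^{k+i},y} V_{x^i} - U_{x^{k+2i},y}
lhs = U2(xp(k), y)(U(xp(i))(z))
rhs = U2(xp(k + i), y)(V(xp(i))(z)) - U2(xp(k + 2 * i), y)(z)
assert np.allclose(lhs, rhs)

# (B.5.4): V_{x^k} V_{x^i} - V_{x^{k+i}} = U_{x^k,x^i}
lhs = V(xp(k))(V(xp(i))(z)) - V(xp(k + i))(z)
assert np.allclose(lhs, U2(xp(k), xp(i))(z))
print("Macdonald tool identities (B.5.2), (B.5.4) verified")
```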
3. Normal Form for Multiplications
In order to show the free and free special multiplications in two variables are isomorphic, we put them in a standard form of operators M_{p,q} which produce, acting on the element z ∈ FSJ[x,y,z], the basic reversible elements m(p; z; q) := pzq* + qzp*, homogeneous of degree 1 in z (p, q monomials in FA[x,y], with * the reversal involution). We define the operators and verify their action recursively, where the recursion is on the weight ω(p) of monomials p, defined as the number of alternating powers x^i, y^j (i, j > 0): if X, Y denote all monomials beginning with a positive power of x, y respectively, then ω(1) = 0, ω(x^i) = ω(y^j) = 1, and ω(p) = 1 + ω(p′) if p = x^i p′ (resp. y^j p′) for p′ ∈ Y (resp. X). We define the weight of a pair of monomials to be the sum of the individual weights, ω(p, q) := ω(p) + ω(q).
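The weight recursion above is easy to mechanize. A minimal sketch, assuming monomials in FA[x,y] are encoded as strings over the letters 'x', 'y' (the empty string standing for the monomial 1): the weight is just the number of maximal blocks of equal letters.

```python
from itertools import groupby

def weight(p: str) -> int:
    """Weight of a monomial in FA[x,y], given as a string like 'xxyx'.

    omega(1) = 0, omega(x^i) = omega(y^j) = 1, and each further
    alternation between a block of x's and a block of y's adds 1,
    so omega(p) is the number of maximal blocks of equal letters.
    """
    return len([k for k, _ in groupby(p)])

print(weight(""))        # 0  (the monomial 1)
print(weight("xxx"))     # 1  (a pure power x^3)
print(weight("xxyyyx"))  # 3  (blocks x^2, y^3, x^1)
```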
Macdonald's Theorem
Definition B.6. The multiplication operators M_{p,q} = M_{q,p} determined by associative monomials p, q ∈ FA[x,y] are defined recursively in terms of the weight ω(p, q) as follows:
(B.6.1) M_{1,1} = 2 Id; M_{x^i,1} := V_{x^i}; M_{y^j,1} := V_{y^j}; M_{x^i,y^j} := U_{x^i,y^j};
(B.6.2x) M_{x^i p′, x^j q′} := U_{x^i} M_{p′, x^{j−i} q′} (if j ≥ i; p′, q′ ∈ Y ∪ {1});
(B.6.2y) M_{y^i p′, y^j q′} := U_{y^i} M_{p′, y^{j−i} q′} (if j ≥ i; p′, q′ ∈ X ∪ {1});
(B.6.3) M_{x^i p′, y^j q′} := U_{x^i,y^j} M_{p′,q′} − M_{y^j p′, x^i q′} (p′ ∈ Y ∪ {1}, q′ ∈ X ∪ {1}, not p′ = q′ = 1);
(B.6.4x) M_{x^i p′, 1} := V_{x^i} M_{p′,1} − M_{p′, x^i} (p′ ∈ Y);
(B.6.4y) M_{y^i p′, 1} := V_{y^i} M_{p′,1} − M_{p′, y^i} (p′ ∈ X).
Note that the M_{p,q} are operators on the free Jordan algebra which are parametrized by elements of the free associative algebra. We now verify that they produce the reversible elements m(p; z; q) in the free associative algebra. Recall by Cohn Symmetry A.2 that FSJ[x,y,z] = H(FA[x,y,z], *) consists precisely of all reversible elements of the free associative algebra.
Lemma B.7 (Action). The multiplication operators M_{p,q} act on the element z in the free special algebra FSJ[x,y,z] by
M_{p,q}(z) = m(p; z; q) = pzq* + qzp*.
proof. We will prove this by recursion on the weight ω(p, q). For weights 0, 1, 2 as in (B.6.1), we have M_{1,1}(z) = 2 Id(z) = 2z = 1z1 + 1z1, M_{x^i,1}(z) = V_{x^i}(z) = x^i z + z x^i = x^i z 1 + 1 z (x^i)* [analogously for M_{y^j,1}], and M_{x^i,y^j}(z) = U_{x^i,y^j}(z) = x^i z y^j + y^j z x^i = x^i z (y^j)* + y^j z (x^i)*. For (B.6.2x) [analogously (B.6.2y)], if j ≥ i, p′, q′ ∈ Y ∪ {1} we have M_{x^i p′, x^j q′}(z) = U_{x^i} M_{p′, x^{j−i} q′}(z) = x^i (p′ z (q′)* x^{j−i} + x^{j−i} q′ z (p′)*) x^i [by recursion, since ω(p′, x^{j−i} q′) ≤ ω(p′) + ω(q′) + 1 < ω(p′) + ω(q′) + 2 = ω(p, q)] = pzq* + qzp*. For (B.6.3), if p′ ∈ Y ∪ {1}, q′ ∈ X ∪ {1}, not p′ = q′ = 1, then M_{x^i p′, y^j q′}(z) = U_{x^i,y^j} M_{p′,q′}(z) − M_{y^j p′, x^i q′}(z) = U_{x^i,y^j}(p′ z (q′)* + q′ z (p′)*) − (y^j p′ z (q′)* x^i + x^i q′ z (p′)* y^j) [by recursion, since (p′, q′) and (y^j p′, x^i q′) have lower weight than (p, q) as long as one of p′, q′ is not 1 (e.g. if 1 ≠ p′ ∈ Y then ω(y^j p′) = ω(p′))] = x^i p′ z (q′)* y^j + y^j q′ z (p′)* x^i = pzq* + qzp*. Finally, for (B.6.4x) [analogously (B.6.4y)] if p′ ∈ Y then M_{x^i p′, 1}(z) = V_{x^i} M_{p′,1}(z) − M_{p′, x^i}(z) = V_{x^i}(p′ z 1 + 1 z (p′)*) − (p′ z x^i + x^i z (p′)*) [by recursion, since (p′, 1) has lower weight than (p, 1), and (p′, x^i) ∈ (Y, X) has the same weight as (x^i p′, 1) but falls under case (B.6.3) handled above] = x^i p′ z 1 + 1 z (p′)* x^i = pz1 + 1zp*.
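The recursion of Definition B.6 and the Action Lemma can be exercised concretely. The following sketch (assumptions: monomials are strings over 'x','y', the reversal * is string reversal, and random matrices in Mat_n(R)+ stand in for a special Jordan algebra) implements M_{p,q} exactly by the recursive clauses and checks the action formula M_{p,q}(z) = pzq* + qzp*:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
X, Y, Z = (rng.standard_normal((n, n)) for _ in range(3))
letter = {"x": X, "y": Y}

def mat(p):
    """The monomial p (a string over 'x','y') as a matrix; the empty word is 1."""
    m = np.eye(n)
    for ch in p:
        m = m @ letter[ch]
    return m

def lead(p):
    """Length of the maximal leading block of equal letters."""
    i = 1
    while i < len(p) and p[i] == p[0]:
        i += 1
    return i

def U(a, b, w):  # U_{a,b}(w) = awb + bwa
    return a @ w @ b + b @ w @ a

def M(p, q, z):
    """The operator M_{p,q} of Definition B.6 acting on z."""
    if p == "" and q == "":
        return 2 * z                               # (B.6.1): M_{1,1} = 2 Id
    if q == "":
        i = lead(p)
        if p[i:] == "":
            return mat(p) @ z + z @ mat(p)         # (B.6.1): M_{x^i,1} = V_{x^i}
        w = M(p[i:], "", z)
        a = mat(p[:i])
        return a @ w + w @ a - M(p[i:], p[:i], z)  # (B.6.4)
    if p == "":
        return M(q, p, z)                          # symmetry M_{p,q} = M_{q,p}
    if p[0] == q[0]:                               # (B.6.2): common leading letter
        i, j = lead(p), lead(q)
        if i > j:
            p, q, i, j = q, p, j, i
        a = mat(p[:i])
        return a @ M(p[i:], q[:j - i] + q[j:], z) @ a
    if p[0] == "y":
        return M(q, p, z)                          # reduce to the x-first case
    i, j = lead(p), lead(q)
    if p[i:] == "" and q[j:] == "":
        return U(mat(p), mat(q), z)                # (B.6.1): M_{x^i,y^j} = U_{x^i,y^j}
    return (U(mat(p[:i]), mat(q[:j]), M(p[i:], q[j:], z))
            - M(q[:j] + p[i:], p[:i] + q[j:], z))  # (B.6.3)

# Action Lemma B.7: M_{p,q}(z) = p z q* + q z p*, with * the reversal of monomials
for p, q in [("xxy", "yx"), ("xyx", ""), ("yyx", "xy"), ("x", "yxy")]:
    expected = mat(p) @ Z @ mat(q[::-1]) + mat(q) @ Z @ mat(p[::-1])
    assert np.allclose(M(p, q, Z), expected)
print("Action Lemma verified on sample monomials")
```

The recursion terminates for the same reason the proof's induction works: each clause either lowers the weight ω(p, q) or moves an initial power outward into a case of lower weight.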
The crux of the Macdonald Principle is the following verification that these operators span all free multiplications by x and y; from here it will be easy to see that these map onto a basis for the free special multiplications by x and y, and therefore they must actually form a basis for the free multiplications.
Lemma B.8 (Closure). The span M of all M_{p,q} (p, q monomials in FA[x,y]) is closed under left multiplication by all multiplication operators in x, y: for all
monomials p, q in FA[x,y] and all k, ℓ ≥ 0 we have⁴
(B.8.1) U_{x^k} M_{p,q} = M_{x^k p, x^k q}, U_{y^k} M_{p,q} = M_{y^k p, y^k q};
(B.8.2) U_{x^k,y^ℓ} M_{p,q} = M_{x^k p, y^ℓ q} + M_{y^ℓ p, x^k q};
(B.8.3) V_{x^k} M_{p,q} = M_{x^k p, q} + M_{p, x^k q}, V_{y^k} M_{p,q} = M_{y^k p, q} + M_{p, y^k q}.
Hence M is the entire multiplication algebra M_{[x,y]}(FJ[x,y,z]).
proof. (1) By symmetry in x, y we need only prove the x-version of (1), and we give a direct proof of this. If p or q lives in Y ∪ {1}, then x^k is all the x you can extract simultaneously from both sides of (x^k p, x^k q), so the result follows directly from Definition (B.6.2x). If both p, q live in X, p = x^i p′, q = x^j q′ for j ≥ i, p′, q′ ∈ Y ∪ {1}, then U_{x^k} M_{p,q} − M_{x^k p, x^k q} = U_{x^k} M_{x^i p′, x^j q′} − M_{x^{k+i} p′, x^{k+j} q′} = U_{x^k} U_{x^i} M_{p′, x^{j−i} q′} − U_{x^{k+i}} M_{p′, x^{j−i} q′} [by Definition (B.6.2x)] = 0 [by Tool (B.5.1)].
(2) and (3) are considerably more delicate; we prove them by induction on the total weight ω(p, q), treating three separate cases. The first case, where p = q = 1 both lie in the same set {1}, is easy: M_{1,1} = 2 by Definition (B.6.1), and M_{x^k,y^ℓ} + M_{y^ℓ,x^k} = 2U_{x^k,y^ℓ} and M_{x^k,1} + M_{1,x^k} = 2V_{x^k} by Definition (B.6.1). Assume we have proven (2),(3) for all lesser weights. Note that no limits are placed on k, ℓ; this will allow us to carry out a subinduction by moving factors outside to increase k, ℓ and reduce what is left inside.
For the second case, when p, q both lie in X (dually Y) we can handle both (2) and (3) together by allowing k, ℓ to be zero: p = x^i p′, q = x^j q′, for i, j ≥ 1, p′, q′ ∈ Y ∪ {1}, where by symmetry we may assume j ≥ i. Then the difference at issue is
Δ_{k,ℓ}(p,q) := U_{x^k,y^ℓ} M_{x^i p′, x^j q′} − M_{x^{k+i} p′, y^ℓ x^j q′} − M_{y^ℓ x^i p′, x^{k+j} q′}
= U_{x^k,y^ℓ} U_{x^i} M_{p′, x^{j−i} q′} − [U_{x^{k+i},y^ℓ} M_{p′, x^j q′} − M_{y^ℓ p′, x^{k+i+j} q′}] − M_{y^ℓ x^i p′, x^{k+j} q′} [by Definitions (B.6.2x, 3)]
= [U_{x^{k+i},y^ℓ} V_{x^i} − U_{x^{k+2i},y^ℓ}] M_{p′, x^{j−i} q′} − U_{x^{k+i},y^ℓ} M_{p′, x^j q′} + M_{y^ℓ p′, x^{k+i+j} q′} − M_{y^ℓ x^i p′, x^{k+j} q′} [by Tool (B.5.2)]
= U_{x^{k+i},y^ℓ} [V_{x^i} M_{p′, x^{j−i} q′} − M_{p′, x^j q′}] − U_{x^{k+2i},y^ℓ} M_{p′, x^{j−i} q′} + M_{y^ℓ p′, x^{k+i+j} q′} − M_{y^ℓ x^i p′, x^{k+j} q′}
= U_{x^{k+i},y^ℓ} M_{x^i p′, x^{j−i} q′} − M_{x^{k+2i} p′, y^ℓ x^{j−i} q′} − M_{y^ℓ x^i p′, x^{k+j} q′} [by induction (3),(2), since (p′, x^{j−i} q′) has lesser weight than (p, q)]
= Δ_{k+i,ℓ}(x^i p′, x^{j−i} q′).
If j = i, (x^i p′, x^{j−i} q′) has lesser weight than (p, q); otherwise it has the same weight but lower total x-degree i + (j−i) < i + j in the initial x-factors of p′ and q′, so by induction on this degree we can reduce to the case where at least one of the initial factors vanishes and we have lower weight.
For the third case, where p, q lie in different spaces, by symmetry we can assume (p, q) is contained in either (1, Y) or (X, Y) or (X, 1). This is easy when k, ℓ > 0 (i.e. for (2)): then y^ℓ p, x^k q begin with exactly ℓ y's and k x's, and the result follows directly from Definition (B.6.3) of M_{y^ℓ p, x^k q}. This finishes all cases of (2). For (3), it suffices by symmetry to consider the x-version. We have three separate subcases according as (p, q) is contained in either (1, Y) or (X, Y) or (X, 1).
⁴ Note that by the Generation Theorem we need only prove closure in (1)-(3) for k = ℓ = 1, but our inductive proof of (2)-(3) requires k, ℓ to be able to grow. The proof for (1) is just as easy for general k as it is for k = 1, so we may as well do the general case for all three parts.
The first case (p, q) ∈ (1, Y) is easiest: here p = 1, q = y^i q′, and the difference is
V_{x^k} M_{1, y^i q′} − M_{x^k, y^i q′} − M_{1, x^k y^i q′} = 0
by Definition (B.6.4x) on M_{1, x^k y^i q′} = M_{x^k y^i q′, 1}. The second case (p, q) ∈ (X, Y) of (3) is messiest, as it depends on which of i or k is larger. Here p = x^i p′, q = y^j q′ (p′ ∈ Y ∪ {1}, q′ ∈ X ∪ {1}), and the relevant difference becomes
Δ_{k,0}(p,q) = V_{x^k} M_{x^i p′, y^j q′} − M_{x^{k+i} p′, y^j q′} − M_{x^i p′, x^k y^j q′}
= V_{x^k} [U_{x^i,y^j} M_{p′,q′} − M_{y^j p′, x^i q′}] − [U_{x^{i+k},y^j} M_{p′,q′} − M_{y^j p′, x^{k+i} q′}] − U_{x^m} M_{x^{i−m} p′, x^{k−m} y^j q′} (m := min{i, k}), by Definitions (B.6.3, 3, 2x),
= [V_{x^k} U_{x^i,y^j} − U_{x^{i+k},y^j}] M_{p′,q′} − [V_{x^k} M_{y^j p′, x^i q′} − M_{y^j p′, x^{k+i} q′}] − U_{x^m} M_{x^{i−m} p′, x^{k−m} y^j q′},
where the first bracket equals U_{x^m} Δ_{k,i} by Tool (B.5.3), and the second bracket equals U_{x^m} M_{x^{k−m} y^j p′, x^{i−m} q′} if (p′, q′) ≠ (1, 1) [by (3) on the lower-weight pair (y^j p′, x^i q′), followed by Definition (B.6.2x)], and is handled by Tool (B.5.3) if (p′, q′) = (1, 1) [in view of Definition (B.6.1)].
First consider the case m = k ≤ i. By the definition of Δ_{k,i} in Tool (B.5.3), if (p′, q′) ≠ (1, 1) the inner term becomes U_{x^{i−k},y^j} M_{p′,q′} − M_{y^j p′, x^{i−k} q′} − M_{x^{i−k} p′, y^j q′} = Δ_{i−k,j}(p′, q′) = 0 by (2) for (p′, q′) of lesser weight, while if (p′, q′) = (1, 1) it becomes U_{x^{i−k},y^j} M_{1,1} − U_{x^{i−k},y^j} − U_{x^{i−k},y^j} = 0 by Definition (B.6.1).
Next consider the case m = i ≤ k. By the definition of Δ_{k,i} in Tool (B.5.3), if (p′, q′) = (1, 1) the inner term becomes V_{x^{k−i},y^j} M_{1,1} − V_{x^{k−i},y^j} − M_{1, x^{k−i} y^j} = 0, since by Basic (B.1.5) and Definition (B.6.4x) each of the three terms reduces to a multiple of V_{x^{k−i}} V_{y^j} − U_{x^{k−i},y^j}, with coefficients 2 − 1 − 1 = 0 [by Definition (B.6.1)]. The complicated case is (p′, q′) ≠ (1, 1), where the inner term becomes
V_{x^{k−i},y^j} M_{p′,q′} − M_{x^{k−i} y^j p′, q′} − M_{p′, x^{k−i} y^j q′}
= [V_{x^{k−i}} V_{y^j} − U_{x^{k−i},y^j}] M_{p′,q′} − [V_{x^{k−i}} M_{y^j p′, q′} − M_{y^j p′, x^{k−i} q′}] − [V_{x^{k−i}} M_{p′, y^j q′} − M_{x^{k−i} p′, y^j q′}]
[by Basic (B.1.5), and Δ_{k−i,0} vanishing by (3) on the lower-weight terms (y^j p′, q′), (p′, y^j q′) < (x^i p′, y^j q′) = (p, q)]
= V_{x^{k−i}} [V_{y^j} M_{p′,q′} − M_{y^j p′, q′} − M_{p′, y^j q′}] − [U_{x^{k−i},y^j} M_{p′,q′} − M_{y^j p′, x^{k−i} q′} − M_{x^{k−i} p′, y^j q′}]
= V_{x^{k−i}} Δ_{0,j}(p′, q′) − Δ_{k−i,j}(p′, q′),
which vanishes by (3),(2) on lesser-weight terms. This finishes the case (p, q) ∈ (X, Y). The third case (p, q) ∈ (X, 1) of (3) depends on the previous case. Here p = x^i p′, p′ ∈ Y ∪ {1}, q = 1. If p′ = 1 this reduces by Definition (B.6.1) to
V_{x^k} V_{x^i} − V_{x^{k+i}} − U_{x^m} U_{x^{k−m}, x^{i−m}} = 0,
and therefore follows from Tool (B.5.4). Henceforth we assume 1 ≠ p′ ∈ Y. The difference is
Δ_{k,0}(x^i p′, 1) = V_{x^k} M_{x^i p′, 1} − M_{x^{k+i} p′, 1} − M_{x^i p′, x^k}
= V_{x^k} [V_{x^i} M_{p′,1} − M_{p′, x^i}] − [V_{x^{k+i}} M_{p′,1} − M_{p′, x^{k+i}}] − U_{x^m} M_{x^{i−m} p′, x^{k−m}} (m := min(i, k)) [by Definitions (B.6.4x, 4x, 2x)]
= [V_{x^k} V_{x^i} − V_{x^{k+i}}] M_{p′,1} − [V_{x^k} M_{p′, x^i} − M_{p′, x^{k+i}}] − U_{x^m} M_{x^{i−m} p′, x^{k−m}}
= U_{x^m} U_{x^{k−m}, x^{i−m}} M_{p′,1} − M_{x^k p′, x^i} − U_{x^m} M_{x^{i−m} p′, x^{k−m}} [by Tool (B.5.4) and the second case of (3) above on M_{p′, x^i} ∈ M_{Y,X}, since we are assuming p′ ∈ Y]
= U_{x^m} [U_{x^{k−m}, x^{i−m}} M_{p′,1} − M_{x^{k−m} p′, x^{i−m}} − M_{x^{i−m} p′, x^{k−m}}] [by Definition (B.6.2x)]
= 0,
since if k > i, m = i the inner term is Δ_{k−i,0}(p′, 1) and if i > k, m = k it is Δ_{i−k,0}(p′, 1), which both vanish either by induction on weight or straight from Definition (B.6.4x); if k = i = m the result is trivial since U_{1,1} = 2. This completes the induction for (3). Since M contains 1_{FJ[x,y,z]} and is closed under left multiplication by the generators V_x, V_y, U_x, U_y, U_{x,y} [by the Generation Theorem (B.3)] for x, y the generators of B := Φ[x,y] ⊆ J := FJ[x,y,z], by (1)-(3) it contains all of M_{[x,y]}(FJ[x,y,z]).
4. The Macdonald Principles
We are now ready to establish the fundamental Macdonald Principle of 1958 in all its protean forms.
Theorem B.9 (Macdonald Principles). (1) There are no s-identities in three variables x, y, z which are of degree ≤ 1 in z. (2) The canonical homomorphism of the free Jordan algebra FJ[x,y,z] onto the free special Jordan algebra FSJ[x,y,z] in three variables is injective on elements of degree ≤ 1 in z. (3) Any Jordan polynomial f(x,y,z) which is of degree ≤ 1 in z and vanishes on all associative algebras A+ (equivalently, all special Jordan algebras J ⊆ A+) vanishes on all Jordan algebras J. (4) Any multiplication operator in two variables which vanishes on all special algebras will vanish on all Jordan algebras. (5) The canonical mapping of the multiplication subalgebra M_{[x,y]}(FJ[x,y,z]) to the multiplication subalgebra M_{[x,y]}(FSJ[x,y,z]) is injective.
proof. Let us first convince ourselves that all 5 assertions are equivalent. (1) ⟺ (2) since the s-identities are precisely the elements of the kernel of the canonical homomorphism FJ → FSJ. (2) ⟺ (3) since f vanishes on all (resp. all special) Jordan algebras iff it vanishes in FJ (resp. FSJ) by the universal property for the free algebras: f(x,y,z) = 0 in FJ[x,y,z] (resp. FSJ[x,y,z]) ⟺ f(a,b,c) = 0 for all a, b, c in any (resp. any special) J. The tricky part is the equivalence of (3) and (4). The homogeneous polynomials of degree 1 in z are precisely all f(x,y,z) = M_{x,y}(z) given by a multiplication operator in x, y, and [by definition of the zero operator] f(x,y,z) vanishes on an algebra J iff the operator M_{x,y} does. Thus (4) is equivalent to (3) for polynomials which are homogeneous of degree 1 in z. In particular, (3) ⟹ (4), but to prove (4) ⟹ (3) we must show that the addition of a constant term doesn't affect the result.
We will do this by separating f into its homogeneous components, and reducing each to a multiplication operator. Any polynomial of degree ≤ 1 in z can be written as f = f0 + f1 in terms of its homogeneous components of degree 0, 1 in z. Now it is a general fact that f vanishes on all algebras iff each of its homogeneous components does (see Problem 2 below), but in the present case we can easily see the stronger result that our f vanishes on any particular algebra iff its components f0, f1 do: the homogeneous f1(x,y,z) automatically vanishes at z = 0 [f1(a,b,0) = f1(a,b,0·0) = 0·f1(a,b,0) = 0 by homogeneity of degree 1 in z], and the constant term f0(x,y,z) = f0(x,y) does not involve z, therefore f vanishes at all (a,b,0) iff f0 vanishes at all (a,b), in which case f(a,b,c) = f0(a,b) + f1(a,b,c) = f1(a,b,c) and f vanishes at all (a,b,c) iff f1 does. Thus we have reduced the problem for f to a problem about its separate components:
(6) f vanishes on J ⟺ f0, f1 vanish on J.
Here in a trivial way we can also write the constant term in terms of a multiplication operator: f0(x,y) = N_{x,y}(1) for N_{x,y} := L_{f0(x,y)}. Moreover, since L_a = 0 on a unital algebra iff a = 0, it is clear that N_{x,y} vanishes on a unital algebra J iff the polynomial f0(x,y) does:
(7) f0(x,y) = N_{x,y}(1), f1(x,y,z) = M_{x,y}(z) vanish on J ⟺ N_{x,y}, M_{x,y} do (N_{x,y} := L_{f0(x,y)}).
From this we can show (4) ⟹ (3): by (6),(7) f vanishes in all special algebras iff the operators N, M do; by (4) this happens iff N_{x,y}, M_{x,y} vanish on the free algebra FJ, in which case they vanish at 1, z ∈ FJ and f(x,y,z) = N_{x,y}(1) + M_{x,y}(z) = 0 in FJ. Finally, (4) ⟺ (5) since a multiplication operator M_{x,y} is zero in M_{[x,y]}(FJ) (resp. M_{[x,y]}(FSJ)) iff it is zero on FJ (resp. FSJ) [by definition of the zero operator] iff it is zero on all Jordan algebras (resp. special Jordan algebras) J [evaluating x, y, z at any a, b, c ∈ J]. So all 5 forms are equivalent, but are any of them true?
We will establish version (5). The canonical map π from M_{[x,y]}(FJ) onto M_{[x,y]}(FSJ) will be injective (as asserted by (5)) if its composition τ := ε_z ∘ π with evaluation at z ∈ FSJ is an injective map τ: M_{[x,y]}(FJ) → FSJ[x,y,z]. By the Closure Lemma (B.8) M_{[x,y]}(FJ) is spanned by the elements M_{p,q} = M_{q,p} for associative monomials p, q ∈ FA[x,y], and we have a general result about linear transformations:
(8) A linear map T: M → N of Φ-modules will be injective if it takes a spanning set {m_i} into an independent set in N (in which case the original spanning set was already a basis for M). Indeed, T(m) = T(Σ_i α_i m_i) [since the m_i span M] = Σ_i α_i T(m_i) = 0 ⟹ all α_i = 0 [by independence of the T(m_i) in N] ⟹ m = 0.
Thus we need only verify that the τ(M_{p,q}) = M_{p,q}(z) = m(p; z; q) [by the Action Lemma (B.7)] are independent: the distinct monomials pzr in the free algebra FA[x,y,z] are linearly independent, and two reversible pzq* + qzp*, p′z(q′)* + q′z(p′)* can share a common monomial only if either pzq* = p′z(q′)* [in which case, by the uniqueness of expression in the free algebra we have p = p′, q = q′, M_{p,q} = M_{p′,q′}] or else pzq* = q′z(p′)* [in which case p = q′, q = p′, M_{p,q} = M_{q′,p′} = M_{p′,q′} by symmetry of the M's], so in both cases distinct M_{p,q}'s contribute distinct monomials, and so are independent. In view of (8), this completes the proof of (5) and hence the entire theorem.
Exercise B.9. (1) Show that a Jordan polynomial f(x_1,...,x_n) in n variables in FJ[X] (resp. FSJ[X]) can be uniquely decomposed into its homogeneous components of degree e_i in each variable x_i, f = Σ_{e_1,...,e_n} f_{e_1,...,e_n}. (2) Show (using indeterminate scalar extensions Ω = Φ[t_1,...,t_n]) that f vanishes on all algebras (resp. special algebras) iff each of its homogeneous components f_{e_1,...,e_n} vanishes on all algebras (resp. special algebras). In particular, conclude f = 0 in FJ iff each f_{e_1,...,e_n} = 0 in FJ.
Our version of Macdonald's Principles has subsumed Shirshov's earlier 1956 theorem. Actually, Macdonald's original theorem only concerned polynomials homogeneous of degree 1 in z, thus amounted to the operator versions (4),(5). The assertion about polynomials homogeneous of degree 0 in z is precisely Shirshov's Theorem, that FJ[x,y] = FSJ[x,y]. The reason (4) ⟹ (3) is tricky is that it is nothing but Macdonald swallowing up Shirshov, and any herpetologist can tell you that swallowing a major theorem requires a major distension of the jaws.
Theorem B.10 (Shirshov). The free Jordan algebra on two generators is special: the canonical homomorphism FJ[x,y] → FSJ[x,y] is an isomorphism.
proof. Certainly the canonical specialization π_2 (determined by π_2(x) = x, π_2(y) = y) is an epimorphism, since its image contains the generators x, y. But it is also a monomorphism: we have noted before that we have a canonical identification of FJ[x,y], FSJ[x,y] with the subalgebras B ⊆ FJ[x,y,z], B_s ⊆ FSJ[x,y,z] generated by x, y, under which the canonical projection π_2 corresponds to the restriction of the canonical projection π_3: FJ[x,y,z] → FSJ[x,y,z] to B → B_s. By Macdonald Principle (2), π_3 is injective on B, i.e. π_2 is injective on FJ[x,y]. This completes the proof that π_2 is an isomorphism.
In 1959 Cohn combined Shirshov's Theorem with his own Speciality Theorem A.4 to obtain the definitive result about Jordan algebras with two generators.
Theorem B.11 (Shirshov-Cohn).
Any Jordan algebra generated by two elements x, y is special, indeed is isomorphic to an algebra H(A, *) for an associative algebra A with involution *.
proof. As usual we may restrict ourselves to unital algebras, since J is isomorphic to H(A, *) iff the unital hull Ĵ is isomorphic to H(Â, *). In the unital case any J generated by two elements is a homomorphic image of FJ[x,y] = FSJ[x,y] [by Shirshov's Theorem], and by Cohn's 2-Speciality Theorem any image of FSJ[x,y] is special of the form H(A, *).
This establishes, once and for all, the validity of the basic principles we have used so frequently in our previous work.
5. Albert i-Exceptionality We have just seen that any Jordan polynomial in three variables linear in one of them will vanish on all Jordan algebras if it vanishes on all associative algebras. This fails as soon as the polynomial has degree at least 2 in all variables: we can exhibit s-identities of degree 8,9,11 which hold in all special Jordan algebras but not in the Albert algebra. This re-proves the exceptional nature of the Albert algebra; more, it shows the Albert algebra cannot even be a homomorphic image of a special algebra since it does not satisfy all their identities. Recall the discussion in II.4.7 of i-speciality and i-exceptionality.
Definition B.12 (i-Special). An s-identity is a Jordan polynomial (element of the free Jordan algebra) which is satisfied by all special algebras, but not by all algebras. Equivalently, the polynomial vanishes on all special algebras but is not zero in the free algebra. A Jordan algebra is i-special if it satisfies all the s-identities that the special algebras do; otherwise it is i-exceptional.
Thus being i-special is easier than being special: the algebra need only obey externally the laws of speciality (the s-identities), without being special in its heart (living an associative life). Correspondingly, being i-exceptional is harder than being exceptional: to be i-exceptional an algebra can't even look special as regards its identities: without any interior probing, it reveals externally its exceptional nature by refusing to obey one of the speciality laws. The first s-identities were discovered by Charles Glennie, while a more understandable one was discovered later by Armin Thedy.
Definition B.13 (Glennie). Glennie's Identities are the Jordan polynomials G_n := H_n(x,y,z) − H_n(y,x,z) of degree n = 8, 9 (degree 3 in x, y and degree 2, 3 in z) expressing the symmetry in x, y of the products
H_8(x,y,z) := {U_x U_y z, z, {x,y}} − U_x U_y U_z({x,y}),
H_9(x,y,z) := {U_x z, U_{y,x} U_z y^2} − U_x U_z U_{x,y} U_y z.
We may also write G_8 as {[U_x, U_y]z, z, {x,y}} − [U_x, U_y] U_z {x,y}. Thedy's Identity T_10 is the operator Jordan polynomial
T_10(x,y,z) := U_{U_{[x,y]}(z)} − U_{[x,y]} U_z U_{[x,y]}
of degree 10 (degree 4 in x, y and degree 2 in z) expressing the structurality of the transformation U_{[x,y]} := U_{{x,y}} − 2{U_x, U_y}; acting on an element w, this produces an element polynomial
T_11(x,y,z,w) := T_10(x,y,z)(w) = U_{U_{[x,y]}(z)} w − U_{[x,y]} U_z U_{[x,y]} w
of degree 11 (degree 4 in x, y, degree 2 in z, and degree 1 in w).
Theorem B.14 (s-Identity). Glennie's and Thedy's Identities are s-identities: they vanish in all special algebras, but not in all Jordan algebras, since they do not vanish on the Albert algebra H_3(O).
proof. The easy part is showing that these vanish in associative algebras: H_8 reduces to the symmetric 8-tad
(xyzyx)z(xy + yx) + (xy + yx)z(xyzyx) − xyz(xy + yx)zyx = {x,y,z,y,x,z,x,y} + {x,y,z,y,x,z,y,x} − {x,y,z,y,x,z,y,x} = {x,y,z,y,x,z,x,y},
and H_9 reduces to the symmetric 9-tad
(xzx)(y(zy^2 z)x + x(zy^2 z)y) + (y(zy^2 z)x + x(zy^2 z)y)(xzx) − xz(x(yzy)y + y(yzy)x)zx = {x,z,x,y,z,y,y,z,x} + {x,z,x,x,z,y,y,z,y} − {x,z,x,y,z,y,y,z,x} = {x,z,x,x,z,y,y,z,y},
and each such n-tad is symmetric in x, y. The operator U_{[x,y]} on z in a special algebra reduces to [x,y]z[x,y], involving honest commutators [x,y] (which make sense in the associative algebra, but not the special Jordan algebra), so acting on w we have T_10(w) = ([x,y]z[x,y])w([x,y]z[x,y]) − [x,y](z([x,y]w[x,y])z)[x,y] = 0.
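The "easy part" is also easy to check by machine. A sketch (assuming numpy) verifying that G_8 and T_11 vanish on the special Jordan algebra Mat_n(R)+, where U_a(c) = aca, {a,b} = ab + ba, {a,b,c} = abc + cba:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
x, y, z, w = (rng.standard_normal((n, n)) for _ in range(4))

U      = lambda a: (lambda c: a @ c @ a)           # U_a(c) = aca
brace  = lambda a, b: a @ b + b @ a                # {a,b}
triple = lambda a, b, c: a @ b @ c + c @ b @ a     # {a,b,c}

def H8(x, y, z):
    return triple(U(x)(U(y)(z)), z, brace(x, y)) - U(x)(U(y)(U(z)(brace(x, y))))

G8 = H8(x, y, z) - H8(y, x, z)
assert np.allclose(G8, 0)          # Glennie's G8 vanishes on special algebras

def U_comm(x, y, c):               # U_[x,y] := U_{{x,y}} - 2(U_x U_y + U_y U_x)
    s = brace(x, y)
    return U(s)(c) - 2 * (U(x)(U(y)(c)) + U(y)(U(x)(c)))

p = U_comm(x, y, z)                # = [x,y] z [x,y] in a special algebra
T11 = U(p)(w) - U_comm(x, y, U(z)(U_comm(x, y, w)))
assert np.allclose(T11, 0)         # Thedy's T11 vanishes on special algebras
print("G8 and T11 vanish on Mat_n(R)+")
```

That these expressions are nonzero on the Albert algebra is, of course, the hard part carried out below.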
The hard part is showing that these don't vanish in the reduced Albert algebra, i.e. choosing manageable substitutions which don't vanish in H_3(O) for an octonion algebra O with its scalar involution (all traces and norms t(d), n(d) lie in Φ1).
Nonvanishing of G_9. Throughout we use the Jacobson box notation and Hermitian Multiplication Rules III.3.7 without comment. For G_9 the calculations are not overly painful: take
(1) x := 1[12], y := 1[23], z := a[21] + b[13] + c[32].
We will not compute the entire value of G_9(x,y,z); we will apply the Peirce projection E_13 and show that already its component in the Peirce space J_13 is nonzero. We claim
(2) E_13 U_x = 0; U_x z = a[12]; U_y z = c[23]; U_z x^2 = n(a)(e_1 + e_2) + n(b)e_3 + n(c)e_3 + ab[23] + ca[31]; U_z y^2 = n(c)(e_3 + e_2) + n(a)e_1 + n(b)e_1 + bc[12] + ca[31].
The first line follows from the Peirce Triple Product Rules III.13.7(2): U_x J ⊆ J_11 + J_12 + J_22. Analogously E_13 U_y = 0, which lightens our burden by killing off two of the 4 terms in G_9. The second line follows from [U_{a[21]} + U_{b[13]} + U_{c[32]} + U_{a[21],b[13]} + U_{b[13],c[32]} + U_{c[32],a[21]}](e_1 + e_2) = n(a)(e_1 + e_2) + n(b)e_3 + n(c)e_3 + ab[23] + 0 + ca[31], and the third line follows analogously, due to symmetry in the indices 1, 3. From the Peirce relations we have
(3) E_13 V_{a[21]} = V_{a[21]} E_23; E_13 V_{c[32]} = V_{c[32]} E_12; E_23 U_{x,y} = U_{x,y} E_12; E_12 U_{x,y} = U_{x,y} E_23
(e.g. {a[23], J_ij} = 0 unless i or j links up, and E_13{a[23], J_2j} = 0 unless j = 1; E_13{c[23], J_3j} = 0, and dually for c[32]). Now we can compute
E_13{U_x z, U_{y,x} U_z y^2} = E_13 V_{a[12]} U_{x,y} U_z y^2 [by (2)] = V_{a[12]} E_23 U_{x,y} U_z y^2 [by (3)] = V_{a[12]} U_{x,y} E_12 U_z y^2 [by (3)] = V_{a[12]} U_{1[12],1[23]}(bc[12]) [by (2)] = V_{a[12]}(bc[23]) = a(bc)[13].
Dually,
E_13{U_y z, U_{x,y} U_z x^2} = E_13 V_{c[23]} U_{x,y} U_z x^2 = V_{c[23]} E_12 U_{x,y} U_z x^2 = V_{c[23]} U_{x,y} E_23 U_z x^2 = V_{c[23]} U_{1[12],1[23]}(ab[23]) = V_{c[23]}(ab[12]) = (ab)c[13].
Thus
G_9(x,y,z) = (a(bc) − (ab)c)[13] = −[a,b,c][13] vanishes on H_3(D) iff [a,b,c] = 0 for all a, b, c ∈ D, i.e. iff D is associative. Thus for a non-associative octonion algebra O Glennie's Identity of degree 9 does not vanish.
Nonvanishing of G_8. For G_8 the calculations are considerably more painful. We set
(1′) x := e_1 − e_3, y := a[12] + b[13] + c[23], z := 1[12] + 1[23].
Then from {e_i, a[ij]} = a[ij], U_{1[i2]} a[i2] = a[2i] = a[i2], {1[j2], a[i2], 1[i2]} = a[j2] we have
(2′) {x,y} = a[12] − c[23]; U_z({x,y}) = (a − c)[21] + (a − c)[32].
Thus z, U_z({x,y}) fall in the Peirce spaces J_12 + J_23, and if we take E := E_12 + E_23 then from U_x = E_11 − E_13 + E_33 we have U_x E = U_y U_x E = 0, and U_x U_y E = (E_11 − E_13 + E_33)[U_{a[12]} + U_{b[13]} + U_{c[23]} + U_{a[12],b[13]} + U_{b[13],c[23]} + U_{c[23],a[12]}](E_12 + E_23) = (E_11 U_{a[12],b[13]} E_32 − E_13 U_{a[12],b[13]} E_12) + (E_33 U_{b[13],c[23]} E_12 − E_13 U_{b[13],c[23]} E_23), so
(3′)
[Ux ; Uy ]E
= E11 Ub[13];a[21] E32 + E33 Uc[32];b[13] E21 E13 Ua[12];b[13]E21 E13 Ub[13];c[23]E32 ; E13 [Ux; Uy ]E = E13 Ua[12];b[13]E21 E13 Ub[13];c[23]E32 : We now examine only the 13-components of G8 . On the one hand, by (20 ); (30 ) E13 [Ux ; Uy ]Uz fx; yg = E13 [Ux ; Uy ]E (a c)[21] + (a c)[32] = E13 Ua[12];b[13] E12 (a c)[21] E13 Ub[13];c[23]E32 (a c)[32] = fa[12]; (a c)[21]; b[13]g fb[13]; (a c)[32]; c[23]g = a((a c)b) + (b(a c)) c [13] = a2 b + at(cb) a(bc) t(ba)c (ab)c + bc2 [13] [by alternativity] 2 = a2 b + at(cb) t(ba)c + [a; b; c] + bc [13] 2 2 = bc a b + at(cb) t(ba)c [a; b; c] [13] [by scalar involution]. On the other hand, by (20 ) we have E13 f[Ux; Uy ]z; z; fx; ygg = E13 f[Ux; Uy ]Ez; 1[12] + 1[23]; a[12] c[23]g = E13 f[Ux; Uy ]Ez; 1[12]; a[12]g E13 f[Ux; Uy ]Ez; 1[23]; c[23]g E13 f[Ux; Uy ]Ez; 1[12]; c[23]g + E13 f[Ux; Uy ]Ez; 1[23]; a[12]g = fa[12]; 1[21]; E13[Ux ; Uy ]Ez g fE13 [Ux; Uy ]Ez; 1[32]; c[23]g fE11 [Ux; Uy ]Ez; 1[12]; c[23]g + fa[12]; 1[23]; E33[Ux ; Uy ]Ez g which becomes, using (30 ), = fa[12]; 1[21]; Ua[12];b[13]1[21] + Ub[13];c[23]1[32]g +fUa[12];b[13]1[21] + Ub[13];c[23]1[32]; 1[32]; c[23]g fUb[13];a[21] 1[32]; 1[12]; c[23]g + fa[12]; 1[23]; Uc[32];b[13] 1[21]g = fa[12]; 1[21]; (ab + bc)[13]g + f(ab + bc)[13]; 1[32]; c[23]g ft(ba)[11]; 1[12]; c[23]g + fa[12]; 1[23]; t(cb)[33] g = a(ab + bc) + (ab + bc)c t(ba)c + at(cb) [13] = a2 b a(bc) + (ab)c + bc2 t(ba)c+ at(cb) [13] = bc2 a2 b + [a; b; c] t(ba)c + at(cb) [13]: Subtracting gives E13 G8 (x; y; z ) = 2[a; b; c][13] 6= 0, and the Albert algebra does not satisfy G8 . Nonvanishing of T11 Now we turn to Thedy's Identity. For arbitrary a; b; c 2 O we set (100 ) x := e1 e2; y := 1[12] + a[23] + b[13]; z := 1[13] + c[12]; w = e2 . 
We immediately get (200 ) Ux = E11 E12 + E22 ; fx; yg = b[13] a[23]; Ufx;yg = Ub[13] + Ua[23] Ub[13];a[23] Ufx;yg (w) = n(a)e3 ; Ufx;yg (z ) = b2 [13] ab[23] t(bca)e3 : We have similar, but longer, formulas for Uy : (300 ) Uy = U1[12] + Ua[23] + Ub[13] + U1[12];a[23] + U1[12];b[13] + Ua[23];b[13] so that (400 ) Uy w = e1 + n(a)e3 + a[13]; Uy 1[13] = b2 [13] + t(a)e2 + b[12] + ab[23]; Uy c[12] = c[12] + ca[23] + cb[13] + t(bca)e3 : From these we get (500 ) U[x;y]w = 4e1 n(a)e3 2a[13]; U[x;y]z = t(a) e2 +t(bca)e3 + 2b + 4c [12] + b2 + 2cb [13] + 2ca ab [23]
by computing
U[x;y]e2 := Ufx;yge2 2UxU y e2 2Uy Uxe2 = n(a)e3 2 e1 2 e1 + n(a)e3 + a[13] [using (200 ); (400 )] = n(a)e3 4e1 2a[13] for the rst formula, and for the second U[x;y]z := Ufx;ygz 2UxUy z 2Uy Uxz 00 = Ufx;ygz 2UxUy (1[13] + c[12]) + 2Uy c[12] [using (2 )] = Ufx;yg z 2 E11 E12 + E22 Uy 1[13] 2 E11 E12 + E22 I Uy c[12] [using (200 ) for Ux ] =U + E13 + E23 + 2E12 Uy c[12] fx;yg z 2 E11 E12 + E22 Uy1[13] + 2 E33 = b2 [13] ab[23] t(bca)e3 2 t(a)e2 b[12] +2 t(bca)e3 + cb[13] + ca[23] + 2c[12] [using (200 ); (400 ) twice] = b2 + 2cb [13] + ab + 2ca [23] + t(bca)e3 + 2t(a) e2 + 2b + 4c [12]: Again we will examine only the 13-components of T11 (x; y; z; q). Now in general we have E13 Up e2 = fE12 (p); E23 (p)g, so for the left side of Thedy we get (600 ) E13 UU[x;y] (z) w = 4b(ca) + 8n(c)a 2bab 4c(ab) [13] since by (500 ) fE12 (U[x;y](z )); E23(U[x;y](z ))g = f(2b + 4c)[12]; (2ca ab)[23]g = 4b(ca) 2b(ab) + 8c(ca) 4c(ab) [13] and we use the Kirmse Identity to simplify c(ca) = n(c)a, Attacking Thedy's right side, we start from (500 ) and compute Uz U[x;y]w = U1[13] + Uc[12] +U1[13];c[12] 4e1 n(a)e3 2a[13] = 4e3 n(a)e1 2a[13] + 4n(c)e2 + 4c[32] 2ac[12] : to obtain (700 ) Uz U[x;y]w = n(a)e1 4n(c)e2 4e3 2ac[12] 2a[13] 4c[32]. For the 13-component we have (800 ) E13 U[x;y] = E13 Ub[13] E13 E13 Ub[13];a[23]E23 2E13 U1[12];a[23]E22 + 2E13 U1[12];b[13] E12 00 since by (2 ) E13 Ub[13] + Ua[23] Ub[13];a[23] 2UxUy 2Uy Ux = E13 Ub[13] E13 + 0 Ub[13];a[23]E23 0 2E13 U1[12];a[23]E22 + U1[12];b[13]( E12 ) [from (200 ) and (300 ), noting E13 Ux = 0 by (200 )]. Applying this to (700 ) gives 00 (9 ) E13 U[x;y] Uz U[x;y]c = Ub[13] ( 2a)[13] Ub[13];a[23] ( 4c)[32] 2U1[12];a[23]( 4n(c)e2 ) + 2U1[12];b [13]( 2ac)[12] = 2bab + 4(bc)a + 8n(c)a 4(ca)b [13]: Subtracting (900 ) from (600 ) gives E13 T11 (x; y; z; w) = 4b(ca) 4c(ab) 4(bc)a + 4(ca)b [13] = ( 4[b; c; a]+4[c; a; b] [13] = ( 4[a; b; c]+4[a; b; c] [13] = 8[a; b; c][13]. 
Since O is nonassociative, we can find a, b, c for which this is nonzero, hence the Albert algebra does not satisfy T_11.
6. Problems for Appendix B
Problem 1. Suppose for each nonempty set X we have a free gadget FC[X] in some category C of pointed objects (spaces with a distinguished element 0 or 1), together with a given map ι: X → FC[X] of sets, with the universal property that any set-theoretic mapping of X into a C-object C extends uniquely (factors through ι) to a C-morphism FC[X] → C. (1) Show that we have a functor from the category of sets to C. (2) Show that if X ↪ Y is an imbedding then the induced FC[X] → FC[Y] is a monomorphism, allowing us to regard FC[X] as a subobject of FC[Y].
Problem 2. (1) Verify that the free and free special functors are indeed functors from the category of sets to the category of (resp. special) unital Jordan algebras. (2) Modify the constructions to provide (non-unital) free Jordan and special Jordan algebras F′[X] (resp. FJ′[X] or FSJ′[X]) with appropriate universal properties, and show these provide functors from the category of sets to the category of (resp. special) non-unital Jordan algebras. (3) Show that the canonical homomorphism from the free non-unital gadget F′[X] to the free unital gadget FL[X] induced by the map X → FL[X] is a monomorphism, with FL[X] = F̂′[X] just the unital hull.
Problem 3. Show a multiplication operator M_{x,y} vanishes on FJ (resp. FSJ) iff it vanishes at the generic element z.
APPENDIX C
Jordan Algebras of Degree 3
In III.4 we left the messy calculations to an appendix, and here we are. We will verify in detail that the Jordan identity holds for the Hermitian matrix algebras H_3(D, −) for alternative D with nuclear involution, and for the algebras Jord(N, #, c) of the general Cubic Construction, and that the Freudenthal and Tits norms are sharped.
1. Jordan matrix algebras

Here we will grind out the case of a 3×3 Jordan matrix algebra III.3.7 whose coordinate algebra is alternative with nuclear involution. Later we will consider the case of scalar involution in the Freudenthal Construction III.4.5, and show it is a sharped cubic construction as in III.4.2. Our argument requires a few additional facts about associators in alternative algebras and general matrix algebras.
Lemma C.1 (Associator Facts). (1) If A is an arbitrary linear algebra, then the algebra A⁺ with brace product {x,y} := xy + yx has associator
[x,y,z]⁺ = ([x,y,z] − [z,y,x]) + ([y,x,z] − [z,x,y]) + ([x,z,y] − [y,z,x]) + [y,[x,z]].
(2) Any alternative algebra D satisfies the identity
[w,[x,y,z]] = [w,x,yz] + [w,y,zx] + [w,z,xy].
proof. (1) We compute
[x,y,z]⁺ = (xy + yx)z + z(xy + yx) − x(yz + zy) − (yz + zy)x
= ([x,y,z] + x(yz)) + ([y,x,z] + y(xz)) + ((zx)y − [z,x,y]) + ((zy)x − [z,y,x]) − x(yz) − ((xz)y − [x,z,y]) − ([y,z,x] + y(zx)) − (zy)x
= [x,y,z] + [y,x,z] − [z,x,y] − [z,y,x] + [x,z,y] − [y,z,x] + (y(xz − zx) − (xz − zx)y),
and the last parenthesis is the commutator [y,[x,z]].
(2) We compute
[w,[x,y,z]] − [w,x,yz] − [w,y,zx] − [w,z,xy]
= w[x,y,z] − [x,y,z]w − [x,yz,w] + [xy,z,w] − [w,y,zx]   (by alternativity)
= w[y,z,x] + x[y,z,w] − [x,y,zw] − [w,y,zx]   (by the Teichmuller Identity III.21.1(3))
= 0   (linearizing x → x,w in the Left Bumping Formula III.21.1(2) x[y,z,x] = [x,y,zx]). □
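Both parts of the lemma can be stress-tested numerically in the octonions, the motivating nonassociative example. The sketch below is our own illustration, not from the text; the helper names om, cj, asc, com, br are ad hoc. It builds O by Cayley-Dickson doubling and checks identity (1) — which, as the proof shows, holds in any linear algebra — and identity (2) on random elements:

```python
import numpy as np

def om(x, y):
    """Cayley-Dickson product of length-2^n real vectors (reals -> ... -> octonions)."""
    if len(x) == 1:
        return x * y
    h = len(x) // 2
    a, b, c, d = x[:h], x[h:], y[:h], y[h:]
    # (a,b)(c,d) = (ac - conj(d)b, da + b conj(c))
    return np.concatenate([om(a, c) - om(cj(d), b), om(d, a) + om(b, cj(c))])

def cj(x):
    """Conjugation: negate every imaginary component."""
    z = -x
    z[0] = x[0]
    return z

asc = lambda a, b, c: om(om(a, b), c) - om(a, om(b, c))   # associator [a,b,c]
com = lambda a, b: om(a, b) - om(b, a)                    # commutator [a,b]
br  = lambda a, b: om(a, b) + om(b, a)                    # brace {a,b}

rng = np.random.default_rng(0)
x, y, z, w = (rng.standard_normal(8) for _ in range(4))

# (1): brace associator of A+ in terms of associators of A (any linear algebra)
lhs = br(br(x, y), z) - br(x, br(y, z))
rhs = (asc(x, y, z) - asc(z, y, x)) + (asc(y, x, z) - asc(z, x, y)) \
    + (asc(x, z, y) - asc(y, z, x)) + com(y, com(x, z))
assert np.allclose(lhs, rhs)

# (2): [w,[x,y,z]] = [w,x,yz] + [w,y,zx] + [w,z,xy] in the alternative octonions
lhs2 = com(w, asc(x, y, z))
rhs2 = asc(w, x, om(y, z)) + asc(w, y, om(z, x)) + asc(w, z, om(x, y))
assert np.allclose(lhs2, rhs2)
```

Any consistent Cayley-Dickson sign convention produces an alternative 8-dimensional algebra, so the check does not depend on the particular doubling formula chosen.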
Lemma C.2 (Matrix Associator Facts). (1) In the algebra Mn(A) of n×n matrices over an arbitrary linear algebra A, the matrix associator for A = (a_ij), B = (b_kl), C = (c_pq) reduces to algebra associators for the entries a_ij, b_kl, c_pq ∈ A:
[A,B,C] = Σ_{r,s} ( Σ_{j,k} [a_rj, b_jk, c_ks] ) E_rs.
(2) In the algebra H3(D,−) of 3×3 hermitian matrices X = X̄^tr over an alternative algebra D with nuclear involution, the diagonal rr-coordinates and off-diagonal rs-coordinates (r,s,t a cyclic permutation of 1,2,3) of associators are given by
(i) [A,B,C]_rr = [a_rs, b_st, c_tr] + [c_rs, b_st, a_tr] = [C,B,A]_rr;
(ii) [A²,A]_rr = [A,A,A]_rr = 2a   (a := [a12, a23, a31]);
(iii) [A,B,C]_rs = −[a_rs, b_rs, c_rs] − [a_rs, b_st, c_st] − [a_tr, b_tr, c_rs];
(iv) [A,A,A]_rs = 0.
proof. (1) Writing A = Σ_ij a_ij E_ij, B = Σ_kl b_kl E_kl, C = Σ_pq c_pq E_pq we have [A,B,C] = Σ_{i,j,k,l,p,q} [a_ij, b_kl, c_pq] E_ij E_kl E_pq = Σ_{i,j,p,q} [a_ij, b_jp, c_pq] E_iq = Σ_{r,s} Σ_{j,k} [a_rj, b_jk, c_ks] E_rs in terms of the standard (associative) matrix units E_rs.
(2) For hermitian matrices we have a_ji = ā_ij, and since by hypothesis the symmetric elements a = ā, in particular all traces a + ā ∈ H(D,−), lie in the nucleus of D, they vanish from associators: any associator involving a_ii vanishes, and any a_ji in an associator may be replaced by −a_ij since [... ā ...] = [... (a + ā) − a ...] = −[... a ...].
For the diagonal entry we may (by symmetry in the indices) assume r = 1. Then by (1) [A,B,C]_11 = Σ_{j,k} [a_1j, b_jk, c_k1]; any terms with j = 1, j = k, or k = 1 vanish since associators with diagonal terms vanish, so 1,j,k are distinct indices from among 1,2,3, leading to only two possibilities j = 2, k = 3 and j = 3, k = 2. Thus [A,B,C]_11 = [a12, b23, c31] + [a13, b32, c21] = [a12, b23, c31] − [a31, b23, c12] (replacing a_ji by −a_ij etc.) = [a12, b23, c31] + [c12, b23, a31] (by alternativity), as claimed in (i), and this expression is clearly symmetric in A, C. In particular, when A = B = C we get 2[a12, a23, a31] = 2a as in (ii) (note that a is invariant under cyclic permutation, so a = [a_rs, a_st, a_tr] for any cyclic permutation r,s,t of 1,2,3).
For the off-diagonal entry we can again assume r = 1, s = 2 by symmetry. Then by (1) [A,B,C]_12 = Σ_{j,k} [a_1j, b_jk, c_k2], where again we can omit terms with j = 1, j = k, or k = 2. When j = 2, then k ≠ j,2 allows two possibilities k = 1,3, but when j = 3, then k ≠ j,2 allows only one possibility k = 1. Thus the sum reduces to [a12, b21, c12] + [a12, b23, c32] + [a13, b31, c12] = −[a12, b12, c12] − [a12, b23, c23] − [a31, b31, c12] (again replacing a_ji by −a_ij), as claimed in (iii). In particular, when A = B = C we get −[a12, a12, a12] − [a12, a23, a23] − [a31, a31, a12] = 0 as in (iv), since by alternativity each associator with a repeated term vanishes. □
Now we can finally establish the general theorem creating a functor from the category of alternative algebras with nuclear involutions to the category of Jordan algebras.
Theorem C.3 (3×3 Coordinate Theorem). A matrix algebra H3(D,−) under the bullet product A • B = ½{A,B} is a Jordan algebra iff the coordinate algebra (D,−) is alternative with nuclear involution.
proof. We saw in the Jordan Coordinates Theorem III.14.1 that the condition on the coordinate algebra was necessary; the sticking point is showing that it is also sufficient. Since commutativity (JAx1) of the bullet product is obvious, it remains to prove the Jordan Identity (JAx2) for the brace associator: [A,B,A²]⁺ = 0 for all hermitian A, B. To prove that its diagonal entries [A,B,A²]_rr and off-diagonal entries [A,B,A²]_rs all vanish, it suffices by symmetry in the indices to prove these for r = 1, s = 2. Using the formula for the brace associator in Alternative Fact C.1(1), for C = A² we have
(1) [A,B,C]⁺ = ([A,B,C] − [C,B,A]) + ([B,A,C] − [C,A,B]) + ([A,C,B] − [B,C,A]) + [B,[A,C]].
For the 11-component, the paired associator terms in (1) disappear by the symmetry in Matrix Associator Fact C.2(2i), and since by Matrix Associator Fact C.2(2ii),(2iv) the matrix [A,C] = [A,A²] = −[A,A,A] is diagonal with associator entries,
(2) [A,C] = −[A,A,A] = −2a·1₃,   a = [a12, a23, a31],
so the 11-component is [A,B,A²]⁺_11 = [B,[A,C]]_11 = [b11, −2a] ∈ [Nuc(D), [D,D,D]] = 0 by Nuclear Slipping III.21.3(1i).
For the 12-component in (1), let us separate the positive and negative summands in the above. From Matrix Associator Fact C.2(2iii),
[A,B,C]_12 + [B,A,C]_12 + [A,C,B]_12
= −([a12,b12,c12] + [a12,b23,c23] + [a31,b31,c12]) − ([b12,a12,c12] + [b12,a23,c23] + [b31,a31,c12]) − ([a12,c12,b12] + [a12,c23,b23] + [a31,c31,b12])
= −[b12,a23,c23] − [b12,a12,c12] − [b12,a31,c31] =: t12 (by alternativity).
This is clearly skew in A and C, so the negative terms sum to +t12, so their difference is 2t12. The remaining term is the commutator [B,[A,C]]_12 = −[b12, 2a] (recall (2)). So far we have
[A,B,A²]⁺_12 = 2(−[b12,a] + t12) = −2([b12,[a12,a23,a31]] + [b12,a23,c23] + [b12,a12,c12] + [b12,a31,c31]).
Now we have to look more closely at the entries c_ij of C = A². We have
c_ij = a_ii a_ij + a_ij a_jj + a_ik a_kj = a_ii a_ij + a_ij a_jj + (a_jk a_ki)‾.
Since nuclear a_ii slip out of associators by Nuclear Slipping III.21.3(1i), and associators with repeated terms a_ij, a_ij vanish by alternativity, only the (a_jk a_ki)‾ terms survive; replacing x̄ in the associators by −x leads to
[A,B,A²]⁺_12 = −2([b12,[a12,a23,a31]] − [b12,a23,a31a12] − [b12,a12,a23a31] − [b12,a31,a12a23]).
But this vanishes by Alternative Associator Fact C.1(2). Thus all entries of the brace associator [A,B,A²]⁺ vanish, and the Jordan identity (JAx2) holds. □
This establishes the functor (D,−) → H3(D,−) from the category of unital alternative algebras with nuclear involution to the category of unital Jordan algebras mentioned after the Hermitian Matrix Theorem III.3.7. Now we turn to the Jordan matrix algebras coordinatized by alternative algebras with scalar involution, where the Jordan algebra is a cubic factor determined by a cubic norm form.
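The Coordinate Theorem can be sanity-checked numerically in its most famous case D = O, whose standard involution is scalar (hence nuclear), so that H3(O) is the 27-dimensional reduced Albert algebra. The sketch below is our own illustration (the helpers om, cj, mm, herm are ad hoc names): it represents elements of H3(O) as 3×3×8 arrays and tests the Jordan identity (JAx2) on random hermitian matrices, while confirming that the underlying matrix multiplication is genuinely nonassociative.

```python
import numpy as np

def om(x, y):                      # octonion product (Cayley-Dickson, length-8 vectors)
    if len(x) == 1:
        return x * y
    h = len(x) // 2
    a, b, c, d = x[:h], x[h:], y[:h], y[h:]
    return np.concatenate([om(a, c) - om(cj(d), b), om(d, a) + om(b, cj(c))])

def cj(x):                         # octonion conjugate
    z = -x
    z[0] = x[0]
    return z

def mm(A, B):                      # product of 3x3 matrices with octonion entries
    C = np.zeros((3, 3, 8))
    for i in range(3):
        for j in range(3):
            for k in range(3):
                C[i, j] += om(A[i, k], B[k, j])
    return C

def herm(rng):                     # random X in H3(O): real diagonal, X_ji = conj(X_ij)
    X = rng.standard_normal((3, 3, 8))
    for i in range(3):
        X[i, i, 1:] = 0
        for j in range(i + 1, 3):
            X[j, i] = cj(X[i, j])
    return X

bullet = lambda A, B: (mm(A, B) + mm(B, A)) / 2

rng = np.random.default_rng(7)
A, B = herm(rng), herm(rng)
A2 = bullet(A, A)
# matrix multiplication over O is NOT associative ...
assert not np.allclose(mm(mm(A, B), A), mm(A, mm(B, A)))
# ... yet the Jordan identity (JAx2) holds: (A • B) • A^2 = A • (B • A^2)
assert np.allclose(bullet(bullet(A, B), A2), bullet(A, bullet(B, A2)))
```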
2. The general construction

We begin by verifying that the general construction Jord(N,#,c) creates unital degree 3 Jordan algebras from any sharped cubic form (N,#,c) on a module over a general ring of scalars. Let us recall bygone concepts and results that we have already defined or established in III.4 for the general setting, independent of nondegeneracy. We will retain the numbering of Chapter 4 for all previously-established results; new results will be indicated by numbers with C.
Definition C.4 (Bygones). A basepoint N(c) = 1 for a cubic form determines trace linear and bilinear forms and a quadratic form
(4.1)(1a) T(x) := N(c;x), T(x,y) := T(x)T(y) − N(c;x;y), S(x) := N(x;c),
(4.1)(1b) S(x,y) = N(x;y;c) = T(x)T(y) − T(x,y) (Spur-Trace Formulas),
(4.1)(1c) T(x) = T(x,c) (c-Trace Formula),
(4.1)(1d) T(c) = S(c) = 3.
A sharp mapping for a cubic form N is a quadratic map # on X strictly satisfying
(4.1)(2a) T(x#, y) = N(x;y) (Trace-Sharp Formula),
(4.1)(2b) x## = N(x)x (Adjoint Identity),
(4.1)(2c) c#y = T(y)c − y (c-Sharp Identity).
A sharped cubic form (N,#,c) consists of a cubic form N with basepoint c together with a choice of sharp mapping #. The presence of ½ makes it easy to see that c-Sharp implies
(4.1)(2C) c# = c, since 2c# = c#c = T(c)c − c = 3c − c = 2c.
The Trace-Sharp Formula has many consequences (as in the Little Reassuring Argument of III.4.4):
(4.4)(1) T(x#z, y) = T(x, z#y) = N(x;y;z) (Sharp Symmetry),
(4.4)(5) S(x) = T(x#), S(x,y) = T(x#y) (Spur Formulas),
(4.4)(3) T(Ux y, z) = T(y, Ux z), T(x•y, z) = T(y, x•z) (U-Bullet-Symmetry).
The condition that the Adjoint Identity holds strictly is equivalent (assuming the Trace-Sharp Formula) to the following linearizations:
(4.4)(4) x##(x#y) = N(x)y + T(x#,y)x (Adjoint′ Identity),
(4.4)(4C) (x#y)# + x##y# = T(x#,y)y + T(y#,x)x (Adjoint″ Identity).
Now we go through the General Cubic Construction Theorem III.4.2 in all detail.
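Before wading into the general construction, the Bygones can be made concrete in the simplest sharped cubic form: X = M3(R), N = det, basepoint c = 1, sharp the classical adjugate, and T(a,b) = tr(ab). The sketch below is our own illustration (assuming only numpy); it checks Trace-Sharp, the Adjoint Identity, and c-Sharp numerically on random matrices.

```python
import numpy as np

rng = np.random.default_rng(0)
I = np.eye(3)
t = np.trace
N = np.linalg.det

def sharp(a):
    """Adjugate a# = a^2 - t(a)a + s(a)1: the sharp mapping for the cubic form N = det."""
    s = (t(a)**2 - t(a @ a)) / 2
    return a @ a - t(a)*a + s*I

sharp2 = lambda x, y: sharp(x + y) - sharp(x) - sharp(y)   # linearized sharp product x#y

x, y = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
c = I                              # basepoint, N(c) = 1

# Trace-Sharp (4.1)(2a): T(x#, y) = N(x;y) = d/dt det(x + t y)|_{t=0}
h = 1e-6
dN = (N(x + h*y) - N(x - h*y)) / (2*h)                     # numerical directional derivative
assert abs(t(sharp(x) @ y) - dN) < 1e-4

# Adjoint Identity (4.1)(2b): x## = N(x) x
assert np.allclose(sharp(sharp(x)), N(x)*x)

# c-Sharp Identity (4.1)(2c): c#y = T(y) c - y
assert np.allclose(sharp2(c, y), t(y)*c - y)
```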
Theorem C.5 (Cubic Construction). Any sharped cubic form (N,#,c) gives rise to a unital Jordan algebra Jord(N,#,c) with unit element c and U-operator
(4.2)(2b) Ux y := T(x,y)x − x##y.
The square and bilinear product are defined by
(4.2)(2c) x² := Ux c, {x,y} := U_{x,y}c, x•y := ½U_{x,y}c.
The sharp map and sharp product are related to the square and bilinear product by the Sharp Expressions
(4.2)(2a) x# = x² − T(x)x + S(x)c, x#y = {x,y} − T(x)y − T(y)x + S(x,y)c,
and all elements satisfy the Degree 3 Identity
(4.4)(1) x³ − T(x)x² + S(x)x − N(x)c = 0 (x³ := Ux x = x•x²), equivalently (4.4)(2) x•x# = N(x)c.
proof. The formulas (4.2)(2b,c) hold by definition. As in the Little Reassuring Argument of III.4.4, the Sharp Expressions (4.2)(2a) follow from c-Trace, Spur, and c-Sharp by the definition of square and U. The c-Trace and c-Sharp Identities and the identity (4.1)(2C) are precisely the conditions that c be the unit of the Jordan algebra Jord(N,#,c):
(C1) Uc y = y, {c,y} = 2y, c•y = y (Unitality),
since Uc y = T(c,y)c − c##y = T(y)c − c#y = y; similarly for brace unitality {c,y} := U_{c,y}c = T(y,c)c + T(c,c)y − (c#y)#c = T(y)c + 3y − (T(y)c − y)#c [using (4.1)(1d)] = T(y)c + 3y − 2T(y)c# + y#c = 2y; bullet-unitality follows by scaling by ½. Setting z = c and using c-Trace and unitality (C1), we can recover the bilinear from the linear trace:
(C2) T(x,y) = T(x•y) (Bullet Trace).
The Degree 3 Identity (4.4)(1-2), and even more the Jordan axiom, will require a series of (x#,x)-Identities for the various products of x# and x:
(C3) x#x# = [S(x)T(x) − N(x)]c − T(x)x# − T(x#)x,
(C4) S(x,x#) = S(x)T(x) − 3N(x),
(C5) x•x# = N(x)c, T(x,x#) = 3N(x),
(C6) Ux x# = N(x)x, Ux U_{x#} = N(x)² 1_J,
(C7) {x,x#,y} = 2N(x)y.
(Recall that in the nondegenerate case III.4.3 we carefully avoided (C4) by moving to the other side of an inner product!)
For (C3), using c-Sharp twice we have x##x = x##[T(x)c − c#x] = T(x)[T(x#)c − x#] − x##(x#c) = T(x)S(x)c − T(x)x# − [N(x)c + T(x#,c)x] [using the Spur Formula (4.4)(5) and Adjoint′ (4.4)(4)] = T(x)S(x)c − T(x)x# − N(x)c − T(x#)x [by c-Trace]. (C4) follows by taking the trace of (C3) (in view of the Spur Formula).
For (C5), we cancel 2 from 2x#•x = {x#,x} = x##x + T(x)x# + T(x#)x − S(x,x#)c [by the linearized Sharp Expressions (4.2)(2a)] = [S(x)T(x) − N(x)]c − [S(x)T(x) − 3N(x)]c [by (C3),(C4)] = 2N(x)c for the first part; the second follows by taking traces [using Bullet Trace (C2)], or directly from Trace-Sharp (4.1)(2a) and Euler.
For (C6), first Ux x# = T(x,x#)x − x##x# = 3N(x)x − 2(x#)# [by the second part of (C5)] = 3N(x)x − 2N(x)x = N(x)x by the Adjoint Identity (4.1)(2b). Then for the second part of (C6),
Ux U_{x#} y = Ux(T(x#,y)x# − x###y) = N(x)[T(x#,y)x − Ux(x#y)] [by (C6) and the Adjoint Identity]
= N(x)[T(x#,y)x − T(x,x#y)x + x##(x#y)] = N(x)[T(x#,y)x − T(x#x,y)x + N(x)y + T(x#,y)x] [by Sharp Symmetry (4.4)(1) and Adjoint′ (4.4)(4), noting x#x = 2x#]
= N(x)N(x)y.
(C7) follows from {x,x#,y} = T(x,x#)y + T(x#,y)x − x##(x#y) = [3N(x) − N(x)]y [by the last part of (C5) and Adjoint′] = 2N(x)y.
(C5) is just (4.4)(2), which by (4.2)(2a) yields the Degree 3 Identity in terms of x³ := x•x²; but the Identity holds with x³ = Ux x as well (so the two cubes must coincide), since by the Sharp Expression (4.2)(2a)
Ux x − T(x)x² = T(x,x)x − x##x − T(x)[x# + T(x)x − S(x)c]
= [T(x)² − 2S(x)]x − ([S(x)T(x) − N(x)]c − T(x)x# − S(x)x) − T(x)x# − T(x)²x + T(x)S(x)c
= −S(x)x + N(x)c
by Spur-Trace (4.1)(1b) (which gives T(x,x) = T(x)² − 2S(x)), (C3), and the Sharp Expression (4.2)(2a). Thus Jord(N,#,c) is a unital degree 3 algebra.
The key to Jordanity is the identity
(C8) {x,x,y} = {x²,y},
which metamorphoses out of the linearization x → x,y of (C3) using Trace-Sharp:
0 = x#(x#y) + y#x# − [S(x)T(y) + S(x,y)T(x) − T(x#,y)]c + T(y)x# + T(x)(x#y) + S(x,y)x + S(x)y.
Substituting the Sharp Expressions (4.2)(2a) for x# and x#y, and trading S's for T's via Spur-Trace (4.1)(1b) (twice, once in each direction), the Spur Formula (4.4)(5), Bullet Trace (C2), and S(c,y) = T(c)T(y) − T(c,y) = 2T(y), the left side regroups as
[x²#y + T(y)x² + T(x²)y − S(x²,y)c] − [(x#y)#x − T(y,x)x − T(x,x)y] = {x²,y} − U_{x,y}x = {x²,y} − {x,x,y},
using the Sharp Expressions (4.2)(2a) for the first bracket and the definition (4.2)(2b) of U for the second.
This identity (C8) has some immediate consequences relating the U-operator and triple product to the bilinear products:
(C9) {x,z,y} + {z,x,y} = {{x,z},y};
(C10) 2Ux = Vx² − V_{x²}, Ux = 2Lx² − L_{x²};
(C11) {x#,x,y} = 2N(x)y;
(C12) x#(x##y) = N(x)y + T(x,y)x# (Dual Adjoint′).
(C9) is simply the linearization x → x,z of (C8), and (C10) follows from this by interpreting {x,y,x} = {{y,x},x} − {y,x,x} as an operator on y; note that this means the U defined from the sharp by (4.2)(2b) is the usual Jordan U-operator III.1.9(2) defined from the brace or bullet. (C11) follows via (C9): {x#,x,y} = {{x,x#},y} − {x,x#,y} = {2N(x)c,y} − 2N(x)y [by (C5) and (C7)] = 4N(x)y − 2N(x)y = 2N(x)y by unitality (C1). For (C12), (x##y)#x = T(x#,x)y + T(y,x)x# − {x#,x,y} [by definition of U] = 3N(x)y + T(x,y)x# − 2N(x)y [by (C5), (C11)] = N(x)y + T(x,y)x#.
Now we are over the top, and the rest is downhill: to establish the Jordan identity (JAx2) that Vx and V_{x²} commute, it suffices to prove that Vx and V_{x#} commute, since V_{x²} is a linear combination of V_{x#}, Vx, Vc = 2·1_J by the Sharp Expressions (4.2)(2a) and V-unitality (C1). But using linearized (C8) twice we see
[Vx, V_{x#}]y = {x,{x#,y}} − {x#,{x,y}} = ({x,x#,y} + {x,y,x#}) − ({x#,x,y} + {x#,y,x}) = {x,x#,y} − {x#,x,y} = 0
by (C7), (C11). Thus any sharped cubic form gives rise to a unital degree 3 Jordan algebra Jord(N,#,c). □
The reader should pause and breathe a sigh of relief, then take another deep breath, because there is more tough slogging to establish the composition formula in this general situation.
Theorem C.6 (Composition). If N is a sharped cubic form, its sharp mapping is always multiplicative
(4.2)(3a) (Ux y)# = U_{x#} y#,
and if Φ is a faithful ring of scalars (e.g. if Φ has no 3-torsion or no nilpotents), then N permits composition with the U-operator and the sharp:
(4.2)(3b) N(1) = 1, N(Ux y) = N(x)²N(y), N(x#) = N(x)².
proof. The multiplicative property of the sharp follows by direct calculation, using the various Adjoint Identities:
(Ux y)# − U_{x#} y# = (T(x,y)x − x##y)# − [T(x#,y#)x# − x###y#] [by definition (4.2)(2b)]
= T(x,y)²x# − T(x,y)x#(x##y) + (x##y)# − T(x#,y#)x# + x###y#
= T(x,y)²x# − T(x,y)[N(x)y + T(x,y)x#] + [−x###y# + T(x##,y)y + T(y#,x#)x#] − T(x#,y#)x# + x###y#
= 0
[by Dual Adjoint′ (C12), Adjoint″ (4.4)(4C), and the Adjoint Identity (4.1)(2b)].
When Φ has no 3-torsion, all is smooth as silk: the norm permits composition with sharp and U, since [using (C5) twice] 3N(x#) = T((x#)#, x#) = N(x)T(x,x#) = 3N(x)², and 3N(Ux y) = T((Ux y)#, Ux y) = T(U_{x#}y#, Ux y) [by the above multiplicativity of the sharp] = T(Ux U_{x#}y#, y) [by U-Symmetry (4.4)(3)] = N(x)²T(y#,y) [by the second part of (C6)] = 3N(x)²N(y).
When Φ has no 3-torsion or nilpotents then it is automatically faithful: if αc = 0 then 3α = T(αc) = 0 and α³ = N(αc) = 0. We can't prove that N(x#) equals N(x)² in general (since it isn't true — see III.4 Problem 5), only that their difference is a scalar which kills c (hence kills everybody, αJ = α(c•J) = (αc)•J = 0), and will be condemned to vanish by our faithful scalar hypothesis. To witness the murder,
[N(x)² − N(x#)]c = {[N(x)T(x)S(x) − N(x#)]c − S(x)N(x)x − N(x)T(x)x#} − N(x){[S(x)T(x) − N(x)]c − T(x)x# − S(x)x}
= {[S(x#)T(x#) − N(x#)]c − T(x#)x## − T(x##)x#} − N(x){[S(x)T(x) − N(x)]c − T(x)x# − S(x)x}
[using the Adjoint Identity (4.1)(2b), noting S(x#) = T(x##) = N(x)T(x) and T(x#) = S(x) by Spur (4.4)(5)]
= x##x## − N(x)(x#x#) [applying (C3) to x# and to x]
= (x## − N(x)x)#x# = 0 [by the Adjoint Identity (4.1)(2b)].
Whenever sharp composition holds strictly, it implies U-composition. First, equating coefficients of t³ in N((x + ty)#) = N(x + ty)² gives
N(x#y) = T(x#,y)T(y#,x) − N(x)N(y),
since on the left N(x# + t(x#y) + t²y#) produces N(x#y) + N(x#; x#y; y#) [the only combinations of 3 powers 1,t,t² having total degree 3 are t,t,t and 1,t,t²] = N(x#y) + T(x##(x#y), y#) [by linearized Trace-Sharp (4.1)(2a)] = N(x#y) + T(N(x)y + T(x#,y)x, y#) [by Adjoint′ (4.4)(4)] = N(x#y) + 3N(x)N(y) + T(x#,y)T(y#,x) [by (C5)], while on the right (N(x) + tN(x;y) + t²N(y;x) + t³N(y))² produces 2N(x;y)N(y;x) + 2N(x)N(y) [the only combinations of 2 powers of 1,t,t²,t³ having total degree 3 are t,t² and 1,t³] = 2T(x#,y)T(y#,x) + 2N(x)N(y) [by Trace-Sharp (4.1)(2a)].
From this we can derive U-composition directly (using Trace-Sharp repeatedly, and setting n = N(x), t = T(x,y) for convenience):
N(Ux y) = N(tx − x##y) = t³N(x) − t²N(x; x##y) + tN(x##y; x) − N(x##y)
= t³N(x) − t²T(x#, x##y) + tT((x##y)#, x) − N(x##y)
= t³N(x) − t²T(x##x#, y) + tT([−x###y# + T(x##,y)y + T(y#,x#)x#], x) − [T(x##,y)T(y#,x#) − N(x#)N(y)]
[using Adjoint″ (4.4)(4C) and the above linearization of sharp composition, with x replaced by x#]
= t³n − t²T(2nx, y) + t[−nT(x#x, y#) + nT(x,y)T(y,x) + T(y#,x#)T(x#,x)] − [nT(x,y)T(y#,x#) − n²N(y)]
[using the Adjoint Identity (4.1)(2b), Sharp Symmetry (4.4)(1), and sharp composition]
= t³n − 2nt³ − 2ntT(x#,y#) + nt³ + 3ntT(y#,x#) − ntT(y#,x#) + n²N(y) [using (C5)]
= n²N(y) = N(x)²N(y). □
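In the special case of the associative model A⁺ = M3(R)⁺ — where Ux y = xyx and the sharp is the classical adjugate — the composition formulas of Theorem C.6 reduce to familiar determinant identities. The numpy sketch below is our own illustration, checking them on random matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
I = np.eye(3)
t = np.trace
n = np.linalg.det
adj = lambda m: m @ m - t(m)*m + (t(m)**2 - t(m @ m))/2 * I   # sharp = adjugate

x, y = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
U = lambda a, b: a @ b @ a        # U-operator of the special Jordan algebra M3(R)+

assert np.allclose(adj(U(x, y)), U(adj(x), adj(y)))           # (4.2)(3a): (Ux y)# = U_{x#} y#
assert np.isclose(n(U(x, y)), n(x)**2 * n(y))                 # N(Ux y) = N(x)^2 N(y)
assert np.isclose(n(adj(x)), n(x)**2)                         # N(x#) = N(x)^2
```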
This finishes the involved calculations required to establish the general cubic construction. We now turn to the Freudenthal and Tits Constructions, and show they produce Jordan algebras because they are special cases of the general construction.
3. The Freudenthal Construction

Our first, and most important, example of a sharped cubic form comes from the algebra of 3×3 hermitian matrices over an alternative algebra with central involution. Replacing the original scalars by the ∗-center, we may assume the original involution is actually a scalar involution. For the octonions with standard involution this produces the reduced Albert algebras. We already know that the resulting matrix algebra will be Jordan by the 3×3 Coordinate Theorem C.3, but in this section we will give an independent proof showing that it arises as a Jord(N,#,c): the scalar-valued norm n(a) on the coordinates D allows us to create a Jordan norm N(x) on the matrix algebra.
Recall the results of the Central Involution III.21.4 and Moufang Lemma III.21.1(2): if D is a unital alternative algebra over Φ with involution a ↦ ā such that there are quadratic and linear norm and trace forms n,t : D → Φ with n(a)1 = aā, t(a)1 = a + ā ∈ Φ1, then we have
(S1) t(ā) = t(a), n(ā) = n(a) (Bar Invariance),
(S2) t(ab) = n(a,b̄) = t(ba) (Trace Commutativity),
(S3) t((ab)c) = t(a(bc)) (Trace Associativity),
(S4) n(ab) = n(a)n(b) (Norm Composition),
(S5) ā(ab) = n(a)b = (ba)ā (Kirmse Identity),
(S6) a(bc)a = (ab)(ca) (Middle Moufang).
We will establish the Freudenthal construction for the untwisted case H3(D,−); there is no point, other than unalloyed masochism, in proving the twisted case H3(D,Γ), since this arises as the Γ-isotope of the untwisted form, and we know in general (by Cubic Factor Isotopes III.7.5) that isotopes of cubic factors are again cubic. Thus we can avoid being distracted by swarms of gammas.
Theorem C.7 (Freudenthal Construction). Let D be a unital alternative algebra over Φ with scalar involution. Then the hermitian matrix algebra H3(D,−) is a cubic factor Jord(N,#,c) whose Jordan structure is determined by the basepoint c, cubic form N, trace T, and sharp # given by
c = 1 = e1 + e2 + e3,
N(x) := α1α2α3 − Σ_k α_k n(a_k) + t(a1a2a3),
T(x) = Σ_k α_k,   T(x,y) = Σ_k α_kβ_k + Σ_k n(a_k, b_k),
x# = Σ_k (α_iα_j − n(a_k)) e_k + Σ_k (ā_j ā_i − α_k a_k)[ij],
where
x = Σ_{i=1}^{3} α_i e_i + Σ_{i=1}^{3} a_i[jk],   y = Σ_{i=1}^{3} β_i e_i + Σ_{i=1}^{3} b_i[jk],
for α_i, β_i ∈ Φ, a_i, b_i ∈ D, with box notation e_i := E_ii, d[jk] := dE_jk + d̄E_kj, and (ijk) always a cyclic permutation of (123).
proof. Certainly c is a basepoint, N(c) = 1. We verify the conditions (4.1)(2abc) for a sharped cubic form. Observe
N(x;y) = β1α2α3 + α1β2α3 + α1α2β3 − Σ_k α_k n(a_k, b_k) − Σ_k β_k n(a_k) + t(b1a2a3 + a1b2a3 + a1a2b3)
= Σ_k [α_iα_j − n(a_k)]β_k + Σ_k n((ā_j ā_i − α_k a_k), b_k)
by trace commutativity and associativity (S2),(S3). In this relation, if we set x = c (α_k = 1, a_k = 0) we get N(c;y) = Σ_k [1·1 − 0]β_k + 0, so
T(y) = Σ_k β_k
as indicated, and if instead we linearize x → x,c we get
N(x;c;y) = Σ_k (α_i + α_j)β_k − Σ_k n(a_k, b_k) = T(x)T(y) − Σ_k α_kβ_k − Σ_k n(a_k, b_k).
Thus T(x,y) := T(x)T(y) − N(x;c;y) = Σ_k α_kβ_k + Σ_k n(a_k, b_k) as claimed. Comparing this formula for T(x,y) with the above formula for N(x;y) shows N(x;y) = T(x#, y) for the adjoint x# := Σ_k (α_iα_j − n(a_k)) e_k + Σ_k (ā_j ā_i − α_k a_k)[ij], and the Trace-Sharp Formula (4.1)(2a) holds.
To verify the Adjoint Identity (4.1)(2b), let x# = Σ_{i=1}^{3} γ_i e_i + Σ_{i=1}^{3} b_i[jk]. Then by definition of sharp we have x## = Σ_{i=1}^{3} δ_i e_i + Σ_{i=1}^{3} d_i[jk], where the
diagonal entries are given by
δ_k = γ_iγ_j − n(b_k)
= (α_jα_k − n(a_i))(α_kα_i − n(a_j)) − [n(ā_j ā_i) − α_k n(ā_j ā_i, a_k) + α_k² n(a_k)]
= α_k[α_iα_jα_k − α_i n(a_i) − α_j n(a_j) − α_k n(a_k) + t(a1a2a3)] + [n(a_i)n(a_j) − n(a_ia_j)]
= N(x)α_k
[by (S1), (S2), (S4), since n(ā_j ā_i, a_k) = t(ā_j ā_i ā_k) = t(a_k a_i a_j) = t(a1a2a3) and n(ā_j ā_i) = n(a_i)n(a_j) = n(a_ia_j)],
and the off-diagonal entries are given by
d_k = b̄_j b̄_i − γ_k b_k = (α1α2α3 − Σ_ℓ α_ℓ n(a_ℓ) + t(a1a2a3)) a_k = N(x)a_k,
expanding as before and simplifying by the Kirmse Identity (S5), Middle Moufang (S6), Trace Associativity (S3), and the consequence a(bc)a + n(a)(bc)‾ = [a(bc) + (bc)‾ā]a = t(abc)a. This establishes the Adjoint Identity x## = N(x)x.
Finally, to establish the c-Sharp Identity (4.1)(2c), linearize x → c,y in the definition of # to get
c#y = Σ_k (β_i + β_j − 0)e_k + Σ_k (0 − b_k)[ij] = Σ_k (β_i + β_j + β_k)e_k − Σ_k β_k e_k − Σ_k b_k[ij] = T(y)c − y.
Thus, by the Cubic Construction Theorem C.5, the space H3(D,−) carries the structure of a Jordan algebra Jord(N,#,c), and in the Freudenthal Construction Theorem III.4.5 we verified that this Jordan structure determined by the sharped norm form coincides with that of the Jordan matrix algebra. □

4. The Tits Construction

The second example of a sharped norm form leads to a construction, due to Jacques Tits, of Jordan algebras of Cubic Form Type out of degree 3 associative algebras. These algebras need not be reduced, and provide our first examples of exceptional division algebras. A.A. Albert was the first to construct a (very complicated) example of an exceptional division algebra. Tits' beautiful method is both easy to understand and completely general: all Albert algebras over a field arise by the Tits First or Second Construction. We begin by describing the associative setting for these constructions.
Definition C.8 (Degree 3). An associative algebra A of degree 3 over Φ is one with a cubic norm form n satisfying the following three axioms.
First, the algebra strictly satisfies the generic degree 3 equation
(A1) a³ − t(a)a² + s(a)a − n(a)1 = 0   (t(a) := n(1;a), s(a) := n(a;1), n(1) = 1).
In terms of the usual adjoint a# := a² − t(a)a + s(a)1, this can be rewritten as
(A1′) aa# = a#a = n(a)1.
Secondly, the adjoint is a sharp mapping for n,
(A2) n(a;b) = t(a#, b)   (t(a,b) := t(a)t(b) − n(1;a;b)).
Finally, the trace bilinear form is the linear trace of the associative product,
(A3) t(a,b) = t(ab).
Axiom (A3) can be rewritten in terms of the linearization of the quadratic form s,
(A3′) s(a,b) = t(a)t(b) − t(ab),
since by symmetry always s(a,b) = n(a;b;1) = n(1;a;b) = t(a)t(b) − t(a,b). From the theory of generic norms, it is known that these conditions are met when A is a finite-dimensional separable associative algebra of degree 3 over a field Φ; in that case the algebras are just the forms of the split matrix algebras M3(Φ̄) or M1(Φ̄) ⊕ M2(Φ̄) or M1(Φ̄) ⊕ M1(Φ̄) ⊕ M1(Φ̄) for Φ̄ the algebraic closure of Φ, and the axioms (A1,2,3) are immediate consequences of the Hamilton-Cayley Theorem and properties of the usual adjoint, trace, and determinant. We want to be able to construct Jordan algebras of Cubic Form Type over general rings of scalars, and the above axioms are what we will need. These immediately imply other useful relations.
Lemma C.9. Any cubic form on a unital associative algebra with n(1) = 1 automatically satisfies
(A4) t(1) = s(1) = 3, 1# = 1,
(A5) s(a,1) = 2t(a), 1#a = t(a)1 − a   (a#b := (a+b)# − a# − b#).
If (A2-3) hold then
(A6) s(a) = t(a#), 2s(a) = t(a)² − t(a²).
When n is a cubic norm satisfying (A1-3) then we have
(A7) a## = n(a)a,
(A8) aba = t(a,b)a − a##b,
(A9) (ab)# = b#a#,
(A10) n(ab)1 = n(a)n(b)1, t(ab) = t(ba), t(abc) = t(bca).
proof. (A4): t(1) = s(1) = n(1;1) = 3n(1) = 3 by Euler's Theorem, so 1# = 1² − t(1)1 + s(1)1 = 1 − 3 + 3 = 1.
(A5): By symmetry s(a,1) = n(a;1;1) = n(1;1;a) = 2n(1;a) = 2t(a). Then 1#a = 1a + a1 − t(1)a − t(a)1 + s(a,1)1 = 2a − 3a − t(a)1 + 2t(a)1 = −a + t(a)1.
(A6): When (A2-3) hold, setting b = 1 yields s(a) := n(a;1) = t(a#,1) = t(a#), so taking the trace of the definition of the adjoint gives s(a) = t(a²) − t(a)² + 3s(a), whence 2s(a) = t(a)² − t(a²).
The next three formulas require a bit of effort. It will be convenient to use the bar mapping ā := t(a)1 − a (though, unlike the quadratic form case, this map is not an involution, nor of period 2), so we may abbreviate the adjoint by
a# = s(a)1 − aā.
We will make use twice of the fact
z = t(z)1 ⟹ z = 0,
which follows by taking traces to get t(z) = 3t(z), hence 2t(z) = 0, hence the existence of ½ insures t(z) = 0 and z = 0.
Turning to (A7) and (A8), the element z := a## − n(a)a of (A7) has
z = (a#)² − t(a#)a# + s(a#)1 − n(a)a
= a#[s(a)1 − aā] − s(a)a# + t(a##)1 − n(a)a [using (A6) twice]
= −n(a)[ā + a] + t(a##)1 [using (A1′)]
= t(a## − n(a)a)1 = t(z)1,
so z = 0. For (A8) we linearize the degree 3 equation (A1) and use (A2) to get (writing {a,b} := ab + ba as usual)
aba = (−{a²,b} + t(a){a,b}) + t(b)a² − s(a)b − s(a,b)a + t(a#,b)1
= −{[a² − t(a)a + s(a)1], b} + 2s(a)b + t(b)[a# + t(a)a − s(a)1] − s(a)b − [t(a)t(b) − t(a,b)]a + [t(a#)t(b) − s(a#,b)]1
[using (A3,3′) to switch between the bilinear forms s,t]
= −{a#,b} + t(a#)b + t(b)a# − s(a#,b)1 + t(a,b)a [by (A6) for the s(a)-terms]
= −a##b + t(a,b)a.
For (A9),
(ab)# = (ab)(ab) − t(ab)(ab) + s(ab)1 [by definition]
= [aba − t(a,b)a]b + s(ab)1 [by (A3)]
= −[a##b]b + t((ab)#)1 [by (A8),(A6)]
= [b#a# − t(b#,a#)1] + t((ab)#)1 [linearizing b → b,a# in (A1′) b#b = n(b)1, and using (A2)]
= [b#a# − t(b#a#)1] + t((ab)#)1 [by (A3)],
so the element z = (ab)# − b#a# has z = t(z)1 and again z = 0.
The first part of (A10) follows from (A1′) and (A9): n(ab)1 = (ab)(ab)# = (ab)(b#a#) = a(bb#)a# = a(n(b)1)a# = n(b)aa# = n(a)n(b)1; the last two parts follow from (A3) and the symmetry of the trace bilinear form. □
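All of (A1)-(A10) can be checked concretely for the prototype A = M3(R) with n = det, where the adjoint is the classical adjugate. The sketch below is our own illustration (assuming only numpy), verifying a sample of the axioms and consequences on random matrices:

```python
import numpy as np

rng = np.random.default_rng(2)
I = np.eye(3)
a, b = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))

t = lambda m: np.trace(m)
n = np.linalg.det
s = lambda m: (t(m)**2 - t(m @ m)) / 2              # 2s(a) = t(a)^2 - t(a^2), cf. (A6)
adj = lambda m: m @ m - t(m)*m + s(m)*I             # a# = a^2 - t(a)a + s(a)1
sharp2 = lambda u, v: adj(u + v) - adj(u) - adj(v)  # linearized sharp product u#v

# (A1): the generic degree 3 equation (Cayley-Hamilton for 3x3 matrices)
assert np.allclose(a @ a @ a - t(a)*(a @ a) + s(a)*a - n(a)*I, 0)
# (A1'): a a# = a# a = n(a) 1
assert np.allclose(a @ adj(a), n(a)*I)
# (A7): a## = n(a) a
assert np.allclose(adj(adj(a)), n(a)*a)
# (A8): aba = t(a,b) a - a# # b, where t(a,b) = t(ab) by (A3)
assert np.allclose(a @ b @ a, t(a @ b)*a - sharp2(adj(a), b))
# (A9): (ab)# = b# a#
assert np.allclose(adj(a @ b), adj(b) @ adj(a))
```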
Now we have the associative degree 3 preliminaries out of the way, and are ready to construct a Jordan algebra.
Theorem C.10 (Tits Construction). Let n be a cubic norm on a unital associative algebra A over Φ, and μ ∈ Φ an invertible scalar. Define J = A_{-1} ⊕ A_0 ⊕ A_1 to be the direct sum of 3 copies of A. Define a basepoint c, norm N, trace T, and
sharp # on J by
c := 0 ⊕ 1 ⊕ 0 = (0, 1, 0),
N(x) := μ^{-1}n(a_{-1}) + n(a_0) + μn(a_1) − t(a_{-1}a_0a_1) = Σ_{i=-1}^{1} μ^i n(a_i) − t(a_{-1}a_0a_1),
T(x) := t(a_0),   T(x,y) := Σ_{i=-1}^{1} t(a_{-i}, b_i),
[x#]_{-i} = μ^i a_i# − a_{[i+1]}a_{[i-1]},
for elements x = (a_{-1}, a_0, a_1), y = (b_{-1}, b_0, b_1). Then (N,#,c) is a sharped cubic form, and the algebra Jord(A,μ) := Jord(N,#,c) is a Jordan algebra.
proof. It will be convenient to read subscripts n modulo 3, and to always choose coset representatives [n] = −1, 0, 1:
[n] := 0 if n ≡ 0 mod 3,   1 if n ≡ 1 mod 3,   −1 if n ≡ 2 mod 3.
From (A2) and the definition of the norm we compute, for x = (a_{-1},a_0,a_1), y = (b_{-1},b_0,b_1), the linearization
N(x;y) = Σ_i μ^i t(a_i#, b_i) − t(b_{-1}a_0a_1) − t(a_{-1}b_0a_1) − t(a_{-1}a_0b_1)
= Σ_i μ^i t(a_i#, b_i) − t(a_0a_1b_{-1}) − t(a_1a_{-1}b_0) − t(a_{-1}a_0b_1)
= Σ_{i=-1}^{1} t([μ^i a_i# − a_{[i+1]}a_{[i-1]}], b_i)
by cyclicity (A10). Setting x = c (where a_0 = 1 = a_0#, a_{-1} = a_1 = 0) in this gives
T(y) := t(b_0) = N(c;y),
so the trace form given in the theorem is derived from the cubic norm. Linearizing x → x,c gives
N(x;c;y) = −t(a_1, b_{-1}) + t(a_0#1, b_0) − t(a_{-1}, b_1) = −t(a_1, b_{-1}) + t(a_0)t(b_0) − t(a_0, b_0) − t(a_{-1}, b_1) [by (A5)]
= T(x)T(y) − Σ_{i=-1}^{1} t(a_{-i}, b_i) [by (A3)].
From these we see
T(x,y) := T(x)T(y) − N(x;y;c) = Σ_{i=-1}^{1} t(a_{-i}, b_i),
so the trace bilinear form given in the theorem is that derived from the norm. Then from the above we see N(x;y) = Σ_i t([μ^i a_i# − a_{[i+1]}a_{[i-1]}], b_i) = T(x#, y) for the sharp given in the theorem, verifying the Trace-Sharp Formula (4.1)(2a).
Now we are ready to verify the Adjoint Identity (4.1)(2b), x## = N(x)x. Set y = x# = (b_{-1}, b_0, b_1) as usual; then we have as ith component
[x##]_i = [y#]_{-(-i)} = μ^{-i} b_{-i}# − b_{[-i+1]}b_{[-i-1]} [by definition]
= μ^{-i} [μ^i a_i# − a_{[i+1]}a_{[i-1]}]# − [μ^{[i-1]} a_{[i-1]}# − a_i a_{[i+1]}][μ^{[i+1]} a_{[i+1]}# − a_{[i-1]} a_i]
= μ^i n(a_i)a_i − t(a_i, a_{[i+1]}a_{[i-1]})a_i + μ^{[i+1]} n(a_{[i+1]})a_i + μ^{[i-1]} n(a_{[i-1]})a_i
[using (A7), (A1′) twice, (A8), (A9), and μ^{[i-1]}μ^{[i+1]} = μ^{-i}]
= (Σ_{j=-1}^{1} μ^j n(a_j) − t(a_{-1}a_0a_1)) a_i [from (A3),(A10)]
= N(x)a_i = N(x)[x]_i [from the definition of N].
Since n continues to be a cubic norm in all scalar extensions, the Adjoint Identity continues to hold, and thus holds strictly. Finally, we verify the c-Sharp Identity (4.1)(2c) by comparing components on both sides: linearizing x → c,y in the sharp mapping gives
[c#y]_0 = c_0#b_0 − c_1b_{-1} − b_1c_{-1} = 1#b_0 − 0 − 0 = t(b_0)1 − b_0 [by (A5)] = [T(y)c − y]_0;
[c#y]_{-1} = μ(c_1#b_1) − c_{-1}b_0 − b_{-1}c_0 = 0 − 0 − b_{-1} = [T(y)c − y]_{-1};
[c#y]_1 = μ^{-1}(c_{-1}#b_{-1}) − c_0b_1 − b_0c_1 = 0 − b_1 − 0 = [T(y)c − y]_1.
This completes the verification that (N,#,c) is a sharped cubic form and hence produces a Jordan algebra. □
For the second construction, let A be an associative algebra of degree 3 over
, and let be an involution of second kind on A, meaning that it is not -linear, only semi-linear (!a) = ! a for a nontrivial involution on with xed ring := H( ; ) < . (For example, the conjugate-transpose involution on complex matrices is only real-linear). The involution is semi-isometric with respect to a norm form if n(a ) = n(a) . Theorem C.11 (Tits Second Construction). Let n be the cubic norm of a degree 3 associative algebra A over as in C.10, with a semi-isometric involution of second kind over . Let u = u 2 A be a symmetric element with norm n(u) = for an invertible scalar 2 . From these ingredients we de ne a space J = H(A; ) A to be the direct sum of a copy of the symmetric elements and a copy of the whole algebra, and de ne a basepoint c, norm N , trace T , and sharp # on J by
c := 1 0 = (1; 0); N (x) := n(a0 ) + n(a) + n(a ) t(a0 aua); T (x) := t(a0 ); T (x; y) := t(a0 ; b0 ) + t(ua ; b) + t(au; b); x# = (a# aua; (a )# u 1 a0 a) 0 for elements x = (a0 ; a); y = (b0 ; b); (a0 ; b0 2 H(A; ); a; b 2 A): Then (N; #; c) is a sharped cubic form, and the algebra Jord(A; u; ; ) := Jord(N; #; c) is a Jordan algebra. Indeed, the map (a0 ; a) '! (ua ; a0 ; a) identi es J with the xed points H(Jord(A; ); e ) of the algebra Jord(A; ) over obtained by the First Construction relative to the semi-linear involution e : (ua 1 ; a0 ; a1 ) ! (ua1 ; a0 ; a 1 ). If contains invertible skew elements (e.g. if it is a eld), then the second construction is a form of the rst: Jord(A; u; ; ) = Jord(A; ). proof. We could verify directly that J := Jord(A; u; ; ) is Jordan by verifying (N; #; c) is a sharped cubic, but Jordanity will follow from the more precise e ; c~) obtained by the e # representation of J as H(eJ; e) for eJ := Jord(A; ) = Jord(N; First Tits Construction over .
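The First Construction formulas can be exercised numerically in the prototype case A = M₃(ℝ), where n = det, t = trace, and a^# is the classical adjugate. The short script below (an illustration only — the choice μ = 2 and the random test elements are assumptions of the sketch, not part of the text) checks the Adjoint Identity x^## = N(x)x and the Trace-Sharp Formula N(x; y) = T(x^#, y) of Theorem C.10.

```python
import numpy as np

rng = np.random.default_rng(0)
mu = 2.0                       # any invertible scalar works; 2 is an arbitrary choice

def rep(n):                    # coset representative in {-1, 0, 1} of n mod 3
    return ((n + 1) % 3) - 1

def adj(a):                    # adjugate of a 3x3 matrix via Cayley-Hamilton:
    s = 0.5 * (np.trace(a)**2 - np.trace(a @ a))   # a# = a^2 - t(a)a + s(a)1
    return a @ a - np.trace(a) * a + s * np.eye(3)

def N(x):                      # N(x) = mu^{-1} n(a_-1) + n(a_0) + mu n(a_1) - t(a_-1 a_0 a_1)
    return (np.linalg.det(x[-1]) / mu + np.linalg.det(x[0])
            + mu * np.linalg.det(x[1]) - np.trace(x[-1] @ x[0] @ x[1]))

def sharp(x):                  # [x#]_{-i} = mu^i a_i# - a_[i+1] a_[i-1]
    return {-i: mu**i * adj(x[i]) - x[rep(i + 1)] @ x[rep(i - 1)]
            for i in (-1, 0, 1)}

def T2(x, y):                  # trace bilinear form T(x, y) = sum_i t(a_{-i} b_i)
    return sum(np.trace(x[-i] @ y[i]) for i in (-1, 0, 1))

x = {i: rng.standard_normal((3, 3)) for i in (-1, 0, 1)}
y = {i: rng.standard_normal((3, 3)) for i in (-1, 0, 1)}

# Adjoint Identity: x## = N(x) x, componentwise.
xss = sharp(sharp(x))
assert all(np.allclose(xss[i], N(x) * x[i]) for i in (-1, 0, 1))

# Trace-Sharp Formula: the linearization N(x; y) equals T(x#, y).
# For a cubic form, N(x+y) - N(x-y) = 2 N(x; y) + 2 N(y).
xpy = {i: x[i] + y[i] for i in x}
xmy = {i: x[i] - y[i] for i in x}
Nxy = (N(xpy) - N(xmy) - 2 * N(y)) / 2
assert np.isclose(Nxy, T2(sharp(x), y))
print("C.10 identities verified numerically")
```

The adjugate is computed from the Cayley–Hamilton identity a(a² − t(a)a + s(a)1) = det(a)1 rather than from det(a)·a⁻¹, so singular components cause no trouble.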
The first thing to do is verify that ∗̃ is indeed an involution of the Jordan algebra J̃. Since the products are built out of the basepoint c̃, norm Ñ, and sharp #̃ in the Cubic Construction, we need only prove that ∗̃ preserves these. Certainly it preserves the basepoint,

c̃^∗̃ = (u·0, 1, 0)^∗̃ = (u·0*, 1*, 0*) = (u·0, 1, 0) = c̃.

Since the original * is semi-isometric on A, it ¯-preserves norms, traces, and sharps:

n(a*) = n(a)¯, t(a*) = t(a)¯, s(a*) = s(a)¯, (a*)^# = (a^#)*.

From this we see the involution ∗̃ interacts smoothly with the norm Ñ: for any element x̃ = (ua₋₁, a₀, a₁) we compute

Ñ(x̃^∗̃) = Ñ(ua₁*, a₀*, a₋₁*)  [by definition of ∗̃]
= μ⁻¹ n(ua₁*) + n(a₀*) + μ n(a₋₁*) − t(ua₁* a₀* a₋₁*)  [by def. C.10 of norm]
= μ⁻¹ n(u) n(a₁)¯ + n(a₀)¯ + μ n(a₋₁)¯ − t(a₋₁ a₀ a₁ u)¯  [by (A10) and u* = u]
= μ̄ n(a₁)¯ + n(a₀)¯ + μ̄⁻¹ n(u) n(a₋₁)¯ − t(ua₋₁ a₀ a₁)¯  [by n(u) = μμ̄]
= (μ n(a₁) + n(a₀) + μ⁻¹ n(ua₋₁) − t(ua₋₁ a₀ a₁))¯  [by (A10)]
= Ñ(ua₋₁, a₀, a₁)¯ = Ñ(x̃)¯.

The involution interacts less smoothly with the adjoint: by definition C.10 of #̃,

(x̃^∗̃)^#̃ = (ua₁*, a₀*, a₋₁*)^#̃  [by definition of ∗̃]
= (μ (a₋₁*)^# − (ua₁*)(a₀*), (a₀*)^# − (a₋₁*)(ua₁*), μ⁻¹ (ua₁*)^# − (a₀*)(a₋₁*))
= (μ (a₋₁^#)* − ua₁* a₀*, (a₀^#)* − a₋₁* u a₁*, μ⁻¹ (a₁^#)* u^# − a₀* a₋₁*)  [by (A9) and (a*)^# = (a^#)*]
= (u[μ̄⁻¹ u^# (a₋₁^#)* − a₁* a₀*], [a₀^# − a₁ u a₋₁]*, μ̄ (a₁^#)* u⁻¹ − a₀* a₋₁*)  [since u u^# = n(u)1 = μμ̄ 1 by (A1′)]
= (u[μ⁻¹ (ua₋₁)^# − a₀ a₁]*, [a₀^# − a₁ u a₋₁]*, [u⁻¹(μ a₁^# − u a₋₁ a₀)]*)  [by u* = u and (A9)]
= ((ua₋₁, a₀, a₁)^#̃)^∗̃ = (x̃^#̃)^∗̃.

Thus ∗̃ preserves all the ingredients of the Jordan structure, and is a semilinear involution on Jord(Ñ, #̃, c̃). From this it is clear that the space H(J̃, ∗̃) of symmetric elements is precisely all (ua₋₁, a₀, a₁) with a₀* = a₀ and a₋₁ = a₁*, so we have a Φ-linear bijection φ : J → H := H(J̃, ∗̃) given by

φ(a₀, a) = (ua*, a₀, a).

To show that this is an isomorphism of Jordan algebras, it suffices to prove that the map preserves the basic ingredients of both cubic constructions: the basepoints, norms, and sharps. As usual the basepoint is easy,

φ(c) = φ(1, 0) = (u·0, 1, 0) = c̃.

The norm, as usual, presents no difficulties:
Ñ(φ(a₀, a)) = Ñ(ua*, a₀, a)
= μ⁻¹ n(ua*) + n(a₀) + μ n(a) − t(ua* a₀ a)  [by def. C.10 of Ñ]
= n(a₀) + μ n(a) + μ⁻¹ n(u) n(a*) − t(a₀ a u a*)  [by (A10)]
= n(a₀) + μ n(a) + μ̄ n(a*) − t(a₀ a u a*)  [by n(u) = μμ̄]
= N(a₀, a)  [by definition of N].

Once more, the sharp is messier:

φ(a₀, a)^#̃ = (ua*, a₀, a)^#̃
= (μ a^# − (ua*)a₀, a₀^# − a(ua*), μ⁻¹ (ua*)^# − a₀ a)  [by def. C.10 of #̃]
= (u[μ u⁻¹ a^# − a* a₀], a₀^# − a u a*, μ⁻¹ (a*)^# u^# − a₀ a)  [by (A9)]
= (u[μ̄ (a*)^# u⁻¹ − a₀ a]*, [a₀^# − a u a*], [μ̄ (a*)^# u⁻¹ − a₀ a])  [since u* = u, a₀* = a₀, u u^# = n(u)1 = μμ̄ 1]
= φ(a₀^# − a u a*, μ̄ (a*)^# u⁻¹ − a₀ a) = φ((a₀, a)^#)  [by definition of #].

This completes the verification that φ induces an isomorphism of J with H.

Since the original involution * on A is of second kind, there exist scalars ω ∈ Ω with ω̄ ≠ ω, hence there exist nonzero skew elements λ = ω − ω̄. Assume that some such λ is invertible; then Skew(Ω, ¯) = λΦ and Skew(J̃, ∗̃) = λH (if s ∈ J̃ is skew then by semilinearity h := λ⁻¹s is symmetric: h^∗̃ = (λ⁻¹)¯ s^∗̃ = (−λ⁻¹)(−s) = λ⁻¹s, so s = λh, heavily using invertibility of λ). The presence of ½ guarantees these spaces are the direct sums of their symmetric and skew parts,

Ω = H(Ω, ¯) ⊕ Skew(Ω, ¯) = Φ ⊕ λΦ,  J̃ = H(J̃, ∗̃) ⊕ Skew(J̃, ∗̃) = H ⊕ λH,

and the isomorphism φ : J → H extends to an Ω-isomorphism of J ⊗_Φ Ω with H ⊗_Φ Ω, which by standard properties of tensor products is isomorphic to J̃. (Recall: H ⊗_Φ Ω = H ⊗_Φ (Φ ⊕ λΦ) = (H ⊗ Φ) ⊕ (H ⊗ λΦ) = H ⊕ λH = J̃.)

Notice that our verification that the Freudenthal Construction produces degree 3 Jordan algebras involves only the basic facts (S1)–(S6) about alternative algebras with scalar involution, and the verification for the Tits Constructions involves only the basic properties (A1)–(A10) of cubic norm forms for associative algebras. In both cases the hard part is discovering the recipe; once the blueprints are known, any mathematical carpenter can assemble the algebra from the ingredients.

5. Albert division algebras

We mentioned in III.6.5 that the Tits Construction easily yields Albert division algebras, i.e. 27-dimensional Jordan algebras of anisotropic cubic forms.

Theorem C.12 (Tits Division Criterion). The Jordan algebras Jord(A, μ) and Jord(A, u, μ, *) constructed from a degree 3 associative algebra A over a field Φ will be Jordan division algebras iff (I) A is an associative division algebra, and (II) μ ∉ n(A) is not a norm.

proof. We know that Jord(A, μ) will be a division algebra precisely when its cubic norm form is anisotropic. A direct attempt to prove anisotropy in terms of the norm N(x) := μ⁻¹n(a₋₁) + n(a₀) + μn(a₁) − t(a₋₁a₀a₁) for x = (a₋₁, a₀, a₁) is difficult; as the witches in Macbeth said, "Equation cauldron boil and bubble, two terms good, three terms trouble." Necessity of the conditions is easy: A must be a division algebra (anisotropic norm n) since it is a subalgebra with the
same norm N(0, a, 0) = n(a), and μ must not be a norm, since if μ = n(u) then N(0, u, −1) = 0 + n(u) + μ n(−1) = n(u) − μ = 0 and N would be isotropic. The hard part is showing that these two conditions are sufficient. We finesse the difficulty of the three-term norm equation by replacing it by two-term adjoint equations:

N(y) = 0 for some y ≠ 0 ⟹ x^# = 0 for some x ≠ 0.

Indeed, either already x = y^# is zero, or else x ≠ 0 but x^# = (y^#)^# = N(y)y = 0 by the fundamental Adjoint Identity. But the vector equation x^# = 0 for x = (a₋₁, a₀, a₁) implies three two-term equations

0 = [x^#]₋ᵢ = μ^i aᵢ^# − a_[i+1] a_[i−1]  for i = −1, 0, 1.

Then μ^i aᵢ^# = a_[i+1] a_[i−1], so multiplying on the left or right by aᵢ and using a^# a = a a^# = n(a)1 yields

μ^i n(aᵢ)1 = a_[i+1] a_[i−1] aᵢ = aᵢ a_[i+1] a_[i−1].

Now some aᵢ ≠ 0, hence n(aᵢ) ≠ 0, so a_[i+1] a_[i−1] aᵢ ≠ 0 and none of the aⱼ vanishes. Then n(a₀)1 = (a₁a₋₁)a₀ = a₁(a₋₁a₀) = μ a₁a₁^# = μ n(a₁)1 implies μ = n(a₀ a₁⁻¹) is a norm, contrary to (II).
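The necessity half of the criterion — if μ is a norm, N is isotropic — can be seen concretely in the matrix model. The sketch below (A = M₃(ℝ), n = det; purely illustrative, since M₃(ℝ) is of course not a division algebra) deliberately chooses μ = n(u) and exhibits the isotropic vector (0, u, −1) from the proof.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.trace

u = rng.standard_normal((3, 3))
mu = np.linalg.det(u)          # choose mu to BE a norm: mu = n(u)

def N(a_m1, a_0, a_1):         # N(x) = mu^{-1} n(a_-1) + n(a_0) + mu n(a_1) - t(a_-1 a_0 a_1)
    return (np.linalg.det(a_m1) / mu + np.linalg.det(a_0)
            + mu * np.linalg.det(a_1) - t(a_m1 @ a_0 @ a_1))

zero, one = np.zeros((3, 3)), np.eye(3)
val = N(zero, u, -one)         # the vector (0, u, -1): N = n(u) + mu*n(-1) = n(u) - mu
assert abs(val) < 1e-8 * max(1.0, abs(mu))
print("N(0, u, -1) =", val)
```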
Example C.13 (Albert Division). Let A be a 9-dimensional central-simple associative division algebra over a field Φ. (Such do not exist over ℝ or ℂ, but do exist over ℚ and p-adic fields.) Then the extension A(t) = A ⊗_Φ Φ(t) by the rational function field in one indeterminate remains a division algebra, but does not have the indeterminate t as one of its norms.

proof. Let x₁, …, x₉ be a basis for A over Φ, with cubic norm form n(x) = n(Σᵢ ξᵢxᵢ) = Σ_{i,j,k} γ_{ijk} ξᵢξⱼξ_k for scalars γ_{ijk} ∈ Φ. We first show that A(t) remains a division algebra, i.e. the extended norm form remains anisotropic. Suppose instead that n(x(t)) = 0 for some nonzero x(t) = Σᵢ αᵢ(t)xᵢ with rational functions αᵢ(t) ∈ Φ(t). We can clear denominators to obtain a new isotropic x(t) with polynomial coefficients αᵢ(t) ∈ Φ[t], and then divide through by the g.c.d. of the coefficients to make them relatively prime. Then we can substitute t = 0 into the polynomial relation Σ_{i,j,k} γ_{ijk} αᵢ(t)αⱼ(t)α_k(t) = n(x(t)) = 0 in Φ[t] to get Σ_{i,j,k} γ_{ijk} αᵢ(0)αⱼ(0)α_k(0) = n(x(0)) = 0 in Φ, where x(0) has constant coefficients in Φ and so lies in A, yet is not zero, since relative primeness implies the polynomial coefficients αᵢ(t) are not all divisible by t and hence do not all vanish at t = 0 — contradicting anisotropy of n on A.

Next we check that t is not a norm on this division algebra over Φ(t). Suppose, on the contrary, that n(Σᵢ αᵢ(t)xᵢ) = t, where we can write αᵢ(t) = βᵢ(t)/δ(t) for polynomials βᵢ, δ which are "relatively prime" in the sense that no factor of δ divides all the βᵢ. Now substituting t = 0 in n(y(t)) = n(Σᵢ βᵢ(t)xᵢ) = tδ(t)³ gives n(y(0)) = 0, so by anisotropy on A we have y(0) = 0, and by independence of the xᵢ each coefficient βᵢ(0) = 0. But then each βᵢ(t) = tβᵢ′(t) is divisible by t, so y(t) = ty′(t) for y′(t) ∈ A[t] and t³n(y′(t)) = n(y(t)) = tδ(t)³ in Φ[t]. Then t² divides δ³, hence the irreducible t divides δ, which contradicts δ(t) being relatively prime to the original βᵢ(t).
Thus A(t) and μ = t can be used in the Tits First Construction to produce a 27-dimensional Albert division algebra, which is a form of the split Albert algebra Alb(Ω) over the algebraic closure Ω of Φ(t), since A ⊗ Ω ≅ M₃(Ω) (see Problem 2 below), and the Tits Construction applied to a split M₃(Ω) produces a split Albert algebra Alb(Ω).
6. Problems for Appendix C

Problem 1. Show that Jord(A, μ n(v)) ≅ Jord(A, μ) for any invertible scalar μ and invertible element v ∈ A. In particular, the scalar μ can always be adjusted by an invertible cube (just as the scalar in the Cayley-Dickson Construction can always be adjusted by an invertible square).

Problem 2. Verify that the Tits First Construction applied to a split matrix algebra produces (for any invertible scalar) a split Albert algebra, Jord(M₃(Φ), μ) ≅ Alb(Φ). [Hint: You can make your life easier by changing μ to μ = 1, since every invertible element of Φ is a norm in this case.]
APPENDIX D
The Density Theorem Since not all readers will have been exposed to the Density Theorem, we have made our exposition (cf. Strict Simplicity III.1.13, Prime Dichotomy IV.9.2) independent of it. However, students should be aware of this fundamental associative result, so we include a brief treatment here.
1. Semisimple Modules

Recall that an R-module is called simple if it is non-trivial, RM ≠ 0, and it has no proper submodules. A module is called semisimple if it is a direct sum of simple modules. These are frequently called irreducible and completely reducible modules respectively; we prefer to speak of irreducible and completely reducible representations, using the terms simple and semisimple to refer to algebraic structures (in keeping with the terminology of simple groups and rings). The crucial fact we need about semisimple modules is their complementation property: as in vector spaces, every submodule N ≤ M is a direct summand and has a complement N′: M = N ⊕ N′. Indeed, this property is equivalent to semisimplicity. In complete analogy with vector spaces, we have the following powerful equivalence.

Theorem D.1 (Semisimple). The following are equivalent for an R-module M: (i) M is a direct sum of simple submodules (semisimple); (ii) M is a sum of simple submodules; (iii) every submodule N has a complement which is a direct sum of simple submodules; (iv) every submodule N has a complement. These conditions are inherited by every submodule, every quotient module, and all sums. Moreover, even though R need not be unital, we have RN = N for all submodules N; in particular Rm = R̂m always contains m.

proof. (i) ⇒ (ii) is clear. (ii) ⇒ (iii) is highly nontrivial, depending on Zorn's Lemma. Let {M_α}_{α∈S} be the collection of all simple submodules of M.
Choose a subset X ⊆ S maximal among those having the two properties
(1) S_X := Σ_{α∈X} M_α misses N: S_X ∩ N = 0;
(2) the submodules corresponding to the indices in X are independent: M_α ∩ Σ_{β≠α∈X} M_β = 0.
Such maximal X exist by Zorn's Lemma, since both properties are of "finite type": the union X = ∪ X_γ of a directed collection of such sets X_γ again has the two properties, since BY DIRECTEDNESS any finite set of indices αᵢ ∈ X_{γᵢ} is contained in some single X_γ, so any element m of S_X := Σ_{α∈X} M_α and any (finite) dependence relation Σ m_α = 0 lives in some S_{X_γ}, which by hypothesis (1) misses N and by (2) has
the M_α (α ∈ X_γ) independent. [It is important that we don't just take directed sets of semisimple submodules missing N: to make sure the union is again semisimple, it is important to make sure we have a directed set of "bases" X as well.] We claim this S_X is a complement for N. By independence from N (1) we certainly have S_X ⊕ N ≤ M. To prove it is all of M, it suffices by (ii) to show S_X ⊕ N contains each simple submodule M_α of M. But if NOT, by simplicity M_α ⊄ S_X ⊕ N ⟹ M_α ∩ (S_X ⊕ N) ≠ M_α ⟹ M_α ∩ S_X = M_α ∩ (S_X ⊕ N) = 0 [by simplicity of M_α] ⟹ (S_X ⊕ M_α) ∩ N = 0 [if s + m = n ≠ 0 then m = −s + n ∈ M_α ∩ (S_X ⊕ N) = 0, hence s = n ∈ S_X ∩ N = 0 implies s = n = 0, contrary to n ≠ 0]. This would produce a larger independent direct sum S_X ⊕ M_α (larger set X ∪ {α}) missing N, CONTRARY to maximality. Thus S_X is the desired complement for N.

Note that (iii) ⇒ (i) is clear by taking N = 0, so we have a closed circle (i) ⇔ (ii) ⇔ (iii). We prefer to show the stronger result that complementation alone implies semisimplicity. Clearly (iii) ⇒ (iv), and we will complete the circle by showing (iv) ⇒ (ii). To squeeze consequences from (iv) we need to show that this property is inherited. If we have global complementation (iv) in M then we have local complementation in each submodule N ◁ M: if P ◁ N then M = P ⊕ P′ [by global complementation], hence N = P ⊕ (P′ ∩ N) by Dedekind's Modular Law [n = p + p′ ⟹ p′ = n − p ∈ N shows N = P + (P′ ∩ N), and certainly P ∩ (P′ ∩ N) ≤ P ∩ P′ = 0]. Similarly, we have complementation in each quotient M̄ := M/N: any submodule of M̄ has the form P̄ for P ⊇ N, so if M = P ⊕ P′ globally then M̄ = P̄ ⊕ P̄′ in the quotient [the two span M̄, and they are independent in the quotient since p̄ = p̄′ ⟹ p′ ∈ P + N = P ⟹ p̄′ = 0̄ by global independence of P, P′ in M]. Once we establish that semisimplicity is equivalent to (ii), it will be clear that semisimplicity is preserved by arbitrary sums, and this will establish the assertions about heredity at the end of the theorem.

We return to the problem of extracting the consequence (ii) out of complementation (iv). Let N be the sum of all the simple submodules of M (for all we know at this stage, there may not be any at all, in which case N = 0), and suppose N ≠ M, so some m ∉ N. But any "avoidance pair" (m, N) consisting of a submodule N ◁ M and element m ∉ N avoiding N (for example, N = 0 and m any nonzero element) gives rise to a simple submodule. Indeed, choose any submodule P maximal among those containing N but missing m: an easy Zornification shows such P exist (a union of a chain of submodules missing a subset is always another submodule of the same sort). Then complementation (iv) gives M = P ⊕ P′, and we claim this complement P′ is simple. Indeed, if it had a proper submodule Q′ then by the above inheritance of complementation it would have a local complement, P′ = Q′ ⊕ Q, hence M = P ⊕ Q ⊕ Q′ where Q, Q′ ≠ 0 ⟹ P ⊕ Q, P ⊕ Q′ > P; by maximality these larger submodules cannot miss m, so we must have m ∈ P ⊕ Q, m ∈ P ⊕ Q′, and hence m ∈ (P ⊕ Q) ∩ (P ⊕ Q′) = P, contrary to our hypothesis m ∉ P. But the particular N we chose was the sum of all simple submodules, so P′ ≤ N ≤ P, contradicting directness. This contradiction shows there is no such m, and N = M. This establishes the equivalence of (i)–(iv), and we have already noted inheritance of semisimplicity by submodules. We have M = RM trivially for simple M [recall RM = 0 is explicitly ruled out in the definition of simplicity], hence the same is true
for direct sums of simples, i.e. for all semisimple modules, hence for all N ◁ M. In particular, for the submodule generated by an element m we have R̂m = R R̂m = Rm, establishing the last assertions of the theorem.
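In the motivating vector-space case (modules over a field), the complementation property (iv) is just the familiar basis-extension argument: extend a basis of N ≤ ℝⁿ by standard vectors that stay independent, and the added vectors span a complement. A minimal numpy sketch (the specific subspace is an arbitrary example):

```python
import numpy as np

def complement(basis_N, n):
    """Extend a basis of the subspace N <= R^n to a basis of R^n;
    the standard vectors added along the way span a complement N'
    with R^n = N (+) N'."""
    cols = list(basis_N)
    added = []
    for j in range(n):
        e = np.zeros(n)
        e[j] = 1.0
        trial = np.column_stack(cols + [e])
        if np.linalg.matrix_rank(trial) == len(cols) + 1:  # e independent of cols
            cols.append(e)
            added.append(e)
    return np.array(added)

n = 4
N_basis = [np.array([1., 0., 1., 0.]), np.array([0., 1., 0., 1.])]
Nc = complement(N_basis, n)
full = np.column_stack(list(N_basis) + list(Nc))
assert full.shape == (n, n) and np.linalg.matrix_rank(full) == n  # R^4 = N (+) N'
print("complement basis:\n", Nc)
```

For general modules no basis exists, which is exactly why the Zorn's Lemma argument above is needed.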
2. The Jacobson-Bourbaki Density Theorem

Recall that an associative algebra R is primitive iff it has a faithful irreducible representation, i.e. a faithful simple module M. Then the left regular representation r → L_r is (by faithfulness) an isomorphism of R with an algebra of Δ-linear transformations on a left vector space V = M over Δ = End_R(M) (which by Schur's Lemma is a division algebra). The Density Theorem describes approximately what this algebra of transformations looks like: it is "thick" or "dense" in the full ring End_Δ(V). More precisely, we say an algebra R of linear transformations is dense in End_Δ(V) if for any Δ-independent x₁, …, xₙ ∈ V and arbitrary y₁, …, yₙ ∈ V there is an r ∈ R with rxᵢ = yᵢ for i = 1, …, n. (Density can be interpreted as ordinary topological density of L_R with respect to a certain topology on End_Δ(V).)

Theorem D.2 (Jacobson Density). (1) An algebra is primitive iff it is isomorphic to a dense ring of transformations on a vector space. (2) If M is a simple left R-module, then V = M is a left vector space over the division ring Δ := End_R(M), and L_R is a dense algebra of linear transformations in End_Δ(V).

proof. Density in (1) is certainly sufficient for primitivity, because then R ≤ End_Δ(V) has a faithful representation (the identity) on V, which is irreducible since Rx = V for any nonzero vector x ∈ V: x ≠ 0 is Δ-independent, hence for any y ∈ V there is by density an r ∈ R with rx = y. Density in (1) is necessary for primitivity by (2). The result (2) about simple modules will follow from a more general density theorem for semisimple modules below. To describe the general setting, we need to reformulate density in terms of double centralizers. For any independent set of k vectors xᵢ and any k arbitrary vectors yᵢ in a vector space V there is a linear transformation T ∈ End_Δ(V) = Cent_V(Δ) = Cent_V(Cent_V(L_R)) ≤ End(V) taking each xᵢ to the corresponding yᵢ, T(xᵢ) = yᵢ.

Thus the above density will follow if every linear transformation T ∈ Cent_V(Cent_V(L_R)) in the double centralizer agrees at least locally with an element of R: for every finite-dimensional Δ-subspace W ≤ M we can find an element r ∈ R with L_r|_W = T|_W. Such double-centralizer theorems play an important role in group representations as well as ring and module theory. Moreover, as Bourbaki emphasized, Jacobson's somewhat tricky computational proof becomes almost a triviality when the situation is generalized from simple to semisimple modules.

Theorem D.3 (Jacobson-Bourbaki Density). If M is a semisimple left R-module, then M is a right R′-module and left R″-module for the centralizer R′ := Cent_M(L_R) and double-centralizer R″ := Cent_M(R′) = Cent_M(Cent_M(L_R)), and for any finite set m₁, …, m_k ∈ M and any R′-linear transformation T ∈ R″, there is an element r ∈ R with L_r mᵢ = T(mᵢ) for i = 1, …, k.

proof. The case k = 1 is easy (for both Jacobson and Jacobson-Bourbaki Density): by complementation D.1(iv) we have M = Rm ⊕ P for a complementary R-submodule P, and the projection E(rm ⊕ p) := rm on Rm along P belongs to R′ = End_R(M) [projection of a direct sum of R-modules on the first factor is
always R-linear], so T ∈ R″ commutes with E, and T(m) = T(E(m)) [remember that m ∈ Rm by D.1] = E(T(m)) ∈ E(M) ≤ Rm implies T(m) = rm for some r.

Now comes the elegant leap to the general case, where k elements are fused into one. For any finite set m₁, …, m_k ∈ M set m̃ := m₁ ⊕ ⋯ ⊕ m_k ∈ M̃ := M ⊕ ⋯ ⊕ M, and for any T ∈ R″ set T̃ := T ⊕ ⋯ ⊕ T on M̃. As a direct sum, M̃ remains a left module over R̃ := R, hence also a left module over R̃′ := Cent_M̃(L_R̃) and a left module over R̃″ := Cent_M̃(Cent_M̃(L_R̃)). [Warning: while R̃ = R, the centralizers R̃′ ≠ R′, R̃″ ≠ R″ are NOT the same, as they are computed in End(M̃) instead of End(M). For this reason we have re-christened R as R̃ to denote its new elevated role on M̃.] Indeed, by directness R̃′ can be identified with M_k(R′) (acting by left matrix multiplication): any S ∈ R̃′ = End_R̃(M̃) has the form S = Σ_{1≤i,j≤k} S_ij of a k × k matrix with entries S_ij ∈ End_R(M) = R′ [if we denote the ith copy of M in M̃ by [M]ᵢ, we have the left matrix action S([n]ᵢ) = Σⱼ [S_ji(n)]ⱼ]. Thus we have

T̃(S(ñ)) = T̃(S(Σ_{i=1}^{k} [nᵢ]ᵢ)) = T̃(Σ_{i,j} [S_ji(nᵢ)]ⱼ) = Σⱼ [T(Σᵢ S_ji(nᵢ))]ⱼ = Σⱼ [Σᵢ S_ji(T(nᵢ))]ⱼ  [since T ∈ R″ commutes with S_ji ∈ R′]  = S(Σᵢ [T(nᵢ)]ᵢ) = S((T ⊕ ⋯ ⊕ T)(n₁ ⊕ ⋯ ⊕ n_k)) = S(T̃(ñ)),

so T̃ ∈ R̃″. Since M̃ is still semisimple by closure under sums in D.1, by the case k = 1 there is r ∈ R with r m̃ = T̃(m̃), i.e. rmᵢ = T(mᵢ) for each i. This finishes the proof of both density theorems.

Exercise D.3: In the case of modules over a noncommutative ring, if M is a left R-module it is preferable to write the linear transformations T ∈ R′ = End_R(M) on the right of the vectors in M, (x)T [so we have the "associativity" condition (rm)T = r(mT) instead of the "commutativity" condition T(rm) = rT(m)], and dually when M is a right R′-module we write transformations in R″ = End_{R′}(M) on the left [so T(mr′) = (Tm)r′]. Because most beginning students have not developed sufficient ambidexterity to move smoothly between operators on the left and the right, we have kept our discussion left-handed. Try out your right hand in the following situations. (1) Show that if V is finite-dimensional, the only dense ring of linear transformations on V is the full ring End_Δ(V). (2) Explain why in the n-dimensional case V = Δⁿ we have End_Δ(V) = M_n(Δ^op) instead of M_n(Δ). (3) Show that the dual space V* = Hom_Δ(V, Δ) of a left vector space carries a natural structure of a right vector space over Δ via fδ := R_δ ∘ f, but no natural structure of a left vector space [L_δ ∘ f is not in general in V*]. (4) Show that for a left vector space V there is a natural bilinear pairing V × V* → Δ [additive in each variable, with ⟨δx, f⟩ = δ⟨x, f⟩ and ⟨x, fδ⟩ = ⟨x, f⟩δ for all x ∈ V, f ∈ V*, δ ∈ Δ] via ⟨x, f⟩ := (x)f.
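The density condition is easy to exercise concretely when Δ = ℝ and R is the full matrix ring M_n(ℝ) acting on V = ℝⁿ (where density is of course automatic): given independent columns x₁, …, x_k and arbitrary targets y₁, …, y_k, one transporter r with rxᵢ = yᵢ can be written down directly. The numeric sketch below (sizes and random data are illustrative assumptions) uses the fact that for a full-column-rank X the Moore–Penrose pseudoinverse satisfies pinv(X)·X = I_k.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 4, 2                      # V = R^4, k independent vectors

X = rng.standard_normal((n, k))  # columns x_1..x_k (independent almost surely)
Y = rng.standard_normal((n, k))  # arbitrary targets y_1..y_k

# Find r in M_n(R) with r x_i = y_i for all i:
# since X has full column rank, pinv(X) X = I_k, so r = Y pinv(X) works.
r = Y @ np.linalg.pinv(X)
assert np.allclose(r @ X, Y)
print("found r with r x_i = y_i for all i")
```

The content of the Density Theorem is that the same transporter exists inside any primitive subalgebra R, not just the full matrix ring — that is what the double-centralizer argument above establishes.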
APPENDIX E
Hints Here we give hints (varying from unhelpful to mildly helpful to outright answers) to certain parts of the starred exercises at the end of each chapter in Parts III and IV.
1. Hints for Part III

Chapter 1. The Category of Jordan Algebras

Exercise 1.4: This is clearly a homomorphism of non-unital algebras, corresponding to the imbedding of a subalgebra in a larger algebra. (2) There is no unital homomorphism M₂(Φ) → M₃(Φ) when Φ is a field. Indeed, E₁₁, E₂₂ are supplementary orthogonal idempotents in M₂(Φ), conjugate under the inner automorphism φ(X) = AXA⁻¹ for A = E₁₂ + E₂₁, so their images e, e′ would have to be two conjugate supplementary idempotents in M₃(Φ); they would both have the same integral rank r (r = 1 or 2), and 3 = rank(1) = rank(e) + rank(e′) = 2r, which is impossible. [In characteristic 0 we can use traces instead of ranks of idempotents, since then trace(e) = r1 and 0 ≠ r ∈ ℕ ⟹ 0 ≠ r1 ∈ Φ.]
Exercise 1.10: (1) λ = 1 yields u + v = 0, hence λ²u + λ²v = 0 for any λ; subtracting λu + λ²v = 0 gives (λ − λ²)u = 0 for all λ ∈ Φ. If ½ ∈ Φ, then λ = ½ makes λ − λ² = ¼ invertible with inverse 4. (2) f(x + λz, y) − f(x, y) − λ³f(z, y) = λa + λ²b for a = f(x, y, z), b = f(z, y, x).
Exercise 1.12A: (1) Reduce quasi-inverses to inverses, then cancel T from T[T⁻¹, S]T = [S, T] = 0. (2) Ker(T) and Im(T) are S-invariant.
Exercise 1.12B: Compute directly, heavily using commutativity of c, or use the identity [x, y, z] − [x, z, y] + [z, x, y] = [z, x]y + x[z, y] − [z, xy], valid in all linear algebras.
Exercise 1.13: The relation is shorter since the commutator and associators vanish for i = 1, where M(x₁) = Φ1 is central, so the relation must vanish entirely; by independence of the ωᵢ this implies the associators vanish for each i. Once the M(xᵢ) are scalars in Φ they can be moved to the other side of the tensor product, forming ω = Σᵢ ωᵢ.
Problem 1.1: Consult Barry Mitchell, "The Theory of Categories," Academic Press, New York, 1965, page 2. Show (1) that the left unit is unique, (2) identify the objects X with the identity maps e, e = 1_X, (3) define Mor(X, Y) := 1_X M 1_Y, (4) show M is the disjoint union of the Mor(X, Y)'s, (5) show fg is defined iff f ∈ Mor(X, Y), g ∈ Mor(Y, Z), i.e. the domain of f is the codomain of g.
Problem 1.3: (1) By the universal property, any hull A¹ = Φ1 + A is an epimorphic image of Â, and a quotient Â/I₀ faithfully contains A iff the kernel I₀ is disjoint from A. (2) Tightening is possible: maximal disjoint ideals exist by Zorn's Lemma, A remains faithfully imbedded in the resulting quotient, and all nonzero ideals of the quotient hit A (by maximality of M₀). (3) Ann(A) is a pathological ideal, which is not permitted in semiprime algebras. (4) Any disjoint ideal I₀ has I₀A + AI₀ ≤ I₀ ∩ A = 0 (by disjointness, since both I₀ and A are ideals of Â), hence I₀ is a subset of the ideal M₀ = Ann_Â(A), which is a disjoint ideal in the robust case (M₀ ∩ A = Ann(A) = 0), so M₀ is the unique maximal disjoint ideal.
b = (1 e) A, and (1 e) = Ann b (A). (2) (1) We noted A A b consisting of all (n1 2m) = "(1 0) + M = 2(1 1)Z = (2 2)Z with A (0 (2m + 2k)) = "1 + 2(m + k) when n = 2m + "; " = 1 or 0. Problem 1.4:
Problem 1.5: (1) A is always an ideal in any unital hull. (2) Nonzero orthogonal ideals in Ã would have nonzero orthogonal traces on A. In Ã we always have M⁰A = 0. Always M⁰ consists of all ω1 − z(ω) for z(ω) ∈ A with za = ωa = az for all a ∈ A; the set of such ω ∈ Φ forms a Φ-subspace (i.e. ideal) Ω ◁ Φ. In infinite matrices these z's are just the ω1. (3) Nonzero self-orthogonal ideals in Ã would have nonzero self-orthogonal traces on A. An ω1 − z(ω) ∈ M⁰ as above squares to 0 iff ω²1 − ωz(ω) = 0, which means ω²1 = 0 in Φ1 ⊆ Â; then Φω is a trivial ideal in Φ, and generates a trivial ideal ωA in A if Φ acts faithfully on A. If Φ⁰ = Φ/Ω for a trivial Φ-ideal Ω, then A becomes a Φ⁰-algebra via ΩA = 0, and Â has a trivial ideal Ω1 no matter how nice A is as a Φ-algebra. (4) If z(ω) acts like ω1, then ω⁻¹z(ω) is already a unit for A. (5) If Φλ1 + M⁰ were a larger ideal then it hits A: some λ1 + m = a ∈ A, so ax = λx + mx = mx ∈ M⁰ ∩ A = 0 and dually, so a ∈ Ann(A) = 0 by robustness.
Problem 1.6: (2) The maximum number of pairwise orthogonal idempotents in M_k(Φ) over a field is k, achieved when all the idempotents are "rank one" (eAe = Φe). If M_n(Φ) → M_m(Φ) is a unital homomorphism, where Φ is a field, then the pairwise-conjugate supplementary orthogonal idempotents E₁₁, …, E_nn in M_n(Φ) would be carried to pairwise-conjugate supplementary orthogonal idempotents e₁, …, e_n in M_m(Φ); by conjugacy each eᵢ would have the same decomposition into r completely primitive idempotents, hence would have integral rank r, therefore m = rank(1) = rank(e₁) + … + rank(e_n) = nr, which is impossible unless n divides m.

Question 1.1: Think of the example in Problem 5.
Question 1.2: Almost never. It suffices if Φ itself is a Boolean ring of scalars, and this is also necessary if B is free as a Φ-module.
Question 1.3: Outer ideals U_Ĵ I ≤ I reduce to ordinary ideals in the presence of ½.

Chapter 2. The Category of Alternative Algebras
Exercise 2.8, 2.12B: (2.8) KD(A, μλ²) ≅ KD(A, μ) via φ(a + bm) = a + λbm′; it has φ(1) = 1 and N(φ(a + bm)) = n(a) − μn(λb) = n(a) − μλ²n(b) = N(a + bm).
(2.12B) Alternately, in C = KD(A, μ) itself we have an element m′ := λm with C = A + Am′, N(A, m′) = 0, and μ′ := −N(m′) = −λ²N(m) = μλ², so C also equals KD(A, μ′).

Take A spanned (in Zorn vector-matrix notation [α a; b β]) by

E₁₁ := [1 0; 0 0], E₂₂ := [0 0; 0 1], E₁₂ := [0 ẽ₁; 0 0], E₂₁ := [0 0; −ẽ₁ 0]

[which is clearly isomorphic to M₂(Φ)], and take ℓ = [0 ẽ₂; −ẽ₂ 0], so N(ℓ) = 1. Show ℓ ⊥ A. Compute the space Aℓ as the span of

E₁₁ℓ = [0 ẽ₂; 0 0], E₂₂ℓ = [0 0; −ẽ₂ 0], E₁₂ℓ = [0 0; ẽ₁ × ẽ₂ 0] = [0 0; ẽ₃ 0], E₂₁ℓ = [0 (−ẽ₁) × (−ẽ₂); 0 0] = [0 ẽ₃; 0 0].

Take B spanned by 1, i = [0 ẽ₁; −ẽ₁ 0], j = [0 ẽ₂; −ẽ₂ 0], k = [0 −ẽ₃; ẽ₃ 0] [beware the minus sign in k!]; show B is a copy of H, and Z = H + Hm for m = [1 0; 0 −1] with m ⊥ B, N(m) = −1.

Chapter 3. Three Special Examples

Exercise 3.1: By imbedding in the unital hull, we may assume A is unital, so the left regular representation x ↦ L_x is by the left alternative law an injective specialization A⁺ → End_Φ(A)⁺.

Exercise 3.5A: (3) Take A′ to be the algebra direct sum of A with a space N with trivial product A′N = NA′ = 0 and skew involution n* = −n for all n; if A is unital and you want the extension A′ to remain unital with the same unit, take N to be a direct sum of copies of the regular bimodule Aᵢ with skew involution and trivial products AᵢAⱼ = 0.
Exercise 3.10: (1) To show a basepoint-preserving isometry φ is an algebra isomorphism, show it preserves traces, T′(φ(x)) = T(x), and hence involutions, φ(x*) = (φ(x))*; then use the U-formula of 3.9 to show φ preserves products (or use the degree 2 formula to show it preserves squares). To show an algebra isomorphism is a basepoint-preserving isometry, note first that φ must preserve unit elements, φ(c) = c′, then apply φ to the degree 2 equation 3.9 in J and subtract off 3.9 applied to φ(x) in J′, to get [T′(φ(x)) − T(x)]φ(x) + [Q(x) − Q′(φ(x))]c′ = 0. If φ(x) is independent of c′, show Q(x) = Q′(φ(x)). If φ(x) is dependent on c′, φ(x) = βc′, show x = βc, Q(x) = Q′(φ(x)) = β².

Problem 3.2: (1) The measures λ(x, y) := φ(xy) − φ(x)φ(y), ρ(x, y) := φ(xy) − φ(y)φ(x) of homomorphicity and anti-homomorphicity satisfy λ(x, y)ρ(x, y) = φ(xy)² + U_{φ(x)}φ(y)² − {φ(xy), φ(y), φ(x)} = 0.
Problem 3.4: Answer: (1 + β)/(1 + 2β). Hint: u = 1 − e, v = 1 + βe for β = ±i ∉ ℝ.
Problem 3.7: (1) Linearizing x → x, c in Q(D(x), x) = 0 yields T(D(x)) = 0, so D(x)* = D(x*) = −D(x); therefore D(U_xy) − U_x(D(y*)) − U_{D(x),x}(y*) = Q(D(x), x)y − [Q(D(x), y) + Q(x, D(y))]x, and the two conditions suffice. Conversely, for a derivation the vanishing of this last linear combination for y independent of x forces each coefficient to vanish [or setting y = x forces Q(D(x), x)x = 0,
applying Q(D(x); ) to this yields Q(D(x); x)2 = 0, so absence of nilpotents compels the term to vanish]. (2) T (1) = 1 , "D(1) = 0; Q(T (x)) = Q(x) for all x 2 J , "Q(D(x); x) = 0. Chapter 4. Jordan Algebras of Cubic Forms (1) We have intrinsically determined A := N (x + y) N (x) N (y) = N (x; y)+ N (y : x); B := N (x + y) N (x) 3 N (y) = N (x; y)+ 2 N (y : x); C := 2 A B = (1 )N (x : y). = (Vx Vy Ux;y ) = Vy Vx Ux;y = Vy;x. Exercise 4.4B: Vx;y
Exercise 4.0:
Exercise 4.4D: (1) x#U_x y = x#(T(x, y)x − x##y) = 2T(x, y)x# − [N(x)y + T(x#, y)x]. (2) Use Sharp Symmetry to see T({x, y, z}, w) − T(z, {y, x, w}) = [T(x, y)T(z, w) + T(z, y)T(x, w) − T(x#z, y#w)] − [T(y, x)T(z, w) + T(w, x)T(z, y) − T(z#x, y#w)] = 0. (3) {U_x y, z, x} − U_x{y, x, z} = T(U_x y, z)x + T(x, z)[T(x, y)x − x##y] − [x#U_x y]#z − T(x, V_{y,x}z)x + x##{y, x, z} = T([2U_x y − T(x, y)x + x##y + T(x, y)x − V_{x,y}x], z)x − [T(x, y)x# − N(x)y]#z − x##[T(x, z)y − {y, x, z}] = T(x##y, z)x + N(x)y#z − x##[T(x, y)z + T(x, z)y − {y, x, z}] = T(x#, y#z)x + N(x)y#z − x##[x#(y#z)] = 0. Problem 4.1: (1): Set y = 1 in the Commuting Formula to get V_{1,y} = V_{y,1}, or (acting on x) using U_1 = Id. (2) Set y = 1 in the VU-Commuting Identity. (3): Apply the VU-Commuting Identity to 1 to get U_{x,x²} = U_x V_x, linearize x → x, 1 to get U_{1,x²} + 2U_{x,x} = U_{x,1}V_x + U_x V_1, and use U_{x,x} = 2U_x, V_1 = 2 Id. Problem 4.2: (1): Use the c-Sharp Identity and ½ for the first, the c-Sharp Identity for the second. (2) For Adjoint′, linearize the strict Adjoint Identity; for Ux-Sharp, use Sharp Symmetry and Adjoint′: U_x(x#y) = T(x, x#y)x − x##(x#y) = 2T(x#, y)x − [N(x)y + T(x#, y)x] = T(x#, y)x − N(x)y. (3) For Dual Adjoint′ use Sharp Symmetry to move T(Adjoint′(y), z) = T(y, Dual Adjoint′(z)), then use nondegeneracy. (4) Compute (V_{x,y}U_x − U_x V_{y,x})z = {x, y, U_x z} − U_x{y, x, z} = T(x, z){x, y, x} − {x, y, x##z} − U_x[T(z, x)y + T(y, x)z − x#(y#z)] = 2T(x, z)U_x y − T(x##z, y)x − T(x, y)x##z + y#[N(x)z + T(x, z)x#] − T(x, z)U_x y + T(x, y)U_x z − [T(x#, y#z)x − N(x)y#z] (using the above formulas for x#(x##z) and U_x(x#(y#z))) = T(x, y)[−U_x z + x##z]? + T(x, z)[U_x y + y#x#] (by Sharp Symmetry for T(x##z, y)) = −T(x, y)[T(x, z)x] + T(x, z)[T(x, y)x] = 0.
(2) Show (iii) implies λ(x)φ(x) = 0 for λ(x) := N(φ(x)) − N(x), and (ii), (iii) imply λ(x + ty) = λ(x) + t³λ(y); these continue to hold in any extension, so linearized Adjoint in J[t] yields λ(x)φ(y) = 0 for all x, y. Problem 4.4:
(1) If λc = 0 then λ³ = N(λc) = 0; and take ε := λ² (unless λ² is already zero, in which case take λ). (2) T(c) = 3 = 0 guarantees N′(c) = 1; T(x + λy)³ = T(x)³ + λ³T(y)³ guarantees N′(x; y) = N(x; y), so T′ = T, S′ = S, #′ = # remains a sharp mapping, and the Adjoint Identity holds since εJ = 0. (3) x = e1 + e2, y = e3 = x# = y² have T(x) = 2, T(y) = T(x#) = T(y²) = 1, N(x) = N(y) = 0; so N′(x) = −ε, N′(y) = ε, and therefore N′(x)² = N′(y)² = N(c) = 0 but N′(x#) = N′(U_y c) = ε ≠ 0. Chapter 5. Two Basic Principles Problem 4.5:
1. Hints for Part III
(2) For an example where nondegeneracy of H(A, *) doesn't force nondegeneracy of A, choose A2 to be trivial and skew, so that it contributes nothing to H. (3) Show if z ∈ K then all w ∈ U_z(Ĵ) are trivial, using the Fundamental Formula to show that U_w(Ĵ) ⊆ U_z K ⊆ U_K K = 0; either some w ≠ 0 is trivial or else z itself is trivial. Problem 5.3: By induction, with M_1 = 1_J, M_2 = V_x, find a formula for M_{n+2}. [Answer: U_x M_n + V_{x,x^n}] Exercise 5.11:
Question 5.2: U_{[x)}Ĵ ⊆ (x], U_{[x)}J ⊆ (x) by U_{x+U_x a} = U_x B_{−a,x}. [x) is not inner, only weakly inner, if x² ∉ [x); the inclusions are (x) ⊆ both [x] and (x], (x] ⊆ [x]. Chapter 6. Inverses
(1) x = λ1 + ε has U_x Ĵ = 0 if λ = 0, otherwise x = λ(1 + λ^{-1}ε) is invertible with x^{-1} = λ^{-1}(1 − λ^{-1}ε). (2) Let b ≠ 0 be an arbitrary nonzero element of A. Show (1) bÂb ≠ 0, so bÂb = A; (2) b = bcb for some c ∈ A, so bAb = A; (3) bca = a = acb for all a, hence bc = cb = 1 is a unit element for A; (4) conclude A is unital and all nonzero elements are invertible. [Subhints for (1)-(4): (1) else B := Φb would be inner, Φb = A, and AA = Φ²bb = 0; (2) b = bâb ⇒ b = bcb for c = âbâ ∈ A; (3) write a = ba′b; (4) if A has a left unit and a right unit, they coincide and form the (unique) unit.] Exercise 6.3:
(1) U_x is invertible from (Q2) and the Fundamental Formula; U_x(x^n·y) = U_x(x^{n-1}) for n = 1, 2 (using (Qn)) since L_{x^n} commutes with U_x. (2) (L1-2) ⇒ U_x y = x as in (Q1); x²·y² = 1 from [x², y, y] = −2[x·y, y, x] (linearized Jordan identity (JAX2′)) = 0; then x·y² = (x²·y)·y² = (x²·y²)·y = y, so U_x y² = 1 as in (Q2). (3) 0 = 2[y², x, x] + 2[y·x, x, y] − [x², y, y] + 2[x·y, y, x] = U_x y² − 1. Exercise 6.6:
Cancel U_x from V_{x,x^{-1}} U_x = U_{U_x x^{-1}, x} = 2U_x to get V_{x,x^{-1}} = 2·1_J, then D(x) = D(U_x x^{-1}) = 2D(x) + U_x D(x^{-1}). Problem 6.2:
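Unpacked, the two steps of this hint combine into the standard inverse-derivative formula; the following display is a sketch of that computation (assuming x invertible and D a derivation of J):

```latex
% Cancel U_x to normalize V_{x,x^{-1}}:
V_{x,x^{-1}} U_x = U_{U_x x^{-1},\,x} = U_{x,x} = 2U_x
  \;\Longrightarrow\; V_{x,x^{-1}} = 2\,1_J .
% Apply D to x = U_x x^{-1}:
D(x) = D(U_x x^{-1}) = \{D(x), x^{-1}, x\} + U_x D(x^{-1})
     = V_{x,x^{-1}}D(x) + U_x D(x^{-1}) = 2D(x) + U_x D(x^{-1}),
% so, using U_x^{-1} = U_{x^{-1}} (Fundamental Formula):
U_x D(x^{-1}) = -D(x)
  \;\Longrightarrow\; D(x^{-1}) = -U_{x^{-1}} D(x).
```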
(1) D(y) = D(y²x) = L_{y²}D(x) + V_y R_x D(y), where D(y)x = D(yx) − yD(x) = L_y D(x). (2) For nuclear y we have L_y V_y − L_{y²} = L_y R_y. Chapter 7. Isotopes Problem 6.3:
The Inverse Condition requires only the Fundamental Formula, but the Linear Inverse Condition requires the formula {y, x^{-1}, U_x z} = {y, z, x}. Exercise 7.3A:
(1) Example of orthogonal non-automorphism: think of T = −1_J. (7) If T(1) = x², try U_x. Problem 7.2:
For associativity, [x, y, z]_u = uuxyz − uxuyz = u[u, x]yz vanishes identically iff u lies in the center of A. For unitality, the element u^{-1} is always a left unit, but is a right unit iff u lies in the center: x∘u^{-1} = uxu^{-1} = [u, x]u^{-1} + x. Question 7.1:
(1) Compute, either directly from the definitions or in terms of J^{(ũ)} as a subalgebra of J̃, that the whole proposition goes over.
Question 7.2: (2) Let J be the finite matrices in the algebra of all infinite matrices, or the transformations of finite rank in all linear transformations on an infinite-dimensional vector space, and u an invertible diagonal matrix (the case u = 1 gives the original J, u = −1 gives a "(−1)-isotope", and non-constant diagonal u give complicated isotopes). Chapter 8. Peirce Decompositions
Use the formulas U_x = 2L_x² − L_{x²}, U_{x,y} = 2(L_x L_y + L_y L_x − L_{xy}) for x, y = e, e′. [Answer: E_2 = −L_e + 2L_e² = −L_e(1_J − 2L_e); E_1 = 4L_e − 4L_e² = 4L_e(1_J − L_e); E_0 = 1_J − 3L_e + 2L_e² = (1_J − L_e)(1_J − 2L_e).] Exercise 8.2:
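These Peirce projection formulas are easy to sanity-check numerically. The sketch below (my own illustration, not part of the text) verifies, for the idempotent e = E_{11} in the Jordan algebra of symmetric 2×2 matrices under x∘y = ½(xy + yx), that E_2, E_1, E_0 are orthogonal projections summing to the identity:

```python
from fractions import Fraction as F

def mul(a, b):   # 2x2 matrix product
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
def add(a, b):   # entrywise sum
    return [[a[i][j] + b[i][j] for j in range(2)] for i in range(2)]
def smul(c, a):  # scalar multiple
    return [[c * a[i][j] for j in range(2)] for i in range(2)]

e = [[F(1), F(0)], [F(0), F(0)]]   # idempotent in H_2 (symmetric 2x2 matrices)
zero = [[F(0), F(0)], [F(0), F(0)]]

def L(x):   # L_e(x) = e o x = (ex + xe)/2
    return smul(F(1, 2), add(mul(e, x), mul(x, e)))

def E2(x):  # -L_e + 2 L_e^2
    return add(smul(F(-1), L(x)), smul(F(2), L(L(x))))
def E1(x):  # 4 L_e - 4 L_e^2
    return add(smul(F(4), L(x)), smul(F(-4), L(L(x))))
def E0(x):  # 1 - 3 L_e + 2 L_e^2
    return add(add(x, smul(F(-3), L(x))), smul(F(2), L(L(x))))

x = [[F(1), F(2)], [F(2), F(5)]]   # arbitrary symmetric test element
assert add(add(E2(x), E1(x)), E0(x)) == x     # E2 + E1 + E0 = 1
for Ei in (E2, E1, E0):
    assert Ei(Ei(x)) == Ei(x)                 # each is a projection
assert E2(E1(x)) == zero and E1(E0(x)) == zero  # pairwise orthogonal
```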
a ↔ u is a bijection with involutory elements, but u ↔ U = U_u is not bijective between involutory elements and involutory automorphisms: u(1 − e) = −u(e), and ±u have the same U. Chapter 9. Off-Diagonal Rules Exercise 8.3:
(1) Use the Peirce U-Formulas 8.5. (2) Use (1), the Commuting Formula 5.7(FII), the Eigenspace Laws 8.4 (since J_1(e) = J_1(1 − e)). (3) Use the Peirce U-Formulas, Triple Switch, the Peirce U-Formulas. (4) Use (1). (5) Use the Peirce U-Formulas to show 0 = E_i(2U_{x_1}(a_i)) = E_i({x_1, {x_1, a_i}} − {x_1², a_i}) = q_i(x_1, {x_1, a_i}) − {q_i(x_1), a_i}, then q_i(x_1, {x_1, a_i}) = V_{a_i} q_i(x_1); from this 2U_{a_i} q_i(x_1) = (V_{a_i}² − V_{a_i²}) q_i(x_1) = V_{a_i} q_i({a_i, x_1}, x_1) − q_i({a_i², x_1}, x_1) (above) = q_i({a_i, x_1}, {a_i, x_1}) + q_i({a_i, {a_i, x_1}}, x_1) − q_i({a_i², x_1}, x_1) (above) = 2q_i({a_i, x_1}) (by Peirce Specialization 9.1). Exercise 9.5A:
(1) One way is to use 9.5(1) on the element z_1 + tx_1 in J[t] for z_1 ∈ R to get q_i(z_1) = 0, q_i(z_1, x_1)² = U_{z_1} q_j(x_1), z_1² = 0, q_i(z_1, x_1)⁴ = 0. (2) Use 9.5(2) in Ĵ to show a radical z_1 is trivial; to show a trivial z_1 is radical, q_i(z_1, y_1) = 0, use the general identity {y, z}² = {y, U_z(y)} + U_y(z²) + U_z(y²). Chapter 11. Spin Coordinatization Exercise 9.6: ad(q_j), x_1 ∈ J_1.
(1) Use the Commuting Formula to show U_x V_{x²} = U_{x³,x}, then apply U_x^{-1} on the left and right and use the Fundamental Formula.
Exercise 11.10:
~ (b0 ) = (b0 )(v0 ) 1 from Creating Involutions 10.5(2). (2) q~(y) = ~2 q~2 (y) = q2 (y; v0 1 y) [using 10.5(3)], and q~(y) = ~0 q~0 (y) = ~0 q0 (y) (v0 ) 1 . Chapter 12. Hermitian Coordinatization (1)
Problem 11.1:
Exercise 12.1:
(1) {a_2, v} = {q_0(v), a_0, v} for a_0 := U_{v^{-1}} a_2. (2) v^{-1} = v^{-2}·v ∈ D_2(v) + D_0(v) because v ∈ D_2(v) + D_0(v).
(1) For any x_1 ∈ J_1 set y_1 := U_{v^{-1}} x_1. Then {U_v a_i, {u_j, U_v y_1}} = {U_v a_i, u_j, U_v y_1} = U_v{a_i, U_v u_j, y_1} = U_v{a_i, e_i, y_1} = U_v V_{a_i} y_1. (2) T → U_v T U_v^{-1} is an (inner) automorphism of linear transformations on J_1, and maps the generators (hence all) of D_i into D_j; the reverse inclusion holds by applying this to the invertible element v^{-1}. Exercise 12.3B: (1) V_{U_v(a_i)}(v*) = V_{q_i(v)} V_{a_i}(v*). Chapter 13. Multiple Peirce Decompositions Exercise 12.3A:
Multiply EA + AE = 0 on the left and right by E to get EA = −EAE = AE, then get 2EA = 2AE = 0. From the linearizations (i) U_{x²} = U_x²; (ii) U_{x²,{x,y}} = U_x U_{x,y} + U_{x,y} U_x; (iii) U_{x²,y²} + U_{{x,y}} = {U_x, U_y} + U_{x,y}²; (iv) U_{x²,{y,z}} + U_{{x,y},{x,z}} = {U_{x,y}, U_{x,z}} + {U_x, U_{y,z}}; show (i) ⇒ (1); (ii) ⇒ (4); (iii) ⇒ [(2) ⇔ (3)]; (iv) ⇒ [(5) ⇔ (6)]. Show (3) ⇒ (2), (5), (7) since e ⊥ f, g, f + g; e, f, e + f ⊥ g, h, g + h. Multiply (iii) by U_e and use (4) and U_e f = 0 to get (3). The general identity follows from the Macdonald Principle; (2) follows directly since V_{e_i,e_j} = 0, U_{e_i} e_j = 0 by Peirce Orthogonality, since we have e_i ∈ J_2, e_j ∈ J_0 relative to e = e_i. Exercise 13.3A:
(1) In the expansion of E(t)E(t³) = E(t⁴): (i) one variable t_i⁸ arises only for {i,j} = {k,ℓ} = {i} equal sets of size 1; (ii) two variables t_i²t_k⁶ arise only if {i,j} = {i}, {k,ℓ} = {k} are disjoint sets of size 1; (iii) two variables t_i t_j⁷ arise only if {i,j} of size 2 contains {k,ℓ} = {j} of size 1; (iv) two variables t_k⁵t_ℓ³ arise only if {i,j} = {k} of size 1 is contained in {k,ℓ} of size 2; (v) three variables t_i²t_k³t_ℓ³ arise only if {i,j} = {i} of size 1 is disjoint from {k,ℓ} of size 2; (vi) three variables t_i t_j t_k⁶ arise only if {i,j} of size 2 is disjoint from {k,ℓ} = {k} of size 1; (vii) two variables t_i⁴t_j⁴ arise only if {i,j} = {k,ℓ} are equal of size 2; (viii) three variables t_i t_j⁴t_ℓ³ arise only if {i,j} of size 2 overlaps {k,ℓ} = {j,ℓ} of size 2 in precisely one element; (ix) four variables t_i t_j t_k³t_ℓ³ arise only if {i,j}, {k,ℓ} are disjoint sets of size 2. Exercise 13.3B:
When n = 0, Δ = 1; Δ_n = ∏_{n≥i>j≥1}[t_i − t_j] = ∏_{n>j≥0}[t_n − t_j]·Δ_{n−1}; check that the extremely monic t_n − t_j are nonsingular. Exercise 13.12:
(1) If we use S ↔ T to indicate that the operators S, T commute, show V_x, V_{x²}, U_x ↔ V_x, V_{x²}, U_x; show U_x y = x²·y; show U_{U_x y, y} = V_{x²} U_y; show U_{{x,y,1}} = (2U_x + V_x² − V_{x²})U_y. (3) Macdonald. (4) L_e = L_{e²} commutes with L_f = L_{f²} since e ∈ J_2(e), f ∈ J_0(e), and use Peirce Associativity 8.3 and Orthogonality 7.5(4). Problem 13.2: (4) Try any x, y ∈ A := εA′ for ε ∈ Φ with ε³ = 0. Problem 13.1:
Chapter 14. Multiple Peirce Consequences
(1) By Peirce Orthogonality 13.7(1), x·y = Σ_i x_{ii}·y_{ii} = Σ_i e_i = 1; x²·y = Σ_i x_{ii}²·y_{ii} = Σ_i x_{ii} = x. (2) Taking the component in J_{ii} of 1 = x·y = (Σ_i x_{ii} + Σ_{i<j} x_{ij})·y and x = x²·y gives e_i = x_{ii}·y_{ii}, x_{ii} = x_{ii}²·y_{ii}. Chapter 15. Hermitian Symmetries Exercise 14.5:
(1) The Supplementary Rule 15.1(1) and h_{ii}² = h_{ii} in 15.1(2) hold by definition; show h_{ij} ∈ J_{ij}, so {h_{ii}, h_{ij}} = h_{ij}. (2) Deduce Hermitian Orthogonality 15.1(3) from Peirce Orthogonality. (3) Establish h_{ij}² = h_{ii} + h_{jj} (i ≠ j) by considering the cases (1) i or j equal to 1, (2) i, j, 1 distinct. (5) Establish {h_{ij}, h_{jk}} = h_{ik} by considering separately the cases (i) (j = 1) {v_{1j}, v_{1k}}, (ii) (i = 1 ≠ j, k) {h_{ij}, h_{jk}} = {v_{1j}, {v_{1j}, v_{1k}}}, (iii) (dually if i, j ≠ 1 = k), (iv) (i, j, k, 1 distinct) {h_{ij}, h_{jk}} = {{v_{1i}, v_{1j}}, {v_{1j}, v_{1k}}}. Exercise 15.3:
The explicit actions in (2) [except for U^{(ij)} on J_{ij}] and (3) involve only the h_{ij} and x_{kℓ} in distinct Peirce spaces. Exercise 15.4A:
Exercise 15.4B: For Index Permutation (3), if {k,ℓ} ∩ {i,j} has size 0 we have U^{(ij)} = 1 on h_{kℓ} by Action (2); if {k,ℓ} ∩ {i,j} = {i,j} has size 2 we have U^{(ij)} h_{kℓ} = U_{h_{ij}} h_{ij} [by (2)] = h_{ij} [by 9.3]; and if {k,ℓ} ∩ {i,j} = j has size 1 (dually if {k,ℓ} ∩ {i,j} = i) then by (2) U^{(ij)} h_{kℓ} = V_{h_{ij}} h_{jℓ} = {h_{ij}, h_{jℓ}} = h_{iℓ} by the Hermitian Product Rules 15.2. Chapter 16. The Coordinate Algebra
1_L = e a left unit implies 1_R = e* is a right unit, and always if 1_L, 1_R exist they are equal and are the unit, so e = e*. Exercise 16.3A:
(1) Æ0 (a11 ) = Uh12 Vh12 (a11 ) = Uh212 ;h12 (a11 ) = fa11 ; h12 g = Æ0 (a11 ). (2) d d = fU (23) (d); U (13) U (12) (d)g = fU (23) (d); U (23) U (13) (d)g = U (23) fd, U (13) (d)g = U (23) fd; fd; h13 gg = U (23) fd2 ; h13 g = U (23) fq11 (d); h13 g = fq11 (d); U (23) (h13 )g = fq11 (d); h12 g = Æ0 (q11 (d)). (3) Æ0 (a11 )d = fU (23) (fa11 ; h12 g); U (13) (d)g = ffa11; h13 g; U (13) (d)g = fa11 ; fh13 ; U (13) (d)gg = fa11 ; U (13) U (13) (d)g = fa11 ; dg. (4) Æ0 (a11 ) Æ0 (a11 ) = fa11 ; Æ0 (a11 )g = fa11 ; fa11; h12 gg = fa211 ; h12 gg = Æ0 (a211 ). (5) Æ0 (a11 ) = 0 () 0 = q11 (h12 ; fh12 ; a11 g) = fa11 ; q11 (h12 )g = fa11 ; h11 g = 2a11 $ 0 = a11 . Chapter 18. Von Neumann Regularity Exercise 16.3B:
(2) vNr 18.2(2) shows x is vNr iff (x] = [x]. Always U_{(x]}Ĵ ⊆ (x) ⊆ (x]. Exercise 18.2:
(1) Hint: expand UTb(1+x) = Tb 2 1 + Vx + Ux Tb evaluated h on 1 + y, and collect coeÆcients of like i j . Answer: (1) (0 0 ) : T UxT y = UT (x)y (i.e. the given structural linkage), (1 0 ) : T VxT = Ut^;T (x) [= VT (x) + Ut;T (x)]; (2 0 ) : T T = Ut^ [= 2 1J + Vt + Ut]; (0 1 ) : T (x)2 = T (Uxt^ ) [= T (x2 )+i T (Ux t )]; (1 1 ) : Vt^T = T Vt^ [Vt T = T Vt ]; (2 1 ) : Tb(t^ ) = t^2 [T (t) = t + t2 ] . (2) Combine the conditions of (1) for T and for T . Chapter 19. Inner Simplicity Question 18.2:
(1) The element b is paired with c = (1 + d)b(1 + d) since bcb = b(1 + d)b(1 + d)b = bdbdb (by nilpotence b² = 0) = b (by pairing bdb = b) and cbc = (1 + d)b(1 + d)b(1 + d)b(1 + d) = (1 + d)b(1 + d) (by the above) = c. So far we have only used b² = 0, but to get c² = 2c we need to assume d² = 0 too: c² = (1 + d)(b(1 + d)²b)(1 + d) = (1 + d)(b(1 + 2d)b)(1 + d) (by nilpotence d² = 0) = (1 + d)(2bdb)(1 + d) (by nilpotence b² = 0) = 2(1 + d)(b)(1 + d) = 2c. Therefore e = ½c is idempotent with (e] = (c] structurally paired with (b] by Principal Pairing 18.9, and again e is a simple idempotent. Chapter 20. Capacity Exercise 19.3B:
Problem 20.1: (1) x ∈ J_0(ẽ), e ∈ J_2(ẽ) ⇒ {e, x} = 0 by Peirce Orthogonality. (2) e_{n+1} ∈ J_0(e) ∩ J_2(ẽ).
1. HINTS FOR PART III
535
(1) Replace J by J_2(e_1 + e_2), which remains nondegenerate by Diagonal Inheritance (9.1.2). (3) Use (8.4.5). (4) Use nondegeneracy. (5) Any v with q_2(v) ≠ 0 automatically has q_0(v) ≠ 0 as well (using (3)), hence the q_i(v) are invertible in the division algebras J_i, so v² (hence v) is invertible. Problem 20.2:
Problem 20.3: (3) Use the identity v² = {v_{ij}, v_{jk}}² = {v_{ij}, U_{v_{jk}}(v_{ij})} + U_{v_{jk}}(v_{ij}²) + U_{v_{ij}}(v_{jk}²) (by Macdonald).
(1) U_{e_i}K ≠ 0 ⇒ e_i ∈ K ⇒ all e_j ∈ K ⇒ 1 ∈ K. (2) Work with e = e_i in the nondegenerate subalgebra J′ = J_2(e_i + e_j) where U_{e_i,e_j}K = K_1; show (1) ⇒ all q_r(K_1) = 0 ⇒ U_{K_1}(J′) = 0 (using the q-Lemma (8.5.3)) ⇒ K_1 = 0 (using nondegeneracy). Problem 20.4:
Chapter 21. Herstein-Kleinfeld-Osborn Theorem
Exercise 21.1: (2) Left alternativity implies x ↦ L_x is a monomorphism A^+ → End(Â)^+ of linear Jordan algebras, hence automatically of quadratic Jordan algebras when ½ ∈ Φ, where 2U_x y := x(xy + yx) + (xy + yx)x − (x²y + yx²) = 2x(yx) by Left and linearized Left Moufang.
Its square b² is symmetric and still invertible, hence with symmetric inverse, so b^{-1} = (b²)^{-1}b ∈ HB ⊆ B.] Exercise 21.3B:
For norm composition (4), [n(xy) − n(x)n(y)]1 = n(xy)1 − (n(x)1)(n(y)1) = (xy)(x̄ȳ‾) − (xx̄)(n(y)1) = −[x, y, ȳx̄] + x(y(ȳx̄)) − x(n(y)x̄) (since the bar is an involution and n(y) is a scalar) = −[x, y, ȳx̄] − x[y, ȳ, x̄] (removing a bar) = [x, y, yx] − x[y, y, x] (removing two bars) = [x, xy, y] − 0 (by alternativity) = (x²y)y − x(xy²) (by alternativity) = x²y² − x²y² (by alternativity again) = 0. For Kirmse (5), n(x)y − x̄(xy) = [x̄, x, y] = −[x, x, y] (removing a bar) = 0 (by left alternativity), and dually on the right. Exercise 21.4:
The centroid forms a division algebra by a standard Schur's-Lemma argument: its nonzero elements are bijective, because Ker(T) and Im(T) are ideals, respectively not A and not 0 if T ≠ 0, therefore Ker(T) = 0 and Im(T) = A by simplicity. Bijectivity guarantees the inverse map T^{-1} exists, and it clearly also commutes with multiplications and so lies in the centroid too. Simplicity also guarantees AA = A, which by a standard Hiding Trick forces the centroid to be commutative (hence a field): T_1T_2(ab) = T_1(aT_2(b)) = T_1(a)T_2(b) = T_2(T_1(a)b) = T_2T_1(ab). Problem 21.1:
(I)(3) Some commutator γ = [α, β] ∈ Δ is invertible, so H generates (γ^{-1}, γ^{-1})((α, α)(β, β) − (βα, βα)) = (1, 0), also (1, 1) − (1, 0) = (0, 1), so all (δ, δ)(1, 0) = (δ, 0) and (δ, δ)(0, 1) = (0, δ), hence all of (Δ, 0) + (0, Δ) = (Δ, Δ) = D. (II) An associative division algebra with non-central involution is symmetrically generated by Step 4 of the proof of Herstein-Kleinfeld-Osborn 21.5. (III) H(D, −) equals the *-center Ω, and the subalgebra generated by H is Ω, so D is symmetrically generated iff D = H, i.e. the involution is trivial. Problem 21.2:
(1) It suffices to prove it for L; from Left and Middle Moufang, L_x L_y² L_x = L_{xy²x} = L_{(xy)(yx)} = L_1 = 1 shows L_x has both a left and right inverse, hence is invertible, so from L_x L_y L_x = L_{xyx} = L_x we can cancel to get L_x L_y = L_y L_x = 1. Problem 21.3:
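The operator chain in this hint, written out as a display (a sketch, assuming y is an inverse for x, so xyx = x and xy²x = 1):

```latex
L_x L_y^2 L_x \;=\; L_{x y^2 x} \;=\; L_{(xy)(yx)} \;=\; L_1 \;=\; 1_A
\qquad\text{(Left and Middle Moufang)},
% so L_x has a two-sided inverse, hence is invertible; then cancelling
% L_x in  L_x L_y L_x = L_{xyx} = L_x  gives
L_x L_y \;=\; L_y L_x \;=\; 1_A .
```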
(2) For idealness (zx)y = z (yx), argue that (zx)y = y(zx) = z (yx) + (yz + z y)x = z (yx) + (z (y + y))x = z ( yx + (y + y)x) = z (yx). For triviality, use the Middle Moufang Identity (zx)(yz ) = z (xy)z , valid in all alternative algebras, or argue directly (zx)(yz ) = z ((yz )x) = z ((z y)x) = z (z (xy)) = z 2(xy) = 0. Problem 21.4:
(4) Hide m in an associator till the coast is clear: nm[x; y; z ] = n[mx; y; z ] = [mx; y; z ]n = m[x; y; z ]n = mn[x; y; z ]. Problem 21.5:
(5) For z̄z = zz̄ = 0, note they can't both be invertible by (4), and if one is zero so is the other by (1) with x, y taken to be z, z̄. For z̄Hz = 0, show it lies in H and kills z̄, or is not invertible by (1) with x = hz, y = z̄. (6) Otherwise I := D̂xD̂, K := D̂yD̂ would be nonzero ideals which kill each other if xy = 0, and if xy ≠ 0 then I := D̂xyD̂ would be a nonzero ideal which kills itself. Problem 21.6A:
Question 21.1: Use the results from Question 21.2.
Question 21.2: (1) Use the results from (2). (2) Think of the left regular representation x ↦ L_x and the left Moufang identity (cf. the hint for Problem 21.2).
Chapter 22. Osborn's Capacity Two Theorem
(1) q_0([V_{a_2}, V_{b_2}]x_1, y_1) = q_0(x_1, [V_{b_2}, V_{a_2}]y_1) by the U1q-Rule 9.5(2) twice. (2) Setting z_1 := [[a_2, b_2], x_1] for short, compute 2q_0(z_1) = q_0(z_1, z_1) = q_0([V_{a_2}, V_{b_2}]x_1, z_1) = q_0(x_1, [V_{b_2}, V_{a_2}]z_1) (by (1)) = q_0(x_1, [V_{a_2}, V_{b_2}]²(x_1)) = q_0(x_1, V_{[a_2,b_2]²}(x_1)) (by Peirce Specialization 9.1, since [a, b]² is a Jordan product) = {x_1, [a_2, b_2]², x_1} (by the U1q-Rule 9.5(2) again) = 2U_{x_1}[a_2, b_2]². Exercise 22.1B:
2. Hints for Part IV Chapter 1. The Radical Exercise 1.6:
Try z = 1.
Exercise 1.9: (1) Check that the given proof of 1.9(2) uses only weak structurality.
Exercise 1.10: Show T* is structural with (T*)* = T.
Problem 1.2: Since the quasi-inverse is unique, it suffices to check that Σ_{n=0}^∞ t^n x^n is indeed the inverse of 1 − tx in the formal power series algebra.
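The verification this hint asks for is the usual geometric-series telescoping (a sketch):

```latex
(1 - tx)\sum_{n=0}^{\infty} t^n x^n
  \;=\; \sum_{n=0}^{\infty} t^n x^n \;-\; \sum_{n=1}^{\infty} t^n x^n
  \;=\; 1,
% and symmetrically on the other side, so in the power series algebra
% tx is quasi-invertible with (1 - tx)^{-1} = \sum_{n \ge 0} t^n x^n.
```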
Show (x, y) q.i. in J^{(z)} iff x is q.i. in (J^{(z)})^{(y)} by J^{(U_z y)} = (J^{(z)})^{(y)}, and also iff {x, y}^{(z)} − U_x^{(z)} y^{(2,z)} is q.i. in J by 1.8(4)(vi). Problem 1.4:
Set v := u^{-1} for typographical convenience. (1) u − x = 1^{(v)} − x is invertible in J iff it is in J^{(v)}, i.e. iff x is q.i. in J^{(v)}. (2) Apply (1) to u = 1 − y in the unital hull. Problem 1.7: If f is homogeneous of degree d_i in x_i let J̃ = Ĵ[ε_1, …, ε_n] where ε_i^{d_i+1} = 0 [realized as Ĵ[t_1, …, t_n]/⟨t_1^{d_1+1}, …, t_n^{d_n+1}⟩], so that u_i := 1 + t_i x_i is invertible in J̃. Then the coefficient of ε_1^{d_1}⋯ε_n^{d_n} in f(u_1, …, u_n) = 0 is f(x_1, …, x_n). Problem 1.5:
(2) B;x;y = U(y1)(y) x ; UB;x;y (z) Uy = UU(y()y) (1(y) x)(z); Uz B;y;x Uy = Uz Uy B;x;y = Uz(y)U(y1)(y) x . (3) UB;x;y (Ux w) = UUx Ux (y) (w) = Ux Ux(y) Uw Ux Ux(y) = B;x;y Ux Uw UxB;y;x = B;x;y UUx (w) B;y;x: Problem 1.8:
(1) fz; xg = fz; x; ^1g; Vz;x + Vx;z = Vfx;yg ; Uz x = 12 Vz;x z; Ux z = (2) zx = xz; zxy = yxz = +yxz for all generators y 2 J implies zx is central in A, yet (zx)2 = (zxz )x = 0; which forces zx = 0 in a semiprime associative ring. (3) Use linearized Triple Shift (FIIIe) VU (z)a;x = VU (z)x;a + Vz;fx;z;ag . (4) Use symmetry z 2 Z (x) ) x 2 Z (z ) ) Ux y 2 Z (z ) [by innerness] ) z 2 Z (Ux y): (5) If z 0 = Uz y 6= 0 it is nonzero in R and has larger annihilator by (4), so by maximality the two annihilators must coincide. (6) Induct; if y(n;z) 62 Z ) (z) then y(n;z) 62 Z ann(Uz y) ) 0 6= Vy(n;z) ;Uz y = Vy((zn;z ) ;y = Vy(n+1;z) [by operator power-associativity in the z -homotope] = Vy(n+1) ;z) ) y(n+1) 62 Z : (7) Then J is algebraic, and we can apply Radical Nilness 2.5; z 2 R ) J(z) radical ) J(z) nil.
Problem 1.9:
1 V x. 2 x;z
The only minimal inner ideals B in a nondegenerate algebra are of Type II, III, determined by an idempotent or structurally paired to an idempotent; if B ⊆ R it can't contain an idempotent. If b ∈ R is paired with d then d = U_d b ∈ R too, so D = U_d J can't contain an idempotent either. Examining the proof of II.19.3, we see that every minimal B is regularly (not just structurally) paired with a D of idempotent type, so no B's exist and R = 0. Problem 1.10:
Chapter 2. Begetting and Bounding Idempotents Exercise 2.1A:
(2) If e = ax is idempotent then f = xea is idempotent with afx = e.
Exercise 2.1B: U_{[b]}[b] ⊆ (b) by U_{b+U(b)a} = U_b B_{−a,b}.
Exercise 2.6: f < g ⇔ 1 − f > 1 − g ⇔ (1 − f)∘(1 − g) = 1 − f − g + f∘g = 1 − g.
Chapter 3. Bounded Spectra Beget Capacity
Exercise 3.1A: (5) Show w ∈ I: λ ∈ Res_J(z) is nonzero, and if z ∈ I and λ = 0 ∈ Res_J(z) then I = J, so trivially 0 ∈ Res_I(z).
Problem 3.3: (1) is just vector space theory. (2) Follows from (1) since |Λ| > [J : Φ].
Problem 3.4: Repeat the Jordan proof, using in (1) C_i := ∩_{k≠i} B_k in place of C_{ij}, and in (2) B := R_{1−z}A = A(1 − z) in place of U_{1−z}J. (3) If λ ∈ f-Spec_{Ω,A}(z(s)) then in particular f vanishes at z(s), and the polynomial g(t) = f(t − z(s)) vanishes at λ ∈ Ω, where g(t) of degree N in Ω[t] (Ω := Φ[s] an integral domain) cannot have more than N roots in Ω. Chapter 4. Absorbers of Inner Ideals
Exercise 4.4: The intersection I := ∩ I_α is co-whatever-special because J/I ↪ ∏ J/I_α is isomorphic to a subalgebra of a direct product, and subalgebras and direct products inherit whatever-speciality (it is quotients that preserve i-speciality but not speciality).
V_{r,s}(U_z x) = (U_{V_{r,s}z, z} − U_z V_{s,r})x ∈ q(B) [by Fundamental Lie III.5.7(FII) and double absorption by z] and U_r(U_z x) = (U_{{r,z}} + U_{U_r z, z} + U_z U_r − U_{{r,z},z} V_r + U_z V_r²)x [by Macdonald, or taking coefficients of λ² in the linearized Fundamental Formula III.5.7] ⊆ U_B(Ĵ) ⊆ B [by innerness and absorption: {r, z}, U_r z ∈ B.] Exercise 4.5A:
Exercise 4.5B:
See the proof of (2) in the Quadratic Absorber Theorem 4.5.
Write ( 21 )( 21 1) : : : ( 12 i + 1)= ( 21 )i (1)(3) : : : (2i 1) where (1)(3) : : : (2i 1) = (2i 1)!= (2)(4) : : : (2i 2) and (2)(4) : : : (2i 2) = 2i 1 (1)(2) : : : (i 1) = 2i 1 (i 1)!. Chapter 5. Primitivity Exercise 4.9:
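The two product identities used in the binomial computation above, (2)(4)⋯(2i−2) = 2^{i−1}(i−1)! and (1)(3)⋯(2i−1) = (2i−1)!/((2)(4)⋯(2i−2)), can be checked mechanically; this small script (my own check, not part of the text) verifies them for small i:

```python
from math import factorial

def odd_product(i):   # 1 * 3 * ... * (2i-1)
    p = 1
    for k in range(1, 2 * i, 2):
        p *= k
    return p

def even_product(i):  # 2 * 4 * ... * (2i-2); empty product = 1 when i = 1
    p = 1
    for k in range(2, 2 * i - 1, 2):
        p *= k
    return p

for i in range(1, 12):
    assert even_product(i) == 2 ** (i - 1) * factorial(i - 1)
    assert odd_product(i) == factorial(2 * i - 1) // even_product(i)
```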
Exercise 5.1: (2) U_{1−c}1 is never in B ⊆ J. (3) Modularity of the hull B̂ requires (1 − c)² = (1 − c) + (c² − c) and {1 − c, Ĵ, 1 − c} = 2U_{1−c}Ĵ to lie in B̂. (4) The modularity terms (i)-(iii) for the contraction fall in J and are in B_0 by hypothesis. Note that if c − c² belongs to B_0 as well as U_{1−c}1, then 1 − c must belong too, so B̂ ⊆ B̂_0. Conversely, if b̂′ = λ1 + x ∈ B̂_0 for x ∈ J then b̂′ − λ(1 − c) = λc + x ∈ B̂_0 ∩ J = B, so b̂′ = λ(1 − c) + b ∈ B̂.
Problem 5.2:
(1) Note fz 2; y2 g 2 Rad(J) core(B) B.
Problem 5.3: Consider inverses of 1 − tz in the power series algebra Ĵ[[t]] and the polynomial subalgebra Ĵ[t].
(1) If the result holds for J̄ := J/I, then z p.n.b. in J̃ ⇒ z̄ p.n.b. in J̄[T] = J̃/Ĩ ⇒ z̄ = 0 ⇒ z ∈ Ĩ ∩ J = I. (3) Choose (by infiniteness of T) a t which doesn't appear in the polynomial ỹ; then identify coefficients of t in z^{(2n, ỹ+tx)} = 0 to conclude U_{z^{(n−1,ỹ)}} x = 0 for all x ∈ J, so U_{z^{(n−1,ỹ)}} J̃ = 0, forcing z^{(n−1,ỹ)} = 0 by (2).
Problem 5.4:
Exercise 5.8: Note z^{(2n,ỹ)} = (z^{(n,ỹ)})^{(2,ỹ)} and z^{(2n+k,ỹ)} = U_{z^{(n,ỹ)}} U_ỹ z^{(k,ỹ)} for k ≥ 1 [so the terms with k ≥ 1 vanish].
Chapter 7. Filters and Ultrafilters
If Y_i ⊇ N∖F_i then Y_1 ∩ Y_2 ⊇ N∖(F_1 ∪ F_2). Any set with finite complement consisting of n_1 < … < n_r contains (n_r, ∞). Problem 7.2:
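The first containment is elementary set algebra; this small check (illustrative only, on a finite stand-in for N) confirms that cofinite sets are closed under intersection:

```python
N = set(range(100))               # finite stand-in for the natural numbers
F1, F2 = {3, 17}, {17, 42, 98}    # "finite" exceptional sets
Y1, Y2 = N - F1, N - F2           # the cofinite sets N \ F1 and N \ F2

# Y1 ∩ Y2 = N \ (F1 ∪ F2) when Yi = N \ Fi exactly:
assert Y1 & Y2 == N - (F1 | F2)

# more generally, if Yi ⊇ N \ Fi then Y1 ∩ Y2 ⊇ N \ (F1 ∪ F2):
Y1b, Y2b = Y1 | {3}, Y2 | {42}
assert (N - (F1 | F2)) <= (Y1b & Y2b)
```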
Problem 7.3: If F = {x_1} ∪ … ∪ {x_n} ∈ F then some {x_{i_0}} ∈ F and F = {Y | x_{i_0} ∈ Y} is the principal ultrafilter F_{x_{i_0}}.
Problem 7.4: (F1) Y_1 ∩ Y_2 ⊇ U_1 ∩ U_2 is still open containing x_0; (F2) Y′ ⊇ Y ⊇ U ⇒ Y′ ⊇ U; (F3) x_0 ∉ ∅ so ∅ ∉ F.
Chapter 8. Ultraproducts
(1) z(y) ≠ 0 ⇔ z(y) ∉ Rad(Q_y) ⇔ either (i) Q_y(z(y)) ≠ 0 or (ii) there exists a(y) with Q_y(z(y), a(y)) ≠ 0, i.e. y ∈ Y_1 ∪ Y_2. By the basic property of ultrafilters, supp(z) = Y = Y_1 ∪ Y_2 ∈ F ⇔ Y_1 ∈ F or Y_2 ∈ F ⇔ Q′(z′) ≠ 0′ or some Q′(z′, a′) ≠ 0′. (2) Similarly, z′ ∈ Rad(Q′) ⇔ Zer(Q(z)) = {x ∈ X | Q_x(z(x)) = 0} and Zer(Q(z, ·)) = {x ∈ X | Q_x(z(x), V_x) = 0} ∈ F ⇔ Zer(Q(z)) ∩ Zer(Q(z, ·)) ∈ F ⇔ {x | z(x) ∈ Rad(Q_x)} ∈ F ⇔ z ≡_F w ∈ ∏ Rad(Q_x). Question 8.1: All answers except to the first are "Yes". Chapter 9. The Final Argument Exercise 8.6:
Note that characteristic p is finitely defined (p·1 = 0) whereas characteristic 0 is not (for all n > 0, n·1 ≠ 0). In ∏_p Z_p/F, the characteristic will always be 0 UNLESS the filter is the principal ultrafilter determined by some particular p, in which case the ultraproduct is Z_p again. In general, the ultraproduct ∏ Φ_x/F will have characteristic 0 unless the filter is "homogeneous", containing the set X_p := {x ∈ X | Φ_x has characteristic p}. Question 9.2:
APPENDIX F
Indexes
1. Pronouncing Index
If I were delivering these lectures in person, you would hear an approximately correct pronunciation (now that Ottmar Loos and Holger Petersson, Professor Higgins-like, have polished up my German). But in aural absence I include this guide to prevent readers (especially Americans) from embarrassing mistakes. The names of most American mathematicians in our story are pronounced in the good old American way, and won't merit much explication.
Adrian A. Albert: AY-dree-uhn AL-burt
Eric Alfsen: eric AHLF-zen
Shimshon Amitsur: shim-shohn AHH-mee-tsur
Emil Artin: AY-meel AHHR-teen (but a short ahhr, heading towards uhhr)
Stefan Bergmann: shtef-fahn BAIRG-mun
Arthur Cayley: arthur KAY-lee
Paul Cohn: paul CONE
Eugene Dickson: yew-jeen DICK-son
John Faulkner: John FAHLK-ner
Hans Freudenthal: Hunnts FROY-den-tahl
Charles Glennie: charles GLENN-ie
William Hamilton: William Hamilton
Israel N. Herstein: hurr-styne (known to his close friends as "Yitz")
L. K. Hua: Whoo-ah ??
Adolf Hurwitz: adolf HOOR-vitz (but don't overdoo the oo)
Nathan Jacobson: nay-thun JAY-kub-son ("Jake" to his friends)
Pascual Jordan: pass-kwahl YOR-dahn
Victor Kac: victor KAHTZ
Issai Kantor: ISS-eye KAHN-tor
Irving Kaplansky: irving kap-LANN-ski ("Kap" to his close friends)
Wilhelm Kaup: vill-helm COW-pp ??
Kirmse: KEERM-ze
Erwin Kleinfeld: KLINE-fellt
Max Koecher: muks KOEU-cchherr (oeur as in Fr. Sacre Coeur, cchh as in Scot. locchh)
Yuri Medvedev: yoo-ree med-VAY-devv
Ruth Moufang: root MOO-fung
Sophus Lie: SO-foos LEE
Ottmar Loos: ought-marr LOW-ss
Kurt Meyberg: koort MY-bairg
Erhard Neher: air-hart NAY-(h)urr
J. Marshall Osborn: marshall OZZ-born
Sergei Pchelintsev: tsair-gay chel-EEN-tsevv
Benjamin Peirce: benjamin PURSS
Holger Petersson: HOLL-gerr PAY-ter-sun
Frederic Schultz: frederic SHULtz
Ivan Shestakov: ee-VAHN SHES-ta-kovv
Arkady Slin'ko: ahr-KAH-dee SLEENG-ko
Tonny Springer: Tonny SHPRING-err
Erling Stormer: air-ling SHTERR-mer
Armin Thedy: ahr-meen TAY-dee
Jacques Tits: zhock TEETS
John von Neumann: john fon NOY-mun
Eugene Wigner: yew-jeen VIG-nerr
Efim Zel'manov: e-FEEM tsell-mahn-ovv
Max Zorn: muks TSORN (not ZZorn as in ZZorro)
2. Index of Named Theorems
3. Index of Definitions
4. Index of Terms