INTERSCIENCE TRACTS IN PURE AND APPLIED MATHEMATICS

Editors: L. Bers, R. Courant, J. J. Stoker

1. D. Montgomery and L. Zippin
   TOPOLOGICAL TRANSFORMATION GROUPS

2. Fritz John
   PLANE WAVES AND SPHERICAL MEANS
   Applied to Partial Differential Equations

3. E. Artin
   GEOMETRIC ALGEBRA

Additional volumes in preparation
INTERSCIENCE TRACTS IN PURE AND APPLIED MATHEMATICS

Editors: L. Bers, R. Courant, J. J. Stoker

Number 3

GEOMETRIC ALGEBRA

By E. Artin

INTERSCIENCE PUBLISHERS, INC., NEW YORK
INTERSCIENCE PUBLISHERS LTD., LONDON
GEOMETRIC ALGEBRA

E. ARTIN
Princeton University, Princeton, New Jersey

INTERSCIENCE PUBLISHERS, INC., NEW YORK
INTERSCIENCE PUBLISHERS LTD., LONDON

1957 BY INTERSCIENCE PUBLISHERS, INC.
LIBRARY OF CONGRESS CATALOG CARD NUMBER 57-6109

INTERSCIENCE PUBLISHERS, INC., 250 Fifth Avenue, New York 1, N. Y.

For Great Britain and Northern Ireland:
INTERSCIENCE PUBLISHERS LTD., 88/90 Chancery Lane, London, W.C. 2, England

PRINTED IN THE UNITED STATES OF AMERICA

TO NATASCHA
SUGGESTIONS FOR USE OF BOOK

and IV. The problems of Chapter IV are investigated for the groups introduced in Chapter III. Any one of these chapters contains too much material for an advanced undergraduate course or seminar. I could make the following suggestions for the content of such courses.

1) The easier parts of Chapter II.

2) The linear algebra of the first five paragraphs of Chapter I, followed by selected topics from Chapter III, either on orthogonal or on symplectic geometry.

3) The fundamental theorem of projective geometry, followed by some parts of Chapter IV.

4) Chapter III, but with the following modification: All that is needed from §4 of Chapter I is the statement: If W* is the space orthogonal to a subspace W of a non-singular space V, then dim W + dim W* = dim V. This statement could be obtained from the naive theory of linear equations and the instructor could supply a proof of it. Our statement implies then W** = W and no further reference to §4 of Chapter I is needed.
CONTENTS

PREFACE  v
SUGGESTIONS FOR THE USE OF THIS BOOK  vii

CHAPTER I
Preliminary Notions
1. Notions of set theory  1
2. Theorems on vector spaces  4
3. More detailed structure of homomorphisms  10
4. Duality and pairing  16
5. Linear equations  23
6. Suggestions for an exercise  28
7. Notions of group theory  29
8. Notions of field theory  33
9. Ordered fields  40
10. Valuations  47

CHAPTER II
Affine and Projective Geometry
1. Introduction and the first three axioms  51
2. Dilatations and translations  54
3. Construction of the field  58
4. Introduction of coordinates  63
5. Affine geometry based on a given field  66
6. Desargues' theorem  70
7. Pappus' theorem and the commutative law  73
8. Ordered geometry  75
9. Harmonic points  79
10. The fundamental theorem of projective geometry  85
11. The projective plane  98

CHAPTER III
Symplectic and Orthogonal Geometry
1. Metric structures on vector spaces  105
2. Definitions of symplectic and orthogonal geometry  110
3. Common features of orthogonal and symplectic geometry  114
4. Special features of orthogonal geometry  126
5. Special features of symplectic geometry  136
6. Geometry over finite fields  143
7. Geometry over ordered fields - Sylvester's theorem  148
CHAPTER IV
The General Linear Group
1. Non-commutative determinants  151
2. The structure of GLₙ(k)  158
3. Vector spaces over finite fields  169

CHAPTER V
The Structure of Symplectic and Orthogonal Groups
1. Structure of the symplectic group
2. The orthogonal group of euclidean space
3. Elliptic spaces
4. The Clifford algebra
5. The spinorial norm
6. The cases dim V ≤ 4
7. The structure of the group Ω(V)

CHAPTER I

Preliminary Notions

1. Notions of set theory

If f is a map from S to T we write f: S → T. If f: S → T and g: T → U we also write S → T → U. If s ∈ S then we can form g(f(s)) ∈ U and thus obtain a map from S to U denoted by gf: S → U. Notice that the associative law holds trivially for these "products" of maps. The order of the two factors in gf comes from the notation f(s) for the image of the element s. Had we written (s)f instead of f(s), it would have been natural to write fg instead of gf. Although we will stick (with rare exceptions) to the notation f(s) the reader should be able to do everything in the reversed notation. Sometimes it is even convenient to write sᶠ instead of f(s) and we should notice that in this notation (sᶠ)ᵍ = sᶠᵍ.
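The order convention for products of maps can be seen in a two-line computation; a small illustrative sketch (the particular maps are my own choice):

```python
f = lambda s: s + 1      # an illustrative map on the integers
g = lambda t: 2 * t      # a second map

# The product gf means: apply f first, then g, matching g(f(s)).
gf = lambda s: g(f(s))
assert gf(3) == 8        # g(f(3)) = g(4) = 8

# In the reversed exponent notation s^f, the same composite would be
# written fg, since (s^f)^g applies f first and then g.
```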
If f: S → T and S₀ ⊂ S, then the set of all images of elements of S₀ is denoted by f(S₀); it is called the image of S₀. This can be done particularly for S itself. Then f(S) ⊂ T; should f(S) = T we call the map onto and say that f maps S onto T.

Let T₀ be a subset of T. The set of all s ∈ S for which f(s) ∈ T₀ is called the inverse image of T₀ and is denoted by f⁻¹(T₀). Notice that f⁻¹(T₀) may very well be empty, even if T₀ is not empty. Remember also that f⁻¹ is not a map. By f⁻¹(t) for a certain t ∈ T we mean the inverse image of the set {t} with the one element t. It may happen that f⁻¹(t) never contains more than one element. Then we say that f is a one-to-one into map. If f is onto and one-to-one into, then we say that f is one-to-one onto, or a "one-to-one correspondence." In this case only can f⁻¹ be interpreted as a map f⁻¹: T → S and it is also one-to-one onto. Notice that f⁻¹f: S → S and ff⁻¹: T → T and that both maps are identity maps on S respectively T.

If t₁, t₂ are elements of T, then the sets f⁻¹(t₁) and f⁻¹(t₂) are disjoint. If s is a given element of S and f(s) = t, then s will be in f⁻¹(t), which shows that S is the disjoint union of all the sets f⁻¹(t).

Some of the sets f⁻¹(t) may be empty. Keep only the non-empty ones and call S_f the set whose elements are these non-empty sets f⁻¹(t). Notice that the elements of S_f are sets and not elements of S. S_f is called a quotient set and its elements are also called equivalence classes. Thus, s₁ and s₂ are in the same equivalence class if and only if f(s₁) = f(s₂). Any given element s lies in precisely one equivalence class; if f(s) = t, then the equivalence class of s is f⁻¹(t).

We construct now a map f₁: S → S_f by mapping each s ∈ S onto its equivalence class. Thus, if f(s) = t, then f₁(s) = f⁻¹(t). This map is an onto map.

Next we construct a map f₂: S_f → f(S) by mapping the non-empty equivalence class f⁻¹(t) onto the element t ∈ f(S). If t ∈ f(S), hence t = f(s), then t is the image of the equivalence class f⁻¹(t) and of no other. This map f₂ is therefore one-to-one and onto. If s ∈ S and f(s) = t, then f₁(s) = f⁻¹(t) and the image of f⁻¹(t) under the map f₂ is t. Therefore, f₂f₁(s) = f(s).

Finally we construct a very trivial map f₃: f(S) → T by setting f₃(t) = t for t ∈ f(S). This map should not be called identity since it is a map of a subset into a possibly bigger set T. A map of this kind is called an injection and is of course one-to-one into. For s ∈ S with f(s) = t we had f₂f₁(s) = t and thus f₃f₂f₁(s) = t = f(s). We have S → S_f → f(S) → T (under f₁, f₂, f₃ respectively), so that f₃f₂f₁: S → T. We see that our original map f is factored into three maps: f = f₃f₂f₁. To repeat: f₁ is onto, f₂ is a one-to-one correspondence and f₃ is one-to-one into. We will call this the canonical factoring of the map f.

The word "canonical" or also "natural" is applied in a rather loose sense to any mathematical construction which is unique in as much as no free choices of objects are used in it.
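For a map between finite sets the canonical factoring can be computed explicitly; a sketch (the function below and the sample map n ↦ n mod 3 are illustrative choices, not from the text):

```python
def canonical_factoring(f, S):
    # Build S_f, the quotient set of non-empty inverse images f^-1(t).
    classes = {}
    for s in S:
        classes.setdefault(f(s), []).append(s)
    S_f = {t: frozenset(members) for t, members in classes.items()}

    f1 = lambda s: S_f[f(s)]        # onto: each s goes to its equivalence class
    f2 = {S_f[t]: t for t in S_f}   # one-to-one correspondence S_f -> f(S)
    f3 = lambda t: t                # injection f(S) -> T
    return f1, f2, f3

f = lambda n: n % 3                 # a sample map from S = {0, ..., 8}
S = list(range(9))
f1, f2, f3 = canonical_factoring(f, S)

# f = f3 f2 f1: the composite of the three maps agrees with f on all of S
assert all(f3(f2[f1(s)]) == f(s) for s in S)
# f2 is one-to-one and onto: distinct classes go to distinct image elements
assert len(f2) == len(set(f2.values())) == 3
```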
As an example, let G and H be groups, and f: G → H a homomorphism of G into H, i.e., a map for which f(xy) = f(x)f(y) holds for all x, y ∈ G. Setting x = y = 1 (unit of G) we obtain f(1) = 1 (unit in H). Putting y = x⁻¹ we obtain next f(x⁻¹) = (f(x))⁻¹. We will now describe the canonical factoring of f and must to this effect first find the quotient set G_f. The elements x and y are in the same equivalence class if and only if f(x) = f(y) or f(xy⁻¹) = 1 or also f(y⁻¹x) = 1; denoting by K the inverse image of 1 this means that both xy⁻¹ ∈ K and y⁻¹x ∈ K (or x ∈ Ky and x ∈ yK). The two cosets yK and Ky are therefore the same and the elements x which are equivalent to y form the coset yK. If we take y already in K, hence y in the equivalence class of 1, we obtain yK = K, so that K is a group. The equality of left and right cosets implies that K is an invariant subgroup and our quotient set merely the factor group G/K.

The map f₁ associates with each x ∈ G the coset xK as image: f₁(x) = xK. The point now is that f₁ is a homomorphism (onto). Indeed

f₁(xy) = xyK = xyK·K = x·Ky·K = xK·yK = f₁(x)f₁(y).

This map is called the canonical homomorphism of a group onto its factor group.

The map f₂ maps xK onto f(x): f₂(xK) = f(x). Since

f₂(xK·yK) = f₂(xyK) = f(xy) = f(x)f(y) = f₂(xK)f₂(yK),

it is a homomorphism. Since it is a one-to-one correspondence it is an isomorphism and yields the statement that the factor group G/K is isomorphic to the image group f(G). The invariant subgroup K of G is called the kernel of the map f. The map f₃ is just an injection and therefore an isomorphism into H.
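A concrete instance of this factoring, with G the additive group of integers mod 12 and f(x) = x mod 4 (an illustrative choice, written additively so that the kernel, the cosets, and G/K can be listed directly):

```python
G = list(range(12))
f = lambda x: x % 4                 # a homomorphism of additive groups Z_12 -> Z_4

K = [x for x in G if f(x) == 0]     # the kernel: inverse image of the unit (here 0)
assert K == [0, 4, 8]

coset = lambda x: frozenset((x + k) % 12 for k in K)
cosets = {coset(x) for x in G}      # the factor group G/K as a quotient set

# G/K has exactly as many elements as the image group f(G)
assert len(cosets) == len({f(x) for x in G}) == 4

# f1 is a homomorphism: the coset of x+y equals the elementwise sum of cosets
assert all(coset(x + y) == frozenset((a + b) % 12 for a in coset(x) for b in coset(y))
           for x in G for y in G)

# f2: xK -> f(x) is well defined: elements of the same coset have the same image
assert all(coset(x) != coset(y) or f(x) == f(y) for x in G for y in G)
```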
2. Theorems on vector spaces

We shall assume that the reader is familiar with the notion and the most elementary properties of a vector space but shall repeat its definition and discuss some aspects with which he may not have come into contact.

DEFINITION 1.1. A right vector space V over a field k (k need not be a commutative field) is an additive group together with a composition Aa of an element A ∈ V and an element a ∈ k such that Aa ∈ V and such that the following rules hold:

1) (A + B)a = Aa + Ba,
2) A(a + b) = Aa + Ab,
3) (Aa)b = A(ab),
4) A·1 = A,

where A, B ∈ V, a, b ∈ k and where 1 is the unit element of k. In case of a left vector space the composition is written aA and similar laws are supposed to hold.

Let V be a right vector space over k and S an arbitrary subset of V. By a linear combination of elements of S one means a finite sum A₁a₁ + A₂a₂ + ⋯ + Aᵣaᵣ of elements Aᵢ of S. It is easy to see that the set (S) of all linear combinations of elements of S forms a subspace of V and that (S) is the smallest subspace of V which contains S. If S is the empty set, we mean by (S) the smallest subspace of V which contains S and, since 0 is in any subspace, the space (S) consists of the zero vector alone. This subspace is also denoted by 0.

We call (S) the space generated (or spanned) by S and say that S is a system of generators of (S). A subset S is called independent if a linear combination A₁a₁ + A₂a₂ + ⋯ + Aᵣaᵣ of distinct elements of S is the zero vector only in the case when all aᵢ = 0. The empty set is therefore independent.

If S is independent and (S) = V, then S is called a basis of V. This means that every vector of V is a linear combination of distinct elements of S and that such an expression is unique up to trivial terms A·0. If T is independent and L is any system of generators of V, then T can be "completed" to a basis of V by elements of L. This means that there exists a subset L₀ of L which is disjoint from T such that the set T ∪ L₀ is a basis of V. The reader certainly knows this statement, at least when V is finite dimensional. The proof for the infinite dimensional case necessitates a transfinite axiom such as Zorn's lemma but a reader who is not familiar with it may restrict all the following considerations to the finite dimensional case.

If V has as basis a finite set S, then the number n of elements of S (n = 0 if S is empty) depends only on V and is called the dimension of V. We write n = dim V. This number n is then the maximal number of independent elements of V and any independent set T with n elements is a basis of V. If U is a subspace of V, then dim U ≤ dim V and the equal sign holds only for U = V. If V does not have such a finite basis, this fact is denoted by writing dim V = ∞. A proper subspace U of V may then still have dim U = ∞. (One could introduce a more refined definition of the dimension of V, namely the cardinality of a basis. We shall not use it, however, and warn the reader that certain statements we are going to make
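The completion of an independent set T to a basis by elements of a system of generators L can be carried out greedily, testing independence by row reduction; a sketch over the rational numbers (the helper functions are my own, not from the text):

```python
from fractions import Fraction

def is_independent(vectors):
    # Row-reduce over the rationals; independent iff rank == number of vectors.
    rows = [[Fraction(x) for x in v] for v in vectors]
    rank, col, n = 0, 0, len(vectors[0])
    while rank < len(rows) and col < n:
        pivot = next((r for r in range(rank, len(rows)) if rows[r][col] != 0), None)
        if pivot is None:
            col += 1
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for r in range(len(rows)):
            if r != rank and rows[r][col] != 0:
                factor = rows[r][col] / rows[rank][col]
                rows[r] = [a - factor * b for a, b in zip(rows[r], rows[rank])]
        rank += 1
        col += 1
    return rank == len(rows)

def complete_to_basis(T, L, dim):
    # Adjoin elements of L, one at a time, whenever independence is preserved.
    basis = list(T)
    for v in L:
        if len(basis) == dim:
            break
        if is_independent(basis + [v]):
            basis.append(v)
    return basis

T = [(1, 1, 0)]                          # an independent set
L = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]    # a system of generators of Q^3
print(complete_to_basis(T, L, 3))        # -> [(1, 1, 0), (1, 0, 0), (0, 0, 1)]
```

Note that (0, 1, 0) is skipped: it is a linear combination of the vectors already chosen, so the result is T ∪ L₀ with L₀ a subset of L disjoint from T, as in the statement above.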
Mathematical education is still suffering from the enthusiasms which the discovery of this isomorphism has aroused. The result has been that geometry was eliminated and replaced by computations. Instead of the intuitive maps of a space preserving addition and multiplication by scalars (these maps have an immediate geometric meaning), matrices have been introduced. From the innumerable absurdities, from a pedagogical point of view, let me point out one example and contrast it with the direct description.

Matrix method: A product of a matrix A and a vector X (which is then an n-tuple of numbers) is defined; it is also a vector. Now the poor student has to swallow the following definition: A vector X is called an eigen vector if a number λ exists such that

AX = λX.

Going through the formalism, the characteristic equation, one then ends up with theorems like: If a matrix A has n distinct eigen values, then a matrix D can be found such that DAD⁻¹ is a diagonal matrix. The student will of course learn all this since he will fail the course if he does not.

Instead one should argue like this: Given a linear transformation f of the space V into itself. Does there exist a line which is kept fixed by f? In order to include the eigen value 0 one should then modify the question by asking whether a line is mapped into itself. This means of course for a vector X spanning the line that

f(X) = λX.

Having thus motivated the problem, the matrix A describing f will enter only for a moment for the actual computation of λ. It should disappear again. Then one proves all the customary theorems without ever talking of matrices and asks the question: Suppose we can find a basis of V which consists of eigen vectors; what does this imply for the geometric description of f? Well, the space is stretched in the various directions of the basis by factors which are the eigen values. Only then does one ask what this means for the description of f by a matrix in terms of this basis. We have obviously the diagonal form.

I should of course soften my reproach since books have appeared lately which stress this point of view so that improvements are to be expected.
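That a basis of eigen vectors yields the diagonal form can be checked on a small example; the map below, given by an illustrative matrix A in the standard basis of Q², has eigen vectors (1, 1) and (1, -1) with eigen values 3 and 1:

```python
from fractions import Fraction as F

A = [[F(2), F(1)], [F(1), F(2)]]     # the matrix of f in the standard basis
P = [[F(1), F(1)], [F(1), F(-1)]]    # columns are the eigen vectors

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(M):
    d = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[M[1][1] / d, -M[0][1] / d], [-M[1][0] / d, M[0][0] / d]]

# The matrix of f in the eigen vector basis: P^-1 A P
# (with D = P^-1 this is the DAD^-1 of the text).
B = matmul(inv2(P), matmul(A, P))
assert B == [[F(3), F(0)], [F(0), F(1)]]   # diagonal, entries the eigen values
```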
It is my experience that proofs involving matrices can be shortened by 50% if one throws the matrices out. Sometimes it can not be done; a determinant may have to be computed.

Talking of determinants we assume that the reader is familiar with them. In Chapter IV we give a definition which works even in the non-commutative case; let us right now stick to the commutative case. If k is a commutative field and f ∈ Hom(V, V), then we define the determinant of f as follows: Let A be a matrix describing f and put det f = det A. If we use a new basis, A has to be replaced by DAD⁻¹; the determinant becomes det D · det A · (det D)⁻¹ by the multiplication theorem of determinants; since k is commutative det D cancels and we see that the map f → det f is well defined and canonical. If g corresponds to the matrix B, then fg corresponds to the matrix AB and the multiplication theorem shows det fg = det f · det g.

THEOREM 1.6. There exists a well defined map Hom(V, V) → k (if k is commutative) called the determinant of an endomorphism f. It satisfies det(fg) = det f · det g.
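Both facts, the invariance of the determinant under a change of basis and the multiplication theorem, can be verified with exact rational arithmetic; a sketch for 2 × 2 matrices (the particular matrices are arbitrary choices):

```python
from fractions import Fraction as F

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def inv2(M):
    d = det2(M)
    return [[M[1][1] / d, -M[0][1] / d], [-M[1][0] / d, M[0][0] / d]]

A = [[F(2), F(3)], [F(1), F(4)]]   # a matrix describing f in one basis
D = [[F(1), F(2)], [F(0), F(1)]]   # a change of basis

# det is unchanged under A -> D A D^-1, so det f is well defined ...
assert det2(matmul(D, matmul(A, inv2(D)))) == det2(A)

# ... and the multiplication theorem gives det(fg) = det f * det g
B = [[F(0), F(1)], [F(5), F(2)]]
assert det2(matmul(A, B)) == det2(A) * det2(B)
```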
In view of this fact it should be possible to describe det f in an intrinsic manner. The reader will find such a description in Bourbaki, Algèbre, Chapter III.

DEFINITION 1.4. Suppose that V is both a right and a left vector space over k and that we have the additional rule (aA)b = a(Ab) for all a, b ∈ k and A ∈ V. Then we call V a two-sided space over k.

THEOREM 1.7. If V is a right and V' a two-sided vector space over k, then Hom(V, V') can be made into a left vector space over k in a canonical way.
To this effect we have to define a product af for a ∈ k and f ∈ Hom(V, V'). We mean by af the function which sends the vector X ∈ V onto the vector a·f(X) in V'. Since

(af)(X + Y) = a·f(X + Y) = a(f(X) + f(Y)) = (af)(X) + (af)(Y)

and

(af)(Xb) = a·f(Xb) = a(f(X)·b) = (a·f(X))·b = ((af)(X))·b,

we see that af ∈ Hom(V, V'). The equations

((a + b)f)(X) = (a + b)·f(X) = a·f(X) + b·f(X) = (af)(X) + (bf)(X) = (af + bf)(X),
(a(f + g))(X) = a((f + g)(X)) = a(f(X) + g(X)) = (af)(X) + (ag)(X) = (af + ag)(X)

show that Hom(V, V') is a left space over k.

The question arises: Can one make any right space V in a natural way into a two-sided space by defining a product aX from the left? One thinks of course of the definition aX = Xa. But then a(bX) = (bX)a = (Xb)a = X(ba) = (ba)X, whereas we should have obtained (ab)X. This "natural" definition works only if the field k is commutative.
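The failure of the definition aX = Xa over a non-commutative field can be watched concretely in the quaternions, the standard example of such a field; a sketch taking V = k itself as a right space (the tuple encoding of 1, i, j, k is my own):

```python
# Quaternions encoded as 4-tuples (w, x, y, z) = w + xi + yj + zk.
def qmul(p, q):
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

i = (0, 1, 0, 0)
j = (0, 0, 1, 0)
assert qmul(i, j) == (0, 0, 0, 1)    # ij = k
assert qmul(j, i) == (0, 0, 0, -1)   # ji = -k, so k is not commutative

# Attempt the "natural" left composition aX := Xa.
left = lambda a, X: qmul(X, a)

X = (1, 0, 0, 0)
# a(bX) comes out as (ba)X instead of the required (ab)X:
assert left(i, left(j, X)) == qmul(X, qmul(j, i))       # equals (ji)X
assert left(i, left(j, X)) != qmul(qmul(i, j), X)       # differs from (ij)X
```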
THEOREM 1.7'. If V and V' are vector spaces over a commutative field k, then Hom(V, V') can be made in a natural way into a vector space over k.

If k is again any field, then the most important example of a two-sided vector space is the field k itself if one defines addition and multiplication as they are defined in the field. We shall investigate this case in the next paragraph.

4. Duality and pairings

DEFINITION 1.5. If V is a right vector space over k, then the set V̂ = Hom(V, k) is a left vector space over k called the dual of V. The elements of V̂ are called functionals of V and shall be denoted by φ, ψ, ⋯. By the definition of a functional we have φ(A + B) = φ(A) + φ(B)