Lecture Notes in Mathematics Editors: J.–M. Morel, Cachan F. Takens, Groningen B. Teissier, Paris
1787
Berlin Heidelberg New York Barcelona Hong Kong London Milan Paris Tokyo
Stefaan Caenepeel Gigel Militaru Shenglin Zhu
Frobenius and Separable Functors for Generalized Module Categories and Nonlinear Equations
Authors Stefaan Caenepeel Faculty of Applied Sciences Vrije Universiteit Brussel, VUB Pleinlaan 2 1050 Brussels Belgium e-mail:
[email protected] http://homepages.vub.ac.be/˜scaenepe/welcome.html Gigel Militaru Faculty of Mathematics University of Bucharest Strada Academiei 14 70109 Bucharest 1 Romania e-mail:
[email protected] Shenglin ZHU Institute of Mathematics Fudan University Shanghai 200433 China e-mail:
[email protected] Cataloging-in-Publication Data applied for Die Deutsche Bibliothek - CIP-Einheitsaufnahme Caenepeel, Stefaan: Frobenius and separable functors for generalized module categories and nonlinear equations / Stefaan Caenepeel ; Gigel Militaru ; Shenglin Zhu. Berlin ; Heidelberg ; New York ; Barcelona ; Hong Kong ; London ; Milan ; Paris ; Tokyo : Springer, 2002 (Lecture notes in mathematics ; Vol. 1787) ISBN 3-540-43782-7
Mathematics Subject Classification (2000): primary 16W30, secondary 16D90, 16W50, 16B50 ISSN 0075-8434 ISBN 3-540-43782-7 Springer-Verlag Berlin Heidelberg New York This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specif ically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microf ilm or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer-Verlag. Violations are liable for prosecution under the German Copyright Law. Springer-Verlag Berlin Heidelberg New York a member of BertelsmannSpringer Science + Business Media GmbH http://www.springer.de © Springer-Verlag Berlin Heidelberg 2002 Printed in Germany The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specif ic statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. Typesetting: Camera-ready TEX output by the author SPIN: 10878510
Printed on acid-free paper
Dedicated to Gilda, Lieve and Xiu
Preface
One of the key tools in classical representation theory is the fact that a representation of a group can also be viewed as an action of the group algebra on a vector space. This has been (one of) the motivations to introduce algebras, and modules over algebras. During the past century, it has become clear that several different notions of module can be introduced, with a variety of applications in different mathematical disciplines. For example, actions by group algebras can also be used to develop Galois descent theory, with its applications in number theory. Graded modules originated from projective algebraic geometry. In fact, a group grading can be considered as a coaction by the group algebra, i.e. the dual of an action. One may then consider various types of modules over bialgebras and Hopf algebras: Hopf modules (in integral theory), relative Hopf modules (in Hopf-Galois theory), dimodules (when studying the Brauer group). Perhaps the most important ones are the Yetter-Drinfeld modules, which have been studied in connection with the theory of quantum groups, the quantum Yang-Baxter equation, braided monoidal categories, and knot theory. Frobenius functors generalize the classical concept of a Frobenius algebra, which first appeared a century ago in the work of Frobenius on representation theory. The study of Frobenius algebras has seen a revival during the past five years, serving as an important tool in problems arising from different fields: the Jones theory of subfactors of von Neumann algebras ([98], [100]), topological quantum field theory ([3], [8]), the geometry of manifolds and quantum cohomology ([79], [129] and the references given there), the quantum Yang-Baxter equation ([15], [42]), and Yetter-Drinfeld modules ([49], [88]). Separable functors are a generalization of the theory of separable field extensions, and of separable algebras.
Separability plays a crucial role in several topics in algebra, number theory and algebraic geometry, for example in classical Galois theory, ramification theory, Azumaya algebras and the theory of the Brauer group, Hochschild cohomology and étale cohomology. A more recent application can be found in the Jones theory of subfactors of von Neumann algebras, already mentioned above in connection with Frobenius algebras. In this monograph, we present - from a purely algebraic point of view - a unification scheme for actions and coactions and their properties, where we are mainly interested in generalizations of Frobenius and separability properties. The unification theory takes place at four different levels. First, we have a unification at the level of categories of modules: Doi-Koppinen modules were introduced first, and all modules mentioned above can be viewed as special cases. Entwined modules arose from noncommutative geometry; they are at the same time more general and easier to deal with, and provide new fields of applications. Secondly, there is a unification at the level of functors between module categories: one can introduce morphisms of entwining structures, and then associate to such a morphism a pair of adjoint functors. Many "classical" pairs of adjoint functors (the induction functor, forgetful functors, restriction of (co)scalars, functors forgetting a grading, and their adjoints) are in fact special cases of this construction. A third unification takes place at the level of the properties of these pairs of adjoint functors. Here the inspiration comes from two at first sight completely different algebraic notions, having their roots in representation theory: separable algebras and Frobenius algebras. We give a categorical approach, leading to the introduction of separable functors and Frobenius functors. Not only does this explain the at first sight mysterious fact that both separable and Frobenius algebras can be characterized using Casimir elements, it also enables us to prove Frobenius and separability type properties in a unified framework, with several new versions of Maschke's Theorem as a consequence. The fourth unification is based on the theory of Yetter-Drinfeld modules, their relation with the quantum Yang-Baxter equation, and the FRT Theorem. The pentagon equation has appeared in the theory of duality for von Neumann algebras, in connection with C*-algebras. Here we explain how it is related to Hopf modules.
In a similar way, another nonlinear equation, which we call the Long equation, is related to the category of Long dimodules, which finds its origin in generalizations of the Brauer-Wall group. Finally, the FS equation can be used to characterize Frobenius algebras, as well as separable algebras, providing yet another explanation of the relationship between the two notions. For all these equations, we have a version of the FRT Theorem. In Chapter 1, some preliminary results are given. We have included a Section about coalgebras and bialgebras, and one about adjoint functors. Section 1.3 gives a classical treatment of Frobenius and separable algebras over fields, and we explain how they are connected to classical representation theory. Chapter 2 provides a discussion of entwining structures and their representations, the entwined modules; we discuss how they generalize other types of modules, and how they are related to the smash (co)product and the factorization problem of an algebra through two subalgebras. We also give the general pair of adjoint functors mentioned earlier. First properties of the category of entwined modules are discussed; for example, we determine when the category of entwined modules is a monoidal category. We use entwining structures mainly as a tool to unify all kinds of modules, but we want to point
out that they were originally introduced with a completely different motivation, coming from noncommutative geometry: one can generalize the notion of principal bundle to a very general setting, in which the role of the coordinate functions on the base is played by a general noncommutative algebra A, and the fibre of the principal bundle by a coalgebra C, where A and C are related by a map ψ : A ⊗ C → C ⊗ A, called the entwining map, that has to satisfy certain compatibility conditions (see [32] and [33]). Entwined modules, as representations of an entwining structure, were introduced by Brzeziński [23], who proved that Doi-Koppinen Hopf modules and, a fortiori, graded modules, Hopf modules and Yetter-Drinfeld modules are special cases. Entwined modules can also be applied to introduce coalgebra Galois theory; we come back to this in Section 4.8, where we also explain the link to descent theory. The starting points of Chapter 3 are Maschke's Theorem from representation theory (a group algebra is semisimple if and only if the characteristic of the base field does not divide the order of the group), and the classical result that the group algebra of a finite group is Frobenius. Larson and Sweedler have given Hopf algebraic generalizations of these two results, using integrals. Both the Maschke and the Frobenius Theorem can be restated in categorical terms. Let us first look at Maschke's Theorem. If we replace the base field k by a commutative ring, then we obtain the following result: if the order of the group G is invertible in k, then every exact sequence of kG-modules that splits as a sequence of k-modules is split as a sequence of kG-modules. If k is a field, this implies immediately that kG is semisimple; it turns out that all variations of Maschke's Theorem that exist in the literature admit such a formulation. In fact we have more: the kG-splitting maps are constructed by deforming the k-splitting maps in a functorial way.
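The deformation of splitting maps alluded to here can be made concrete in a small computational sketch (our illustration, not taken from the text; the names act_M, s, s_tilde and the choice G = C2 are ours): given a kG-linear surjection p with a merely k-linear section s, averaging over G produces a kG-linear section, provided |G| is invertible in k.

```python
from fractions import Fraction

# Toy setup: G = C2 = {e, g} acts on M = k^2 by swapping coordinates,
# and trivially on N = k; p : M -> N, p(m) = m0 + m1, is kG-linear.
def act_M(g, m):
    return m if g == 'e' else (m[1], m[0])

def act_N(g, n):
    return n  # trivial action on N

def p(m):
    return m[0] + m[1]

# A k-linear section of p (so p(s(n)) = n) that is NOT kG-linear.
def s(n):
    return (n, 0)

G = ['e', 'g']
inv = {'e': 'e', 'g': 'g'}  # every element of C2 is its own inverse

# Maschke averaging: s_tilde(n) = (1/|G|) * sum_g  g . s(g^{-1} . n)
def s_tilde(n):
    total = (Fraction(0), Fraction(0))
    for g in G:
        gm = act_M(g, s(act_N(inv[g], n)))
        total = (total[0] + gm[0], total[1] + gm[1])
    return (total[0] / len(G), total[1] / len(G))
```

One checks that s_tilde is still a section of p and is now kG-linear, while s itself is not; this is exactly the functorial deformation of splitting maps referred to above.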
A proper definition of functors having this functorial Maschke property was given by Năstăsescu, Van den Bergh, and Van Oystaeyen [145]. They called these functors separable functors, because for a given ring extension R → S, the restriction of scalars functor is separable if and only if S/R is separable in the classical sense. A Theorem of Rafael [158] gives necessary and sufficient conditions for a functor with an adjoint to be separable: the unit (or counit) of the adjunction has to be split (or cosplit). We will see that the separable functor philosophy can be applied successfully to any adjoint pair of functors between categories of entwined modules. We will focus mainly on the functors forgetting the action and the coaction, as this is more transparent and leads to several interesting results. A similar functorial approach can be used for the Frobenius property. It is well-known that a k-algebra S is Frobenius if and only if the restriction of scalars functor is at the same time a left and a right adjoint of the induction functor. This has led to the introduction of Frobenius functors: a Frobenius functor is a functor having a left adjoint that is at the same time a right adjoint. An adjoint pair of Frobenius functors is called a Frobenius pair.
Let η : 1 → GF be the unit of an adjunction; as we have seen, to conclude that F is separable, we need a natural transformation ν : GF → 1 splitting η. Our strategy will be to describe all natural transformations GF → 1; we will see that they form a k-algebra, and that the natural transformations that split the unit are idempotents (separability idempotents) in this algebra. A look at the definition of adjoint pairs of functors tells us that we have to investigate natural transformations GF → 1 and 1 → FG; the difference is that the normalizing properties for the separability property and the Frobenius property are not the same. Still, we can handle both problems in a unified framework, and this is what we will do in Chapter 3. In Chapter 4, we apply the results from Chapter 3 in some important special cases. We have devoted Sections to relative Hopf modules and Hopf-Galois theory, graded modules, Yetter-Drinfeld modules and the Drinfeld double, and Long dimodules. For example, we prove that, for a finitely generated projective Hopf algebra H, the Drinfeld double D(H) is a Frobenius extension of H if and only if H is unimodular. Part I tells us that Hopf modules, Yetter-Drinfeld modules and Long dimodules over a Hopf algebra H can be regarded as special cases of Doi-Koppinen Hopf modules and entwined modules, and that a unified theory can be developed. In Part II, we look at these three types of modules from a different point of view: we will see how they are connected to three different nonlinear equations. The celebrated FRT Theorem shows the close relationship between Yetter-Drinfeld modules and the quantum Yang-Baxter equation (QYBE) (see e.g. [115], [108], [128]). We will discuss how the two other types of modules, Hopf modules and Long dimodules, are related to other nonlinear equations.
It comes as a surprise that the nonlinear equation related to the category of Hopf modules is the pentagon (or fusion) equation, which is even older, and somehow more basic than the quantum Yang-Baxter equation. Using Hopf modules, we will present two different approaches to solving this equation: a first approach is to prove an FRT type Theorem for the pentagon equation; a second, completely different, approach was developed by Baaj and Skandalis for unitary operators on Hilbert spaces ([10]) and, more recently, by Davydov ([65]) for arbitrary vector spaces. We will conclude Chapter 6 with a few open problems that may have important consequences: from a philosophical point of view, the theory presented here views a finite dimensional Hopf algebra H simply as an invertible matrix R ∈ M_{n^2}(k) ≅ M_n(k) ⊗ M_n(k) that is a solution of the pentagon equation R^{12}R^{13}R^{23} = R^{23}R^{12}. Furthermore, in this case dim(H) | n. This point of view could be crucial in reducing the problem of classifying finite dimensional Hopf algebras (currently in full development, and using different and complex techniques) to the elementary theory of matrices from linear algebra. At this point, a new Jordan theory (we call it restricted Jordan theory) has to be developed. In Chapter 8, we will focus on the Frobenius-separability equation, all solutions of which are also solutions of the braid equation. An FRT type theorem will enable us to clarify the structure of two fundamental classes of algebras, namely separable algebras and Frobenius algebras. The fact that separable algebras and Frobenius algebras are related to the same nonlinear equation reflects the fact that the separability and Frobenius properties studied in Chapters 3 and 4 are based on the same techniques. As we already indicated, the quantum Yang-Baxter equation has been studied intensively in the literature. For completeness' sake, and to illustrate the similarity with our other nonlinear equations, we decided to devote a separate Chapter to it. This will also allow us to present some recent results, see Section 5.5. The three authors started their joint research on Doi-Koppinen Hopf modules in 1995, with a three month visit of the second and third author to Brussels. The research was continued afterwards within the framework of the bilateral projects "Hopf algebras and (co)Galois theory" and "Hopf algebras in Algebra, Topology, Geometry and Physics" of the Flemish and Romanian governments, and "New computational, geometric and algebraic methods applied to quantum groups and differential operators" of the Flemish and Chinese governments. We benefitted greatly from direct and indirect contributions from - in alphabetical order - Margaret Beattie, Tomasz Brzeziński, Sorin Dăscălescu, Jose (Pepe) Gómez Torrecillas, Bogdan Ichim, Bogdan Ion, Lars Kadison, Claudia Menini, Constantin Năstăsescu, Şerban Raianu, Peter Schauenburg, Mona Stanciulescu, Dragoş Ştefan, Lucien Van hamme, Fred Van Oystaeyen, Yinhuo Zhang, and Yonghua Xu. Chapters 2 and 3 are based on a seminar given by the first author in Brussels during the spring of 1999. The first author wishes to thank Sebastian Burciu, Corina Calinescu and Erwin De Groot for their useful comments.
Finally, we wish to thank Paul Taylor for his kind permission to use his "diagrams" software. A few words about notation: in Part I, we work over a commutative ring k; unadorned Hom, ⊗, M etc. are assumed to be taken over k. In Part II, we always work over a field k. For k-modules M and N, IM will be the identity map on M, and τ = τM,N : M ⊗ N → N ⊗ M will be the switch map, mapping m ⊗ n to n ⊗ m. Part II can be read independently of Part I: one only needs the generalities of Chapter 1, and the definitions in the first Sections of Chapter 2.
Brussels, Bucharest, Shanghai, February 2002
Stefaan Caenepeel Gigel Militaru Shenglin Zhu
Table of Contents
Part I Entwined modules and Doi-Koppinen Hopf modules

1 Generalities ............................................... 3
  1.1 Coalgebras, bialgebras, and Hopf algebras .............. 3
  1.2 Adjoint functors ....................................... 22
  1.3 Separable algebras and Frobenius algebras .............. 28

2 Doi-Koppinen Hopf modules and entwined modules ............. 39
  2.1 Doi-Koppinen structures and entwining structures ....... 39
  2.2 Doi-Koppinen modules and entwined modules .............. 48
  2.3 Entwined modules and the smash product ................. 50
  2.4 Entwined modules and the smash coproduct ............... 59
  2.5 Adjoint functors for entwined modules .................. 64
  2.6 Two-sided entwined modules ............................. 68
  2.7 Entwined modules and comodules over a coring ........... 71
  2.8 Monoidal categories .................................... 78

3 Frobenius and separable functors for entwined modules ...... 89
  3.1 Separable functors and Frobenius functors .............. 89
  3.2 Restriction of scalars and the smash product ........... 99
  3.3 The functor forgetting the C-coaction .................. 124
  3.4 The functor forgetting the A-action .................... 137
  3.5 The general induction functor .......................... 146

4 Applications ............................................... 159
  4.1 Relative Hopf modules .................................. 159
  4.2 Hopf-Galois extensions ................................. 168
  4.3 Doi's [H, C]-modules ................................... 179
  4.4 Yetter-Drinfeld modules ................................ 181
  4.5 Long dimodules ......................................... 193
  4.6 Modules graded by G-sets ............................... 195
  4.7 Two-sided entwined modules revisited ................... 198
  4.8 Corings and descent theory ............................. 204

Part II Nonlinear equations

5 Yetter-Drinfeld modules and the quantum Yang-Baxter equation 217
  5.1 Notation ............................................... 217
  5.2 The quantum Yang-Baxter equation and the braid equation  218
  5.3 Hopf algebras versus the QYBE .......................... 225
  5.4 The FRT Theorem ........................................ 235
  5.5 The set-theoretic braid equation ....................... 238

6 Hopf modules and the pentagon equation ..................... 245
  6.1 The Hopf equation and the pentagon equation ............ 245
  6.2 The FRT Theorem for the Hopf equation .................. 253
  6.3 New examples of noncommutative noncocommutative bialgebras 267
  6.4 The pentagon equation versus the structure and the classification of finite dimensional Hopf algebras ............. 277

7 Long dimodules and the Long equation ....................... 301
  7.1 The Long equation ...................................... 301
  7.2 The FRT Theorem for the Long equation .................. 304
  7.3 Long coalgebras ........................................ 311

8 The Frobenius-Separability equation ........................ 317
  8.1 Frobenius algebras and separable algebras .............. 317
  8.2 The Frobenius-separability equation .................... 320
  8.3 The structure of Frobenius algebras and separable algebras 332
  8.4 The category of FS-objects ............................. 339

References ................................................... 345
Index ........................................................ 353
1 Generalities
1.1 Coalgebras, bialgebras, and Hopf algebras

In this Section, we give a brief introduction to Hopf algebras. A more detailed discussion can be found in the literature, see for example [1], [63], [140] or [172]. Throughout, k will be a commutative ring. In some specific cases, we will assume that k is a field. kM = M will denote the category of (left) k-modules (we omit the index k if no confusion is possible). ⊗ and Hom will be shorter notation for ⊗k and Homk. Let M and N be k-modules. IM : M → M will be the identity map, and τM,N : M ⊗ N → N ⊗ M the switch map. Indices will be omitted if no confusion is possible. M* = Hom(M, k) is the dual of the k-module M. For m ∈ M and m* ∈ M*, we will often use the duality notation ⟨m*, m⟩ = m*(m). Let M be a finitely generated and projective k-module. Then there exists a (finite) dual basis {mi, m*i | i = 1, ..., n} for M. This means that
\[ m = \sum_{i=1}^n \langle m_i^*, m \rangle m_i \quad\text{and}\quad m^* = \sum_{i=1}^n \langle m^*, m_i \rangle m_i^* \]
for all m ∈ M and m* ∈ M*.

Algebras and coalgebras

Recall that a k-algebra (with unit) is a k-module A together with a multiplication map m = mA : A ⊗ A → A and a unit element 1A ∈ A satisfying the conditions
\[ m \circ (m \otimes I) = m \circ (I \otimes m), \qquad m(a \otimes 1_A) = m(1_A \otimes a) = a \]
for all a ∈ A. The map η = ηA : k → A mapping x ∈ k to x1A is called the unit map of A and satisfies the condition
S. Caenepeel, G. Militaru, and S. Zhu: LNM 1787, pp. 3–37, 2002. © Springer-Verlag Berlin Heidelberg 2002
\[ m \circ (\eta \otimes I) = m \circ (I \otimes \eta) = I \]
The opposite A^op of an algebra A is equal to A as a k-module, with multiplication mAop = mA ◦ τ. A is commutative if A = A^op, or m ◦ τ = m. k-alg will be the category of k-algebras and multiplicative maps. Coalgebras are defined in a similar way: a k-coalgebra C is a k-module together with k-linear maps ∆ = ∆C : C → C ⊗ C and ε = εC : C → k satisfying
\[ (\Delta \otimes I) \circ \Delta = (I \otimes \Delta) \circ \Delta \tag{1.1} \]
\[ (\varepsilon \otimes I) \circ \Delta = (I \otimes \varepsilon) \circ \Delta = I \tag{1.2} \]
∆ is called the comultiplication or the diagonal map, and ε is called the counit or augmentation map. (1.1) tells us that the comultiplication is coassociative. We will use the Sweedler-Heyneman notation for the comultiplication: for c ∈ C, we write
\[ \Delta(c) = \sum_{(c)} c_{(1)} \otimes c_{(2)} = c_{(1)} \otimes c_{(2)} \]
The summation symbol will usually be omitted. The coassociativity can then be reformulated as follows:
\[ c_{(1)(1)} \otimes c_{(1)(2)} \otimes c_{(2)} = c_{(1)} \otimes c_{(2)(1)} \otimes c_{(2)(2)} \]
and therefore we write
\[ \Delta_2(c) = (\Delta \otimes I)(\Delta(c)) = (I \otimes \Delta)(\Delta(c)) = c_{(1)} \otimes c_{(2)} \otimes c_{(3)} \]
and, in a similar way,
\[ \Delta_3(c) = c_{(1)} \otimes c_{(2)} \otimes c_{(3)} \otimes c_{(4)} \]
The counit property (1.2) can be restated as
\[ \varepsilon(c_{(1)}) c_{(2)} = \varepsilon(c_{(2)}) c_{(1)} = c \]
The co-opposite C^cop of a coalgebra C is equal to C as a k-module, with comultiplication ∆Ccop = τ ◦ ∆C. C is called cocommutative if C = C^cop, or τ ◦ ∆ = ∆, or c(1) ⊗ c(2) = c(2) ⊗ c(1) for all c ∈ C. A k-linear map f : C → D between two coalgebras C and D is called a morphism of k-coalgebras if
\[ \Delta_D \circ f = (f \otimes f) \circ \Delta_C \quad\text{and}\quad \varepsilon_D \circ f = \varepsilon_C \]
or f(c)(1) ⊗ f(c)(2) = f(c(1)) ⊗ f(c(2)) and εD(f(c)) = εC(c), for all c ∈ C. We also say that f is comultiplicative. The category of k-coalgebras and comultiplicative maps is denoted by k-coalg. The tensor product of two coalgebras C and D is again a coalgebra. The comultiplication and counit are given by the formulas
\[ \Delta_{C \otimes D} = (I_C \otimes \tau_{C,D} \otimes I_D) \circ (\Delta_C \otimes \Delta_D) \quad\text{and}\quad \varepsilon_{C \otimes D} = \varepsilon_C \otimes \varepsilon_D \]

Example 1. Let X be an arbitrary set, and C = kX the free k-module with basis X. On C we define a comultiplication and counit as follows: ∆C(x) = x ⊗ x and εC(x) = 1, for all x ∈ X. kX is called the grouplike coalgebra.

The convolution product

Let C be a coalgebra, and A an algebra. Then we can define a multiplication on Hom(C, A) in the following way: for f, g : C → A, we let f ∗ g = mA ◦ (f ⊗ g) ◦ ∆C, that is,
\[ (f * g)(c) = f(c_{(1)}) g(c_{(2)}) \]
This multiplication is called the convolution. ηA ◦ εC is a unit for the convolution. In particular, if A = k, we find that C* is a k-algebra, with unit ε, and multiplication given by
\[ \langle c^* * d^*, c \rangle = \langle c^*, c_{(1)} \rangle \langle d^*, c_{(2)} \rangle \]
In fact, the multiplication on C* is the dual of the comultiplication on C. If A is an algebra which is finitely generated and projective as a k-module, then A* is a coalgebra. The comultiplication is given by
\[ A^* \xrightarrow{\ m_A^*\ } (A \otimes A)^* \cong A^* \otimes A^* \]
This means that ∆(a*) = a*(1) ⊗ a*(2) if and only if ⟨a*, ab⟩ = ⟨a*(1), a⟩⟨a*(2), b⟩ for all a, b ∈ A. The comultiplication can be described in terms of a dual basis {ai, a*i | i = 1, ..., n} of A:
\[ \Delta(a^*) = \sum_{i,j=1}^n \langle a^*, a_i a_j \rangle\, a_i^* \otimes a_j^* \tag{1.3} \]
for all a* ∈ A*. From (1.3), it also follows that
\[ \sum_{i,j=1}^n a_i a_j \otimes a_i^* \otimes a_j^* = \sum_{i=1}^n a_i \otimes \Delta(a_i^*) \tag{1.4} \]
For later use, we rewrite this formula in terms of coalgebras: put C = A*, and let {ci, c*i | i = 1, ..., n} be a finite dual basis for C. Then
\[ \sum_i \Delta(c_i) \otimes c_i^* = \sum_{i,j} c_i \otimes c_j \otimes c_i^* * c_j^* \tag{1.5} \]
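The duality between the multiplication of A and the comultiplication of A* can be checked by direct computation in a small example. The following sketch (our illustration, not from the text; the names prod and delta_dual are ours) verifies (1.3) and (1.4) for the group algebra A = kC2 with basis a0 = e, a1 = g, where the product of two basis elements is again a basis element:

```python
# prod[i][j] is the index of the basis product a_i a_j in A = kC2:
# e*e = e, e*g = g*e = g, g*g = e.
prod = [[0, 1], [1, 0]]
n = 2

# (1.3): Delta(a*) = sum_{i,j} <a*, a_i a_j> a*_i (x) a*_j.
# A functional a* is stored by its coordinates <a*, a_0>, <a*, a_1>;
# the result is the n x n coefficient matrix of Delta(a*) in A* (x) A*.
def delta_dual(astar):
    return [[astar[prod[i][j]] for j in range(n)] for i in range(n)]

# Defining property of the dual coalgebra: <Delta(a*), a_k (x) a_l> = <a*, a_k a_l>.
for k in range(n):
    for l in range(n):
        for t in range(n):
            astar = [1 if u == t else 0 for u in range(n)]
            assert delta_dual(astar)[k][l] == astar[prod[k][l]]

# (1.4): sum_{i,j} a_i a_j (x) a*_i (x) a*_j = sum_i a_i (x) Delta(a*_i),
# compared as coefficient tensors T[k][i][j] in A (x) A* (x) A*.
lhs = [[[1 if prod[i][j] == k else 0 for j in range(n)]
        for i in range(n)] for k in range(n)]
rhs = [delta_dual([1 if u == k else 0 for u in range(n)]) for k in range(n)]
assert lhs == rhs
```

The same computation goes through verbatim for any algebra whose basis is multiplicatively closed, for instance kG for any finite group G.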
Bialgebras and Hopf algebras

Proposition 1. For a k-module H that is at once a k-algebra and a k-coalgebra, the following assertions are equivalent:
1. mH and ηH are comultiplicative;
2. ∆H and εH are multiplicative;
3. for all h, g ∈ H, we have
\[ \Delta(gh) = g_{(1)} h_{(1)} \otimes g_{(2)} h_{(2)} \tag{1.6} \]
\[ \varepsilon(gh) = \varepsilon(g)\varepsilon(h) \tag{1.7} \]
\[ \Delta(1) = 1 \otimes 1 \tag{1.8} \]
\[ \varepsilon(1) = 1 \tag{1.9} \]
In this situation, we call H a bialgebra. A map between bialgebras that is multiplicative and comultiplicative is called a morphism of bialgebras.

Proof. This follows from the following observations:
mH is comultiplicative ⟺ (1.6) and (1.7) hold;
ηH is comultiplicative ⟺ (1.8) and (1.9) hold;
∆H is multiplicative ⟺ (1.6) and (1.8) hold;
εH is multiplicative ⟺ (1.7) and (1.9) hold.

Definition 1. A bialgebra H is called a Hopf algebra if the identity IH has an inverse S in the convolution algebra Hom(H, H). Thus we need a map S : H → H satisfying
\[ S(h_{(1)}) h_{(2)} = h_{(1)} S(h_{(2)}) = \eta(\varepsilon(h)) \tag{1.10} \]
The map S is called the antipode of H. Let f : H → K be a morphism of bialgebras between two Hopf algebras H and K. It is well-known that f also preserves the antipode, that is, SK ◦ f = f ◦ SH, and f is called a morphism of Hopf algebras.
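The equivalent conditions of Proposition 1 can be illustrated computationally (our example, not from the text): take H = k[x] with x primitive, so Δ(x) = x ⊗ 1 + 1 ⊗ x and ε(x^n) = 0 for n > 0, ε(1) = 1. Then Δ(x^n) is given by the binomial formula, and the multiplicativity of Δ and ε, conditions (1.6)-(1.9), reduces to the Vandermonde identity:

```python
from math import comb

# Delta(x^n) = sum_k C(n,k) x^k (x) x^{n-k}, stored as {(k, n-k): C(n,k)}.
def Delta(n):
    return {(k, n - k): comb(n, k) for k in range(n + 1)}

def tensor_mult(u, v):
    """Multiply two elements of H (x) H written as {(i, j): coeff}."""
    out = {}
    for (i1, j1), c1 in u.items():
        for (i2, j2), c2 in v.items():
            key = (i1 + i2, j1 + j2)
            out[key] = out.get(key, 0) + c1 * c2
    return out

def eps(n):
    return 1 if n == 0 else 0

# (1.6): Delta is multiplicative on basis elements x^a * x^b = x^{a+b}.
for a in range(5):
    for b in range(5):
        assert tensor_mult(Delta(a), Delta(b)) == Delta(a + b)

# (1.7)-(1.9): eps is multiplicative, Delta(1) = 1 (x) 1, eps(1) = 1.
assert all(eps(a + b) == eps(a) * eps(b) for a in range(5) for b in range(5))
assert Delta(0) == {(0, 0): 1}
assert eps(0) == 1
```

This H = k[x] is in fact a Hopf algebra, with antipode determined by S(x) = -x, although the sketch above only checks the bialgebra conditions.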
Example 2. Let G be a monoid. Then kG is a coalgebra (see Example 1), and a k-algebra. It is easy to see that kG is a bialgebra. If G is a group, then kG is a Hopf algebra. The antipode is given by S(g) = g⁻¹, for all g ∈ G.

If H is a bialgebra, then H^op, H^cop and H^opcop are also bialgebras. If H has an antipode S, then S is also an antipode for H^opcop. An antipode S̄ for H^op is also an antipode for H^cop, and is called a twisted antipode. S̄ has to satisfy the property
\[ \bar{S}(h_{(2)}) h_{(1)} = h_{(2)} \bar{S}(h_{(1)}) = \eta(\varepsilon(h)) \tag{1.11} \]
for all h ∈ H.

Proposition 2. Let H be a Hopf algebra. Then S is a bialgebra morphism from H to H^opcop. If S is bijective, then S⁻¹ is a twisted antipode. If H is commutative or cocommutative, then S ◦ S = IH, and consequently S̄ = S.

Proof. Consider the maps ν, ρ : H ⊗ H → H given by
\[ \nu(h \otimes k) = S(k)S(h) \quad\text{and}\quad \rho(h \otimes k) = S(hk) \]
It is easy to prove that both ν and ρ are convolution inverses of the multiplication map m, so that ν = ρ, and S(hk) = S(k)S(h) for all h, k ∈ H. Furthermore
\[ 1 = \eta(\varepsilon(1)) = (I * S)(1) = I(1)S(1) = S(1) \]
and we find that S : H → H^op is multiplicative. In a similar way, we prove that S : H → H^cop is comultiplicative: the maps ψ, ϕ : H → H ⊗ H given by ψ(h) = ∆(S(h)) and ϕ(h) = S(h(2)) ⊗ S(h(1)) are both convolution inverses of ∆H, and therefore ψ = ϕ and
\[ \Delta(S(h)) = S(h_{(2)}) \otimes S(h_{(1)}) \]
for all h ∈ H. Finally
\[ \varepsilon(h) = \varepsilon((\eta \circ \varepsilon)(h)) = \varepsilon(S(h_{(1)}) h_{(2)}) = \varepsilon(S(h_{(1)}))\varepsilon(h_{(2)}) = \varepsilon(S(h)) \]
Assume that S is bijective. Then S⁻¹(hk) = S⁻¹(k)S⁻¹(h), and S⁻¹(1) = 1. Applying S⁻¹ to (1.10), we find (1.11), and S⁻¹ is a twisted antipode. Finally, if H is commutative or cocommutative, then S is also a twisted antipode, and we have for all h ∈ H that
\[ (S * (S \circ S))(h) = S(h_{(1)}) S(S(h_{(2)})) = S\big(S(h_{(2)}) h_{(1)}\big) = S((\eta \circ \varepsilon)(h)) = (\eta \circ \varepsilon)(h) \]
proving that S ◦ S is a convolution inverse for S, and S ◦ S = I.
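The anti-multiplicativity S(hk) = S(k)S(h) can be seen concretely in the group algebra of a nonabelian group (our illustration, in the setting of Example 2): for kS3, the antipode sends each permutation to its inverse, so the identity S(hk) = S(k)S(h) holds for all pairs, while S(h)S(k) differs in general.

```python
from itertools import permutations

# Permutations of {0, 1, 2} as tuples p, with p[i] the image of i.
def compose(p, q):                      # (p o q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):                         # S(g) = g^{-1} on grouplikes
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

S3 = list(permutations(range(3)))
e = (0, 1, 2)

# Antipode axiom (1.10) on grouplike elements: S(g)g = gS(g) = eps(g)1 = e.
for g in S3:
    assert compose(inverse(g), g) == e and compose(g, inverse(g)) == e

# Proposition 2: S is anti-multiplicative, S(hk) = S(k)S(h).
for h in S3:
    for k in S3:
        assert inverse(compose(h, k)) == compose(inverse(k), inverse(h))
```

Since S3 is nonabelian, there are pairs h, k with S(hk) ≠ S(h)S(k); anti-multiplicativity is really needed.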
Modules

Let A be a k-algebra. A left A-module M is a k-module, together with a map
\[ \psi = \psi_M^l : A \otimes M \to M, \qquad \psi(a \otimes m) = am \]
such that a(bm) = (ab)m and 1m = m, for all a, b ∈ A and m ∈ M. We say that ψ is a left A-action on M, or that A acts on M from the left. Let M and N be two left A-modules. A k-linear map f : M → N is called left A-linear if f(am) = af(m), for all a ∈ A and m ∈ M. The category of left A-modules and A-linear maps is denoted by AM. In a similar way, we can introduce right A-modules, and the category of right A-modules MA. Let B be another k-algebra. A k-module M that is at once a left A-module and a right B-module such that a(mb) = (am)b for all a ∈ A, b ∈ B and m ∈ M is called an (A, B)-bimodule. AMB will be the category of (A, B)-bimodules. Observe that we have isomorphisms of categories
\[ {}_A\mathcal{M}_B \cong {}_{A \otimes B^{op}}\mathcal{M} \cong \mathcal{M}_{A^{op} \otimes B} \]
Take M ∈ MA and N ∈ AM. The tensor product M ⊗A N is by definition the coequalizer of the maps IM ⊗ ψ^l_N and ψ^r_M ⊗ IN, that is, we have an exact sequence
\[ M \otimes A \otimes N \rightrightarrows M \otimes N \longrightarrow M \otimes_A N \longrightarrow 0 \]
If H is a bialgebra, then the tensor product of two (left) H-modules M and N is again an H-module. The action on M ⊗ N is given by
\[ h(m \otimes n) = h_{(1)} m \otimes h_{(2)} n \]
We also write
\[ M^H = \{ m \in M \mid hm = \varepsilon(h)m, \text{ for all } h \in H \} \]

Module algebras and module coalgebras

Assume that H is a bialgebra. Let A be a left H-module, and a k-algebra. We call A a left H-module algebra if the unit and multiplication are left H-linear, or
\[ h(ab) = (h_{(1)} a)(h_{(2)} b) \quad\text{and}\quad h1_A = \varepsilon(h)1_A \tag{1.12} \]
for all h ∈ H, and a, b ∈ A. In a similar way, we introduce right H-module algebras. If A is a left H-module algebra, then A^op is a right H^opcop-module algebra. A k-coalgebra that is also a left H-module is called a left H-module coalgebra if the counit and the comultiplication are left H-linear. This is equivalent to
\[ \Delta_C(hc) = h_{(1)} c_{(1)} \otimes h_{(2)} c_{(2)} \quad\text{and}\quad \varepsilon_C(hc) = \varepsilon_H(h)\varepsilon_C(c) \tag{1.13} \]
for all h ∈ H and c ∈ C. We can also introduce right module coalgebras, and if C is a left H-module coalgebra, then C^cop is a right H^opcop-module coalgebra. If C is a right H-module coalgebra, then C* is a left H-module algebra. The left H-action on C* is given by the formula

⟨h · c*, c⟩ = ⟨c*, ch⟩   (1.14)
In a similar way, if C is a left H-module coalgebra, then C* is a right H-module algebra, with

⟨c* · h, c⟩ = ⟨c*, hc⟩   (1.15)

Example 3. Let G be a group, and X a right G-set. This means that we have a map X × G → X : (x, g) → xg such that (xg)h = x(gh), for all g, h ∈ G. Then the coalgebra kX is a right kG-module coalgebra.

Comodules

Let C be a coalgebra. A right C-comodule M is a k-module together with a map ρ = ρ_M^r : M → M ⊗ C such that

(ρ ⊗ I_C) ∘ ρ = (I_M ⊗ Δ_C) ∘ ρ and (I_M ⊗ ε_C) ∘ ρ = I_M   (1.16)

We will say that C coacts from the right on M. We will use the Sweedler-Heyneman notation

ρ(m) = m_[0] ⊗ m_[1] and (ρ ⊗ I_C)(ρ(m)) = (I_M ⊗ Δ_C)(ρ(m)) = m_[0] ⊗ m_[1] ⊗ m_[2]

The second identity in (1.16) can be rewritten as ε(m_[1])m_[0] = m for all m ∈ M. A map f : M → N between two right comodules is called a morphism of C-comodules, or a right C-colinear map, if

ρ_N^r ∘ f = (f ⊗ I_C) ∘ ρ_M^r or f(m)_[0] ⊗ f(m)_[1] = f(m_[0]) ⊗ m_[1]

for all m ∈ M. M^C will be the category of right C-comodules and right C-colinear maps.
1 Generalities
Example 4. Let C = kX, with X an arbitrary set. Let M be a k-module graded by X, that is,

M = ⊕_{x∈X} M_x

where every M_x is a k-module. Then M is a kX-comodule; the coaction is given by ρ^r(m) = Σ_x m_x ⊗ x if m = Σ_x m_x with m_x ∈ M_x. Conversely, every kX-comodule M is graded by X; one defines the grading by

M_x = {m ∈ M | ρ(m) = m ⊗ x}

Thus we have an equivalence between M^{kX} and the category of X-graded modules.

We have a functor F : M^C → _{C*}M defined as follows: for a right C-comodule M, we let F(M) = M, with left C*-action given by c* · m = ⟨c*, m_[1]⟩ m_[0] for all c* ∈ C* and m ∈ M; if f : M → N is right C-colinear, then it is easy to prove that f is also left C*-linear, and we let F(f) = f.

Proposition 3. The functor F : M^C → _{C*}M is faithful. If C is projective as a k-module, then F is fully faithful. If C is finitely generated and projective, then F is an isomorphism of categories.

Proof. Take two right C-comodules M and N. Obviously Hom^C(M, N) → Hom_{C*}(F(M), F(N)) is injective, so F is faithful.
Assume that C is k-projective, and let {c_i, c*_i | i ∈ I} be a dual basis. Let M and N be C-comodules, and assume that f : M → N is left C*-linear. We claim that f is also right C-colinear. Indeed, for all m ∈ M, we have

f(m_[0]) ⊗ m_[1] = Σ_{i∈I} f(m_[0]) ⊗ ⟨c*_i, m_[1]⟩ c_i
= Σ_{i∈I} f(c*_i · m) ⊗ c_i
= Σ_{i∈I} c*_i · f(m) ⊗ c_i
= Σ_{i∈I} ⟨c*_i, f(m)_[1]⟩ f(m)_[0] ⊗ c_i
= f(m)_[0] ⊗ f(m)_[1]
Assume moreover that C is finitely generated, and let {c_i, c*_i | i = 1, ..., n} be a dual basis for C. We define a functor G : _{C*}M → M^C as follows: G(M) = M as a k-module, with right C-coaction

ρ(m) = Σ_{i=1}^n c*_i · m ⊗ c_i

We will show that ρ defines a coaction, and leave all other verifications to the reader. We obviously have

(I_M ⊗ ε)(ρ(m)) = Σ_{i=1}^n ε(c_i) c*_i · m = ε · m = m
Next we want to prove that (ρ ⊗ IC ) ◦ ρ = (IM ⊗ ∆C ) ◦ ρ
(1.17)
For all c*, d* ∈ C*, we have

(I_M ⊗ c* ⊗ d*)(((ρ ⊗ I_C) ∘ ρ)(m))
= (I_M ⊗ c* ⊗ d*)(Σ_{i,j} (c*_j ∗ c*_i) · m ⊗ c_j ⊗ c_i)
= Σ_{i,j} ⟨c*, c_j⟩⟨d*, c_i⟩ (c*_j ∗ c*_i) · m
= (c* ∗ d*) · m = Σ_i ⟨c* ∗ d*, c_i⟩ c*_i · m
= (I_M ⊗ c* ⊗ d*)(Σ_i c*_i · m ⊗ Δ_C(c_i))
= (I_M ⊗ c* ⊗ d*)(((I_M ⊗ Δ_C) ∘ ρ^r)(m))

and (1.17) follows after we apply Lemma 1.

Lemma 1. Let M, N be k-modules, and assume that N is finitely generated and projective. Take Σ_j m_j ⊗ p_j and Σ_k m'_k ⊗ p'_k in M ⊗ N. If

Σ_j ⟨n*, p_j⟩ m_j = Σ_k ⟨n*, p'_k⟩ m'_k

for all n* ∈ N*, then

Σ_j m_j ⊗ p_j = Σ_k m'_k ⊗ p'_k

Proof. Let {n_i, n*_i | i = 1, ..., n} be a dual basis for N. Then

Σ_j m_j ⊗ p_j = Σ_{i,j} m_j ⊗ ⟨n*_i, p_j⟩ n_i = Σ_{i,k} m'_k ⊗ ⟨n*_i, p'_k⟩ n_i = Σ_k m'_k ⊗ p'_k
Let H be a bialgebra. If M and N are right H-comodules, then M ⊗ N is again a right H-comodule. The H-coaction is given by

ρ^r_{M⊗N}(m ⊗ n) = m_[0] ⊗ n_[0] ⊗ m_[1]n_[1]

We call M^{coH} = {m ∈ M | ρ(m) = m ⊗ 1} the submodule of coinvariants of M.
We can also introduce left C-comodules. For a left C-comodule M, the Sweedler-Heyneman notation takes the following form:

ρ^l_M(m) = m_[−1] ⊗ m_[0] ∈ C ⊗ M

The category of left C-comodules and left C-colinear maps is denoted by ^C M. We have an isomorphism of categories

^C M ≅ M^{C^cop}

If M is at once a left C-comodule and a right D-comodule in such a way that

(ρ^l ⊗ I_D) ∘ ρ^r = (I_C ⊗ ρ^r) ∘ ρ^l

then we say that M is a (C, D)-bicomodule. We then write, following the Sweedler-Heyneman philosophy:

(m_[0])_[−1] ⊗ (m_[0])_[0] ⊗ m_[1] = m_[−1] ⊗ (m_[0])_[0] ⊗ (m_[0])_[1] = m_[−1] ⊗ m_[0] ⊗ m_[1] = ρ^{lr}(m)

Observe that C itself is a (C, C)-bicomodule. ^C M^D is the category of (C, D)-bicomodules and left C-colinear, right D-colinear maps. We have isomorphisms

^C M^D ≅ ^{C⊗D^cop}M ≅ M^{C^cop⊗D}

Proposition 4. Let C be a coalgebra, and M a finitely generated projective k-module. Right C-coactions on M are in bijective correspondence with left C-coactions on M*.

Proof. Let {m_i, m*_i | i = 1, ..., n} be a dual basis for M, and let ρ^r : M → M ⊗ C be a right C-coaction. We define ρ^l = α(ρ^r) : M* → C ⊗ M* by

ρ^l(m*) = Σ_{i=1}^n m_{i[1]} ⊗ ⟨m*, m_{i[0]}⟩ m*_i   (1.18)

This is a coaction on M* since

(I_C ⊗ ρ^l)(ρ^l(m*)) = Σ_{i,j=1}^n m_{i[1]} ⊗ m_{j[1]} ⊗ ⟨m*, m_{i[0]}⟩⟨m*_i, m_{j[0]}⟩ m*_j
= Σ_{j=1}^n m_{j[1]} ⊗ m_{j[2]} ⊗ ⟨m*, m_{j[0]}⟩ m*_j
= (Δ_C ⊗ I_{M*})(ρ^l(m*))

and

ε(m*_[−1]) m*_[0] = Σ_{i=1}^n ⟨ε, m_{i[1]}⟩⟨m*, m_{i[0]}⟩ m*_i = Σ_{i=1}^n ⟨m*, m_i⟩ m*_i = m*

Conversely, given ρ^l : M* → C ⊗ M*, we define ρ^r = α'(ρ^l) : M → M ⊗ C by

ρ^r(m) = Σ_{i=1}^n ⟨m*_{i[0]}, m⟩ m_i ⊗ m*_{i[−1]}

An easy computation shows that α and α' are each other's inverses.

The category of comodules over a coalgebra over a field k is a Grothendieck category. Over a commutative ring, we have the following generalization of this result, due to Wisbauer [187].

Proposition 5. Let C be a coalgebra over a commutative ring k. The following assertions are equivalent:
1. C is flat as a k-module;
2. M^C is a Grothendieck category and the forgetful functor M^C → M is exact;
3. M^C is an abelian category and the forgetful functor M^C → M is exact.

Proof. 1. ⇒ 2. It is clear that M^C is additive. Let f : M → N be a map in M^C. To prove that Ker(f) is a C-comodule, we need to show, for any m ∈ Ker(f):

ρ(m) ∈ Ker(f) ⊗ C = Ker(f ⊗ I_C)

(using the fact that C is k-flat). This is obvious, since

(f ⊗ I_C)ρ(m) = f(m_[0]) ⊗ m_[1] = ρ(f(m)) = 0

On Coker(f), we put a C-comodule structure as follows: writing [n] for the class of n ∈ N in Coker(f), we set ρ([n]) = [n_[0]] ⊗ n_[1]. This is well-defined: if n = f(m), then

[n_[0]] ⊗ n_[1] = [f(m)_[0]] ⊗ f(m)_[1] = [f(m_[0])] ⊗ m_[1] = 0
It is clear that every monic in M^C is the kernel of its cokernel, and that every epic is the cokernel of its kernel, so M^C is an abelian category.
Let us next see that M^C is an AB3-category. If {M_λ | λ ∈ Λ} is a family in M^C, then M = ⊕_λ M_λ is again a comodule: we have maps

(i_λ ⊗ I_C) ∘ ρ_λ : M_λ → M_λ ⊗ C → M ⊗ C

and therefore a unique map ρ : M → M ⊗ C making M into a comodule, and every i_λ into a right C-colinear map. The fact that M^C is an AB5-category follows easily since M is AB5, and the functor forgetting the C-coaction is exact.
Let us finally show that M^C has a family of generators. First observe that every right C-comodule of the form M ⊗ C, with C-coaction induced by C, is generated by C. Indeed, for any k-module M, we can find an epimorphism k^{(Λ)} → M in M, and therefore an epimorphism k^{(Λ)} ⊗ C = C^{(Λ)} → M ⊗ C in M^C. Now we claim that the C-subcomodules of C form a family of generators of M^C. It suffices to show that for every right C-comodule M and m ∈ M, there exist a C-subcomodule D of C and a C-colinear map f : D → M such that m ∈ Im(f).
ρ : M → M ⊗ C is a monomorphism in M^C, so M is isomorphic to ρ(M) = {n_[0] ⊗ n_[1] | n ∈ M}. C generates M ⊗ C, so there exist a C-colinear map f : C → M ⊗ C and c ∈ C such that f(c) = ρ(m). Now let

D = {d ∈ C | f(d) ∈ ρ(M)}

For d ∈ D, we can find n ∈ M such that f(d) = n_[0] ⊗ n_[1], and we see that

(ρ ⊗ I_C)(f(d)) = n_[0] ⊗ n_[1] ⊗ n_[2] ∈ ρ(M) ⊗ C

D is defined by the commutative diagram with exact rows

0 → D → C
0 → ρ(M) → M ⊗ C

in which the vertical maps are the restriction of f and f itself. C is flat, so tensoring with C gives a commutative diagram with exact rows

0 → D ⊗ C → C ⊗ C
0 → ρ(M) ⊗ C → M ⊗ C ⊗ C

with vertical maps f|_D ⊗ I_C and f ⊗ I_C, and

D ⊗ C = {x ∈ C ⊗ C | (f ⊗ I_C)(x) ∈ ρ(M) ⊗ C}

It follows that ρ(d) ∈ D ⊗ C, and D is a right C-comodule. We now have f : D → ρ(M) ≅ M in M^C, and f(c) = m_[0] ⊗ m_[1], which corresponds to m.
2. ⇒ 3. is trivial.
3. ⇒ 1. The forgetful functor F : M^C → M is a left adjoint of • ⊗ C : M → M^C. The unit and counit of the adjunction are given by

ρ : M → M ⊗ C, ρ(m) = m_[0] ⊗ m_[1]
ε_N : N ⊗ C → N, ε_N(n ⊗ c) = ε(c)n
for all M ∈ M^C and N ∈ M. It is well-known that a functor between abelian categories that is a right adjoint of a covariant functor is left exact (see e.g. [11, I.7.1]), and it follows that • ⊗ C : M → M^C is exact. Now the forgetful functor M^C → M is also left exact, by assumption, so the composition • ⊗ C : M → M is left exact, and C is flat, as needed.

Remark 1. The assumption that the forgetful functor is exact, in the second and third condition of the Proposition, means the following: for a morphism f in M^C, the (co)kernel of f in M^C has to be equal as a k-module to the (co)kernel of f viewed as a map between k-modules. J. Gómez Torrecillas kindly pointed out to us that this condition is missing in Wisbauer's paper [187]. For an example of a coalgebra C such that M^C is abelian, while C is not flat, and the functor forgetting the coaction is not exact, we refer to [80].

The cotensor product

Take M ∈ M^C and N ∈ ^C M. The cotensor product M□_C N is defined as the equalizer

0 → M□_C N → M ⊗ N ⇉ M ⊗ C ⊗ N

Example 5. Let C = kX, and M and N X-graded modules. Then

M□_C N = ⊕_{x∈X} M_x ⊗ N_x
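In this graded setting the cotensor product can be computed degree by degree: only pairs of basis elements of matching degree survive the equalizer. A small sketch, not from the text, with a hypothetical encoding (graded modules as dicts from degrees to lists of basis labels):

```python
from itertools import product

# An X-graded k-module, encoded as {x: list of basis vectors of M_x}.
M = {"a": ["m1", "m2"], "b": ["m3"]}
N = {"a": ["n1"], "b": ["n2", "n3"], "c": ["n4"]}

def cotensor_basis(M, N):
    # M box_C N = direct sum over x of M_x (x) N_x: only matching
    # degrees survive the equalizer defining the cotensor product.
    return [(m, n, x)
            for x in set(M) & set(N)
            for m, n in product(M[x], N[x])]

basis = cotensor_basis(M, N)
# dim(M box N) = sum over x of dim M_x * dim N_x = 2*1 + 1*2 = 4
assert len(basis) == 4
```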
For a fixed right C-comodule M, we have a functor

M□_C • : ^C M → M

If M is flat as a k-module, then M ⊗ • is an exact functor, and it follows easily that M□_C • is left exact, but not necessarily right exact.

Definition 2. A right C-comodule M is called right C-coflat if it is flat as a k-module, and if M□_C • is an exact functor. A similar definition applies to left C-comodules.
Now take M ∈ M^C, N ∈ ^C M, and P ∈ M. We then have a natural map

f : (M□_C N) ⊗ P → M□_C (N ⊗ P)

given by f((Σ_i m_i ⊗ n_i) ⊗ p) = Σ_i m_i ⊗ (n_i ⊗ p).

Lemma 2. With notation as above, the natural map f : (M□_C N) ⊗ P → M□_C (N ⊗ P) is an isomorphism in each of the following cases:
1. P is k-flat (e.g. if k is a field);
2. M is right C-coflat.

Proof. 1. M□_C N is defined by the exact sequence

0 → M□_C N → M ⊗ N ⇉ M ⊗ C ⊗ N

Using the fact that P is k-flat, we obtain a commutative diagram with exact rows

0 → (M□_C N) ⊗ P → M ⊗ N ⊗ P ⇉ M ⊗ C ⊗ N ⊗ P
0 → M□_C (N ⊗ P) → M ⊗ N ⊗ P ⇉ M ⊗ C ⊗ N ⊗ P

in which the left vertical map is f and the other vertical maps are identities, and the result follows from the Five Lemma (see e.g. [123, Sec. VIII.4]).
2. Recall the definition of the tensor product: N ⊗ P = (N × P)/I, where N × P denotes the free k-module on the set N × P, and I is the submodule generated by the elements

(n, p + q) − (n, p) − (n, q) ; (n + m, p) − (n, p) − (m, p) ; (nx, p) − (n, xp)

so we have an exact sequence of left C-comodules

0 → I → N × P → N ⊗ P → 0

and, using the right C-coflatness of M, we find a commutative diagram with exact rows

0 → M□_C I → M□_C (N × P) → M□_C (N ⊗ P) → 0
0 →    J    → (M□_C N) × P → (M□_C N) ⊗ P → 0

in which the middle vertical map is an isomorphism and the right vertical map is f, and the result follows again from the Five Lemma.

Assume that A is a k-algebra, C a k-coalgebra, P ∈ _A M, M ∈ M^C and N ∈ ^C M_A. By this we mean that N is a left C-comodule and a right A-module such that the right A-action is left C-colinear, i.e.

ρ^l(na) = n_[−1] ⊗ n_[0]a

for all n ∈ N and a ∈ A.
Lemma 3. With notation as above, the natural map f : (M□_C N) ⊗_A P → M□_C (N ⊗_A P) is an isomorphism in each of the following situations:
1. P is left A-flat;
2. M is right C-coflat.

Proof. 1. The proof is identical to the proof of the first part of Lemma 2.
2. The right A-action on M□_C N is given by

(Σ_i m_i ⊗ n_i) a = Σ_i m_i ⊗ n_i a ∈ M□_C N

for every Σ_i m_i ⊗ n_i ∈ M□_C N. Now (M□_C N) ⊗_A P is the coequalizer of

(M□_C N) ⊗ A ⊗ P ⇉ (M□_C N) ⊗ P

which is by Lemma 2 isomorphic to the coequalizer of

M□_C (N ⊗ A ⊗ P) ⇉ M□_C (N ⊗ P)

and this coequalizer is isomorphic to M□_C (N ⊗_A P) because M is right C-coflat.

In some situations, the cotensor product can be computed explicitly.

Proposition 6. Let M and N be right C-comodules, and assume that M is finitely generated and projective as a k-module. Then we have a natural isomorphism

Hom^C(M, N) ≅ N□_C M*

Proof. We use notation as in Proposition 4. We know from (1.18) that M* is a left C-comodule. From (1.18), we deduce that

⟨m*_[0], m⟩ m*_[−1] = ⟨m*, m_[0]⟩ m_[1]
(1.19)
M is finitely generated projective, so we have an isomorphism α : Hom(M, N) → N ⊗ M* given by

α(f) = Σ_{i=1}^n f(m_i) ⊗ m*_i and α^{−1}(n ⊗ m*)(m) = ⟨m*, m⟩ n
We will show that α restricts to the required isomorphism. Assume first that f is right C-colinear. Using (1.18) we find that

Σ_i f(m_i) ⊗ m*_{i[−1]} ⊗ m*_{i[0]} = Σ_{i,j} f(m_i) ⊗ m_{j[1]} ⊗ ⟨m*_i, m_{j[0]}⟩ m*_j
= Σ_j f(m_{j[0]}) ⊗ m_{j[1]} ⊗ m*_j
= Σ_j f(m_j)_[0] ⊗ f(m_j)_[1] ⊗ m*_j

and it follows that α(f) ∈ N□_C M*. Now take Σ_k n_k ⊗ n*_k ∈ N□_C M*, and let f = α^{−1}(Σ_k n_k ⊗ n*_k). f is then right C-colinear, since for all m ∈ M, we have

f(m_[0]) ⊗ m_[1] = Σ_k ⟨n*_k, m_[0]⟩ n_k ⊗ m_[1]
= Σ_k ⟨n*_{k[0]}, m⟩ n_k ⊗ n*_{k[−1]}   (by (1.19))
= Σ_k ⟨n*_k, m⟩ n_{k[0]} ⊗ n_{k[1]}
= ρ(f(m))

Coflatness versus injectivity

Let C be a coalgebra over a field. We will show that a C-comodule is an injective object in the category of C-comodules if and only if it is C-coflat. Our proof is based on the approach presented in [63]. First we need some Lemmas.

Lemma 4. Let C be a coalgebra over a field k, and M a right C-comodule. For every m ∈ M, there exists a finite dimensional subcomodule M' of M containing m. Consequently there exist an index set J and a set {M_j | j ∈ J} consisting of finite dimensional right C-comodules, and an epimorphism φ : ⊕_{j∈J} M_j → M in M^C.

Proof. Let {c_i | i ∈ I} be a basis for C as a k-vector space, and write

ρ(m) = Σ_{i∈I} m_i ⊗ c_i

where only a finite number of the m_i are nonzero (for a change, we do not use the Sweedler notation). Let M' be the k-subspace of M generated by the m_i. M' is finite dimensional, and

m = Σ_{i∈I} ε(c_i) m_i ∈ M'
We can write

Δ(c_i) = Σ_{j,l∈I} a_i^{jl} c_j ⊗ c_l

where only a finite number of the a_i^{jl} ∈ k are different from 0. We now compute that

Σ_{i∈I} ρ(m_i) ⊗ c_i = Σ_{i∈I} m_i ⊗ Δ(c_i) = Σ_{i,j,l∈I} a_i^{jl} m_i ⊗ c_j ⊗ c_l = Σ_{i,j,l∈I} a_l^{ji} m_l ⊗ c_j ⊗ c_i

Since the c_i form a basis of C, we have

ρ(m_i) = Σ_{j,l∈I} a_l^{ji} m_l ⊗ c_j ∈ M' ⊗ C

for all i ∈ I, and this proves that M' is a subcomodule of M.

Consider two right C-comodules M and Q. We say that Q is M-injective if for every subcomodule M' ⊂ M, the canonical map Hom^C(M, Q) → Hom^C(M', Q) is surjective. Clearly Q is an injective comodule (i.e. an injective object of M^C) if and only if Q is M-injective for every M ∈ M^C.

Lemma 5. If {M_i | i ∈ I} is a collection of C-comodules, and Q ∈ M^C is M_i-injective for all i ∈ I, then Q is also ⊕_{i∈I} M_i-injective.

Proof. Write M = ⊕_{i∈I} M_i. Let M' be a subcomodule of M, and f : M' → Q C-colinear. Consider

P = {(L, g) | M' ⊂ L ⊂ M in M^C, g : L → Q in M^C, g|_{M'} = f}

P is nonempty since (M', f) ∈ P, and P is ordered: (L, g) ≤ (L', g') if L ⊂ L' and g'|_L = g. It is easy to show that this ordering is inductive, so P has a maximal element, by Zorn's Lemma. We call this element (L_0, g_0), and we claim that M_i ⊂ L_0, for all i ∈ I. Assume M_i is not contained in L_0, and consider

h = g_0|_{M_i ∩ L_0} : M_i ∩ L_0 → Q

Since Q is M_i-injective, we have a C-colinear map h' : M_i → Q such that h'|_{M_i ∩ L_0} = h. Now define g : M_i + L_0 → Q as follows:

g(x + y) = h'(x) + g_0(y)
for x ∈ M_i and y ∈ L_0. g is well-defined, since h' and g_0 coincide on M_i ∩ L_0. Now g|_{L_0} = g_0 and M_i + L_0 strictly contains L_0, so (L_0, g_0) < (M_i + L_0, g) in P, which is a contradiction. We conclude that M_i ⊂ L_0 for all i ∈ I, so M = ⊕_{i∈I} M_i ⊂ L_0, and g_0 : M = L_0 → Q extends f.

Theorem 1. Let C be a coalgebra over a field k. For a right C-comodule Q, the following assertions are equivalent.
1. Q is injective as a C-comodule;
2. Q is M-injective, for every finite dimensional C-comodule M;
3. Q is right C-coflat.

Proof. 1. ⇒ 3. Assume that Q is injective. The coaction ρ_Q is monomorphic, so we have a C-colinear map ν_Q : Q ⊗ C → Q splitting ρ_Q. Let f : X → Y be a surjective morphism of left C-comodules, and take Σ_i q_i ⊗ y_i ∈ Q□_C Y. As f is surjective, we find x_i ∈ X such that f(x_i) = y_i, and our problem is that we don't know whether Σ_i q_i ⊗ x_i ∈ Q□_C X. We have

Σ_i q_{i[0]} ⊗ q_{i[1]} ⊗ f(x_i) = Σ_i q_i ⊗ x_{i[−1]} ⊗ f(x_{i[0]})

so

Σ_i q_i ⊗ y_i = Σ_i ν_Q(q_{i[0]} ⊗ q_{i[1]}) ⊗ f(x_i) = (I_Q ⊗ f)(Σ_i ν_Q(q_i ⊗ x_{i[−1]}) ⊗ x_{i[0]})

Using the fact that ν_Q is C-colinear, we find

(ρ_Q ⊗ I_X)(Σ_i ν_Q(q_i ⊗ x_{i[−1]}) ⊗ x_{i[0]}) = Σ_i ν_Q(q_i ⊗ x_{i[−2]}) ⊗ x_{i[−1]} ⊗ x_{i[0]} = (I_Q ⊗ ρ^l_X)(Σ_i ν_Q(q_i ⊗ x_{i[−1]}) ⊗ x_{i[0]})

so Σ_i ν_Q(q_i ⊗ x_{i[−1]}) ⊗ x_{i[0]} ∈ Q□_C X, and this shows that I_Q □_C f : Q□_C X → Q□_C Y is surjective.
3. ⇒ 2. Let M ∈ M^C be finite dimensional, and take a subcomodule M' ⊂ M. Then M* and M'* are left C-comodules, and Proposition 6 implies that

Q□_C M* ≅ Hom^C(M, Q) and Q□_C M'* ≅ Hom^C(M', Q)

Now M* → M'* is surjective, so Q□_C M* → Q□_C M'* is also surjective since Q is C-coflat, and we find that Hom^C(M, Q) → Hom^C(M', Q) is surjective, as needed.
2. ⇒ 1. Take an arbitrary N ∈ M^C. From Lemma 4, we know that there
exists a collection {M_i | i ∈ I} of finite dimensional C-comodules and a C-colinear surjection φ : ⊕_{i∈I} M_i → N. Let P = Ker φ. Now take a subcomodule N' ⊂ N, and let M' = φ^{−1}(N'). Then P ⊂ M', so we have a commutative diagram with exact rows in M^C:

0 → P → M' → N' → 0
0 → P → M → N → 0

where M = ⊕_{i∈I} M_i, the horizontal surjections are (restrictions of) φ, and the vertical maps are the identity on P and the inclusions M' ⊂ M and N' ⊂ N. Applying Hom^C(•, Q) to this diagram, we find a commutative diagram with exact rows

0 → Hom^C(N, Q) → Hom^C(M, Q) → Hom^C(P, Q)
0 → Hom^C(N', Q) → Hom^C(M', Q) → Hom^C(P, Q)

where the vertical maps are the restriction maps and the identity on Hom^C(P, Q). Hom^C(M, Q) → Hom^C(M', Q) is surjective, by Lemma 5. An easy diagram argument shows that Hom^C(N, Q) → Hom^C(N', Q) is surjective, as needed.

Comodule algebras and comodule coalgebras

Let H be a bialgebra. A right H-comodule A that is also a k-algebra is called a right H-comodule algebra if the unit and multiplication are right H-colinear, that is,

ρ^r(ab) = a_[0]b_[0] ⊗ a_[1]b_[1] and ρ^r(1_A) = 1_A ⊗ 1_H
(1.20)
for all a, b ∈ A. Left H-comodule algebras are introduced in a similar way, and if A is a right H-comodule algebra, then Aop is a left H opcop -comodule algebra. A k-coalgebra C that is also a right H-comodule is called a right H-comodule coalgebra if the comultiplication and the counit are right H-colinear, or c[0](1) ⊗ c[0](2) ⊗ c[1] = c(1)[0] ⊗ c(2)[0] ⊗ c(1)[1] c(2)[1]
(1.21)
and

ε_C(c_[0]) c_[1] = ε_C(c) 1_H   (1.22)

for all c ∈ C.

Example 6. Let G be a (semi)group, and take H = kG. Then a kG-comodule algebra is nothing else than a G-graded k-algebra (see [146] for an extensive study of graded rings). A kG-comodule coalgebra is a G-graded coalgebra (see [144]).
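Unwinding Example 6 for H = kZ: a Z-graded algebra, with its elements stored by degree, satisfies (1.20) precisely because multiplication adds degrees. A small sketch, not from the text, with a hypothetical encoding (an element of k[x, x^{-1}] is a dict degree → coefficient, and the coaction tags each homogeneous piece with its degree):

```python
# Elements of the Z-graded algebra k[x, x^-1], encoded as {degree: coefficient}.
def mul(a, b):
    out = {}
    for i, c in a.items():
        for j, d in b.items():
            out[i + j] = out.get(i + j, 0) + c * d
    return out

def coaction(a):
    # rho(a) = sum_g a_g (x) g: list each homogeneous component with its degree.
    return [({g: c}, g) for g, c in sorted(a.items())]

def axiom_rhs(a, b):
    # Right-hand side of (1.20): sum over pairs of homogeneous
    # components a_g b_h (x) gh.
    out = {}
    for g, c in a.items():
        for h, d in b.items():
            comp = out.setdefault(g + h, {})
            comp[g + h] = comp.get(g + h, 0) + c * d
    return [(comp, g) for g, comp in sorted(out.items())]

a = {0: 1, 2: 3}       # 1 + 3x^2
b = {-1: 2, 1: 1}      # 2x^-1 + x
assert coaction(mul(a, b)) == axiom_rhs(a, b)   # rho(ab) = a[0]b[0] (x) a[1]b[1]
```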
Proposition 7. Let C be a coalgebra which is finitely generated and projective as a k-module. There is a bijective correspondence between right H-comodule coalgebra structures on C and left H-comodule algebra structures on C*.

Proof. Let {c_i, c*_i | i = 1, ..., n} be a finite dual basis of C, and assume that C is a right H-comodule coalgebra. We know from Proposition 4 that C* is a left H-comodule, with left H-coaction given by

ρ^l(c*) = Σ_{i=1}^n c_{i[1]} ⊗ ⟨c*, c_{i[0]}⟩ c*_i

This makes C* into a left H-comodule algebra since

c*_[−1]d*_[−1] ⊗ c*_[0]d*_[0] = Σ_{i,j=1}^n c_{i[1]}c_{j[1]} ⊗ ⟨c*, c_{i[0]}⟩⟨d*, c_{j[0]}⟩ c*_i ∗ c*_j
(1.5) = Σ_{i=1}^n c_{i(1)[1]}c_{i(2)[1]} ⊗ ⟨c*, c_{i(1)[0]}⟩⟨d*, c_{i(2)[0]}⟩ c*_i
(1.21) = Σ_{i=1}^n c_{i[1]} ⊗ ⟨c* ∗ d*, c_{i[0]}⟩ c*_i
= ρ^l(c* ∗ d*)

and

ρ^l(ε_C) = Σ_{i=1}^n c_{i[1]} ⊗ ⟨ε_C, c_{i[0]}⟩ c*_i
(1.22) = Σ_{i=1}^n 1_H ⊗ ⟨ε_C, c_i⟩ c*_i = 1_H ⊗ ε_C

The further details of the proof are left to the reader.
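In the same spirit of transferring structure to the dual, Example 3 dualizes via (1.14): for a finite right G-set X, C = kX is a right kG-module coalgebra, and C* = Maps(X, k) becomes a left kG-module algebra with (h · f)(x) = f(xh). A small sketch, not from the text, with hypothetical names (X = Z/3 with G = Z/3 acting by rotation):

```python
# Right G-set: X = {0, 1, 2}, G = Z/3 acting by rotation x.g = (x + g) mod 3.
X = [0, 1, 2]
G = [0, 1, 2]

def act(x, g):
    return (x + g) % 3

# C* = Maps(X, k): a function is a dict x -> value; the algebra structure
# dual to the grouplike coalgebra kX is pointwise multiplication.
def mult(f1, f2):
    return {x: f1[x] * f2[x] for x in X}

# Left kG-action from (1.14): (g.f)(x) = f(x.g)
def g_act(g, f):
    return {x: f[act(x, g)] for x in X}

f1 = {0: 2, 1: 3, 2: 5}
f2 = {0: 1, 1: 4, 2: 7}
one = {x: 1 for x in X}
for g in G:
    # For group-like g, the module algebra condition (1.12) reads
    # g.(f1 f2) = (g.f1)(g.f2) and g.1 = 1.
    assert g_act(g, mult(f1, f2)) == mult(g_act(g, f1), g_act(g, f2))
    assert g_act(g, one) == one
```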
1.2 Adjoint functors

We give a brief discussion of properties of pairs of adjoint functors; of course these results are well-known, but we have organized them in such a way that they can be applied easily to Frobenius and separable functors in Chapter 3. We will occasionally use the Godement product of two natural transformations. Let us introduce the Godement product briefly, referring the reader to [21] for more detail. Let C, D and E be categories, and consider functors F, G : C → D and H, K : D → E and natural transformations
α : F → G and β : H → K

The Godement product β ∗ α : HF → KG is defined by

(β ∗ α)_C = β_{G(C)} ∘ H(α_C) = K(α_C) ∘ β_{F(C)} : HF(C) → KG(C)

If F = G and α = 1_F, then we find (β ∗ 1_F)_C = β_{F(C)}. If H = K and β = 1_H, then we find (1_H ∗ α)_C = H(α_C). Now consider, in addition, functors L : C → D and M : D → E and natural transformations γ : G → L and δ : K → M; then we have the following formula:

(δ ∗ γ) ∘ (β ∗ α) = (δ ∘ β) ∗ (γ ∘ α)

Pairs of adjoint functors

Let A, B, C and D be categories, and consider functors F : A → C, G : B → C, H : A → D, and K : B → D. We have functors

Hom_C(F, G), Hom_D(H, K) : A^op × B → Sets

and we can consider natural transformations θ : Hom_C(F, G) → Hom_D(H, K). The naturality of θ can be expressed as follows: given a : A' → A in A, b : B → B' in B, and f : F(A) → G(B) in C, we have

θ_{A',B'}(G(b) ∘ f ∘ F(a)) = K(b) ∘ θ_{A,B}(f) ∘ H(a)   (1.23)
Proposition 8. For two functors F : C → D and G : D → C, we have the following isomorphisms of classes of natural transformations:

Nat(1_C, GF) ≅ Nat(Hom_D(F, •), Hom_C(•, G))   (1.24)
Nat(FG, 1_D) ≅ Nat(Hom_C(•, G), Hom_D(F, •))   (1.25)
Proof. (Sketch) Consider a natural transformation η : 1_C → GF. The corresponding natural transformation θ : Hom_D(F, •) → Hom_C(•, G) is defined by

θ_{C,D}(f) = G(f) ∘ η_C   (1.26)

for all f : F(C) → D in D. Conversely, given θ, the corresponding η is given by η_C = θ_{C,F(C)}(I_{F(C)}) for all C ∈ C.

Lemma 6. Let F and G be as in Proposition 8, and consider natural transformations θ : Hom_D(F, •) → Hom_C(•, G) and ψ : Hom_C(•, G) → Hom_D(F, •). Let η : 1_C → GF and ε : FG → 1_D be the corresponding natural transformations from Proposition 8.
1. ψ ∘ θ is the identity natural transformation if and only if

(ε ∗ F) ∘ (F ∗ η) = 1_F
(1.27)
2. θ ◦ ψ is the identity natural transformation if and only if (G ∗ ε) ◦ (η ∗ G) = 1G
(1.28)
Proof. 1. Take f : F(C) → D in D. We easily compute that

ψ_{C,D}(θ_{C,D}(f)) = ε_D ∘ FG(f) ∘ F(η_C)

Now take D = F(C) and f = I_{F(C)}. Then

ψ_{C,F(C)}(θ_{C,F(C)}(I_{F(C)})) = ε_{F(C)} ∘ F(η_C)

and, under the assumption that ψ ∘ θ is the identity natural transformation, we find (1.27). Conversely, assume that (1.27) holds. ε is natural, so ε_D ∘ FG(f) = f ∘ ε_{F(C)} for any f : F(C) → D in D, and we find that

f = f ∘ ε_{F(C)} ∘ F(η_C) = ε_D ∘ FG(f) ∘ F(η_C) = ψ_{C,D}(θ_{C,D}(f))

The proof of 2. is similar.
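The triangle identities (1.27) and (1.28) can be checked concretely for a familiar adjunction: the free-monoid functor F (sending a set X to the words over X) is left adjoint to the forgetful functor G, with unit η_X(x) = [x] and counit ε_M multiplying out a word of monoid elements. A small sketch, not from the text (Python lists as words; integer addition as a sample monoid):

```python
from functools import reduce

# Unit: eta_X(x) = the one-letter word [x].
def eta(x):
    return [x]

# Counit at a monoid (m_op, m_unit): multiply out a word of monoid elements.
def epsilon(m_op, m_unit, word):
    return reduce(m_op, word, m_unit)

# (1.27): epsilon_{F(X)} o F(eta_X) = 1_{F(X)}.  F(eta) relabels letters,
# and the free monoid F(X) has concatenation as its operation.
word = ["a", "b", "c"]
f_eta = [eta(x) for x in word]                   # [['a'], ['b'], ['c']]
assert epsilon(lambda u, v: u + v, [], f_eta) == word

# (1.28): G(epsilon_M) o eta_{G(M)} = 1_{G(M)}, here for M = (Z, +, 0).
m = 42
assert epsilon(lambda u, v: u + v, 0, eta(m)) == m
```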
Recall that (F, G) is an adjoint pair of functors if Hom_D(F, •) and Hom_C(•, G) are naturally isomorphic, or, equivalently, if there exist natural transformations η : 1_C → GF and ε : FG → 1_D satisfying (1.27)-(1.28). In this case, F is called a left adjoint of G, and G a right adjoint of F. η is called the unit of the adjunction, while ε is called the counit. It is well-known that the left or right adjoint of a functor is unique up to natural isomorphism; we include a proof for completeness' sake.

Proposition 9. (Kan) [101]. If G and G' are both right adjoints of a functor F : C → D, then G and G' are naturally isomorphic.

Proof. We have two adjunctions (F, G) and (F, G'). Let (η, ε) and (η', ε') be the unit and counit of the two adjunctions, and consider the natural transformations

γ = (G' ∗ ε) ∘ (η' ∗ G) : G → G'
γ' = (G ∗ ε') ∘ (η ∗ G') : G' → G

η is natural, so for any D ∈ D we have a commutative square with top row G'(ε_D) : G'FG(D) → G'(D), bottom row GFG'(ε_D) : GFG'FG(D) → GFG'(D), and vertical maps η_{G'FG(D)} and η_{G'(D)}, or

(η ∗ G') ∘ (G' ∗ ε) = (GFG' ∗ ε) ∘ (η ∗ G'FG)

Now η' is natural, and we have a commutative square with top row η'_{G(D)} : G(D) → G'FG(D), bottom row GF(η'_{G(D)}) : GFG(D) → GFG'FG(D), and vertical maps η_{G(D)} and η_{G'FG(D)}, or

(η ∗ G'FG) ∘ (η' ∗ G) = (GF ∗ η' ∗ G) ∘ (η ∗ G)

The naturality of ε' gives a commutative square with top row FG'(ε_D) : FG'FG(D) → FG'(D), bottom row ε_D : FG(D) → D, and vertical maps ε'_{FG(D)} and ε'_D,
or

ε' ∘ (FG' ∗ ε) = ε ∘ (ε' ∗ FG)

and it follows that

(G ∗ ε') ∘ (GFG' ∗ ε) = (G ∗ ε) ∘ (G ∗ ε' ∗ FG)

Combining all these formulas, we find

γ' ∘ γ = (G ∗ ε') ∘ (η ∗ G') ∘ (G' ∗ ε) ∘ (η' ∗ G)
= (G ∗ ε') ∘ (GFG' ∗ ε) ∘ (η ∗ G'FG) ∘ (η' ∗ G)
= (G ∗ ε) ∘ (G ∗ ε' ∗ FG) ∘ (GF ∗ η' ∗ G) ∘ (η ∗ G)
= (G ∗ ε) ∘ (G ∗ ((ε' ∗ F) ∘ (F ∗ η')) ∗ G) ∘ (η ∗ G)
= (G ∗ ε) ∘ (G ∗ 1_F ∗ G) ∘ (η ∗ G)
= (G ∗ ε) ∘ (η ∗ G) = 1_G

In a similar way, we obtain that γ ∘ γ' = 1_{G'}, and it follows that G and G' are naturally isomorphic.

Recall the following properties of adjoint pairs:

Theorem 2. Let (F, G) be an adjoint pair of functors. F preserves colimits, and, in particular, coproducts, initial objects and cokernels. G preserves limits, and, in particular, products, final objects and kernels. If C and D are abelian categories, then F is right exact, and G is left exact. If F is exact, then G preserves injective objects. If G is exact, then F preserves projective objects.

Here is another well-known property of adjoint functors that will be useful in the sequel.

Proposition 10. Let (F, G) be an adjoint pair of functors; then we have isomorphisms

Nat(F, F) ≅ Nat(G, G) ≅ Nat(1_C, GF) ≅ Nat(FG, 1_D)

Proof. We will show that Nat(G, G) ≅ Nat(1_C, GF); the proof of the other assertions is left to the reader. For a natural transformation θ : 1_C → GF, we define α = X(θ) : G → G by

α_D = G(ε_D) ∘ θ_{G(D)}   (1.29)

Conversely, for α : G → G, θ = X^{−1}(α) : 1_C → GF is defined by

θ_C = α_{F(C)} ∘ η_C   (1.30)
We are done if we can show that X and X^{−1} are each other's inverses. First take α : G → G, and θ = X^{−1}(α). By (1.30), θ_{G(D)} = α_{FG(D)} ∘ η_{G(D)}, and the naturality of α gives G(ε_D) ∘ α_{FG(D)} = α_D ∘ G(ε_D). From (1.28), the composition G(ε_D) ∘ η_{G(D)} is I_{G(D)}, so

X(θ)_D = G(ε_D) ∘ θ_{G(D)} = α_D ∘ G(ε_D) ∘ η_{G(D)} = α_D

and α = X(θ). Conversely, take θ : 1_C → GF, and let α = X(θ). By (1.29), α_{F(C)} = G(ε_{F(C)}) ∘ θ_{GF(C)}; the naturality of θ gives θ_{GF(C)} ∘ η_C = GF(η_C) ∘ θ_C, and (1.27) gives G(ε_{F(C)}) ∘ GF(η_C) = G(ε_{F(C)} ∘ F(η_C)) = I_{GF(C)}. Hence

X^{−1}(α)_C = α_{F(C)} ∘ η_C = G(ε_{F(C)}) ∘ GF(η_C) ∘ θ_C = θ_C

and θ = X^{−1}(α).

A result of the same type is the following:

Proposition 11. Let (F, G) be an adjoint pair of functors. Then we have isomorphisms

Nat(GF, 1_C) ≅ Nat(Hom_D(F, F), Hom_C(•, •))   (1.31)
Nat(1_D, FG) ≅ Nat(Hom_C(G, G), Hom_D(•, •))   (1.32)

Proof. We outline the proof of the first statement. Given a natural transformation ν : GF → 1_C, we define θ = α(ν) : Hom_D(F, F) → Hom_C(•, •) as follows: take g : F(C) → F(C') in D, and put

θ_{C,C'}(g) = ν_{C'} ∘ G(g) ∘ η_C

Straightforward arguments show that θ is natural. Conversely, given θ : Hom_D(F, F) → Hom_C(•, •), we define α^{−1}(θ) = ν : GF → 1_C by
νC = θGF (C),C (εF (C) ) : GF (C) → C We leave it as an exercise to show that ν is natural, as needed, and that α and α−1 are inverses. The proof of the second statement is similar. Let us just mention that, given ζ : 1D → F G, we define β(ζ) = ψ : HomC (G, G) → HomD (•, •) as follows: given f : G(D) → G(D ) in C, we put ψD,D (f ) = εD ◦ F (f ) ◦ ζD
1.3 Separable algebras and Frobenius algebras

In this Section, we give the classical definitions and elementary properties of separable and Frobenius algebras. We will refer to them in Chapter 3, where we will introduce separable and Frobenius functors, and show that they are generalizations of the classical concepts. The Section on separable algebras is based on [109], and the one on Frobenius algebras on [113].

Separable algebras

Let k be a commutative ring, A a k-algebra and M an A-bimodule. Recall that M can be viewed as a left A^e-module, where A^e = A ⊗ A^op is the enveloping algebra of A. A derivation of A in M is a k-linear map D : A → M such that

D(ab) = D(a)b + aD(b)   (1.33)
for all a, b ∈ A. Der_k(A, M) will be the k-module consisting of all derivations of A into M. For any m ∈ M, we have a derivation D_m, given by

D_m : A → M, D_m(a) = am − ma

called the inner derivation associated to m. It is clear that D_m = 0 if and only if m ∈ M^A = {m ∈ M | am = ma, ∀a ∈ A}, so we have an exact sequence

0 → M^A → M → Der_k(A, M)
(1.34)
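For instance, taking A = M = M_2(Q), one can verify numerically that an inner derivation D_m satisfies the Leibniz rule (1.33). A small sketch, not from the text:

```python
# Inner derivation D_m(a) = am - ma on 2x2 rational matrices,
# checked against the Leibniz rule D(ab) = D(a)b + a D(b).
from fractions import Fraction as F

def mat_mul(x, y):
    return [[sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_sub(x, y):
    return [[x[i][j] - y[i][j] for j in range(2)] for i in range(2)]

def mat_add(x, y):
    return [[x[i][j] + y[i][j] for j in range(2)] for i in range(2)]

def d(m, a):                      # D_m(a) = am - ma
    return mat_sub(mat_mul(a, m), mat_mul(m, a))

m = [[F(0), F(1)], [F(0), F(0)]]
a = [[F(1), F(2)], [F(3), F(4)]]
b = [[F(0), F(1)], [F(1), F(5)]]

lhs = d(m, mat_mul(a, b))                                 # D(ab)
rhs = mat_add(mat_mul(d(m, a), b), mat_mul(a, d(m, b)))   # D(a)b + aD(b)
assert lhs == rhs
```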
We also note that

M^A ≅ Hom_{A^e}(A, M), M ≅ Hom_{A^e}(A^e, M)   (1.35)
The multiplication mA on A induces an epimorphism A ⊗ Aop → A of left Ae -modules, still denoted by mA , and we have another exact sequence 0 → I(A) = Ker (mA ) → A ⊗ Aop → A → 0
(1.36)
We have a derivation

δ : A → I(A), δ(a) = a ⊗ 1 − 1 ⊗ a

for all a ∈ A. It is clear that δ(a) ∈ I(A) and

Aδ(A) = I(A) = δ(A)A

Indeed, take x = Σ_i a_i ⊗ b_i ∈ I(A); then

x = Σ_i a_i(1 ⊗ b_i − b_i ⊗ 1) = −Σ_i a_i δ(b_i) ∈ Aδ(A)
Lemma 7. Let M be an A-bimodule over a k-algebra A. Then we have an isomorphism of k-modules

Hom_{A^e}(I(A), M) ≅ Der_k(A, M)   (1.37)

Proof. We define φ : Hom_{A^e}(I(A), M) → Der_k(A, M) by φ(f) = f ∘ δ; φ^{−1} is given by

φ^{−1}(D)(Σ_i a_i ⊗ b_i) = −Σ_i a_i D(b_i)

We show that φ^{−1}(D) is left A^e-linear, and leave the other details to the reader:

φ^{−1}(D)((a ⊗ b)(Σ_i a_i ⊗ b_i)) = φ^{−1}(D)(Σ_i aa_i ⊗ b_i b)
= −Σ_i aa_i D(b_i b) = −Σ_i aa_i D(b_i)b − Σ_i aa_i b_i D(b)
= −Σ_i aa_i D(b_i)b = (a ⊗ b) φ^{−1}(D)(Σ_i a_i ⊗ b_i)

where the term Σ_i aa_i b_i D(b) vanishes because Σ_i a_i b_i = 0 for Σ_i a_i ⊗ b_i ∈ I(A).

Applying the functor Hom_{A^e}(•, M) to the exact sequence (1.36) and taking (1.35) and (1.37) into account, we find a long exact sequence

0 → M^A → M → Der_k(A, M) → Ext^1_{A^e}(A, M) → 0   (1.38)

extending (1.34). Indeed, Ext^1_{A^e}(A^e, M) = 0, since A^e is projective as a left A^e-module. H^1(A, M) = Ext^1_{A^e}(A, M) is another notation, and H^1(A, M) is called the first Hochschild cohomology group of A with coefficients in M. For more information on Hochschild cohomology, we refer to [55, Ch. IX]. Thus (1.38) tells us that

H^1(A, M) ≅ Der_k(A, M)/InnDer_k(A, M)

A is called a separable k-algebra if it satisfies the equivalent conditions of the following theorem:
Theorem 3. For a k-algebra A the following statements are equivalent:
1. A is projective as a left A^e-module;
2. the exact sequence (1.36) splits as a sequence of left A^e-modules;
3. there exists e = e^1 ⊗ e^2 ∈ A ⊗ A such that ae = ea and e^1e^2 = 1 for all a ∈ A;   (1.39)
4. H^1(A, M) = 0, for any A-bimodule M;
5. the derivation δ : A → I(A), δ(a) = a ⊗ 1 − 1 ⊗ a, is inner;
6. every derivation D : A → M is inner, for any A-bimodule M.

An element e ∈ A^e satisfying ae = ea for all a ∈ A is called a Casimir element. If, in addition, e^1e^2 = 1, then e is an idempotent, and it is called a separability idempotent.

Proof. 1. ⇔ 2. is obvious.
2. ⇒ 3. If ψ : A → A^e is a left A^e-module map and a section of m_A, then e = ψ(1) satisfies (1.39).
3. ⇒ 2. Define ψ : A → A^e, ψ(a) = ae = ea. ψ is a left A^e-module map and m_A(ψ(a)) = ae^1e^2 = a.
1. ⇔ 4. is obvious.
4. ⇔ 6. follows from the exact sequence (1.38).
6. ⇒ 5. is trivial.
5. ⇒ 6. Let D : A → M be a derivation. From the above Lemma we know that there is an f ∈ Hom_{A^e}(I(A), M) such that D = f ∘ δ. δ is inner, so we can write δ = D_x, with x ∈ I(A). Now

D(a) = f(δ(a)) = f(ax − xa) = af(x) − f(x)a = D_{f(x)}(a)

i.e. D is inner.

Let us now prove some immediate properties of separable algebras.

Proposition 12. Any projective separable algebra A over a commutative ring k is finitely generated.

Proof. We take a dual basis {s_i, s*_i | i ∈ I} for A. This means that, for all s ∈ A, the set I(s) = {i ∈ I | ⟨s*_i, s⟩ ≠ 0} is finite, and

s = Σ_{i∈I} ⟨s*_i, s⟩ s_i

For all i ∈ I, we define φ_i : A ⊗ A^op → A by

φ_i(s ⊗ t) = ⟨s*_i, t⟩ s
such that

φ_i(s's ⊗ t) = ⟨s*_i, t⟩ s's = s' φ_i(s ⊗ t)
and φ_i is left A-linear. We now claim that {z_i = 1 ⊗ s_i, φ_i | i ∈ I} is a dual basis of A ⊗ A^op as a left A-module. Take z = s ⊗ t ∈ A ⊗ A^op. If φ_i(z) = ⟨s*_i, t⟩ s ≠ 0, then ⟨s*_i, t⟩ ≠ 0, so i ∈ I(t), and we conclude that I(z) = {i ∈ I | φ_i(z) ≠ 0} ⊂ I(t) is finite. Moreover
s ⊗ t = Σ_{i∈I} s ⊗ ⟨s*_i, t⟩ s_i = Σ_{i∈I} ⟨s*_i, t⟩ s ⊗ s_i
= Σ_{i∈I} φ_i(s ⊗ t) ⊗ s_i = Σ_{i∈I} φ_i(s ⊗ t)(1 ⊗ s_i)
A is separable, so we have a separability idempotent e = e^1 ⊗ e^2 ∈ A ⊗ A^op. Our next claim is that I(et) ⊂ I(e) for all t ∈ A. Indeed, we compute
φ_i(et) = φ_i(e^1 ⊗ e^2 t) = φ_i(t e^1 ⊗ e^2) = t φ_i(e)
so i ∈ I(et), that is φ_i(et) ≠ 0, implies φ_i(e) ≠ 0 and i ∈ I(e). For all t ∈ A, we finally compute
t = 1t = m(e)t = m(et) = m(Σ_{i∈I(e)} φ_i(et) z_i)
= m(Σ_{i∈I(e)} φ_i(e^1 ⊗ e^2 t) z_i) = m(Σ_{i∈I(e)} ⟨s*_i, e^2 t⟩ e^1 z_i)
= Σ_{i∈I(e)} ⟨s*_i, e^2 t⟩ e^1 m(z_i) = Σ_{i∈I(e)} ⟨s*_i, e^2 t⟩ e^1 s_i
Write e = Σ_{j=1}^r e^1_j ⊗ e^2_j. We have shown that
{e^1_j s_i, ⟨s*_i, e^2_j •⟩ | i ∈ I(e), j = 1, ..., r}
is a finite dual basis for A.

Proposition 13. A separable algebra A over a field k is semisimple.

Proof. Let e = e^1 ⊗ e^2 ∈ A ⊗ A be a separability idempotent and N an A-submodule of a right A-module M. As k is a field, the inclusion i : N → M splits in the category of k-vector spaces. Let f : M → N be a k-linear map such that f(n) = n, for all n ∈ N. Then
f̃ : M → N,  f̃(m) := Σ f(m e^1) e^2
is a right A-module map that splits the inclusion i. Thus N is an A-direct factor of M, and it follows that M is completely reducible. This shows that A is semisimple.

Examples 1. 1. Let k be a field of characteristic p, and a ∈ k \ k^p. Then l = k[X]/(X^p − a) is a purely inseparable field extension of k, and l is not a separable k-algebra in the above sense. Indeed, d/dX : l → l is a derivation that is not inner. More generally, one can prove that a finite field extension l/k is separable in the classical sense if and only if l is separable as a k-algebra, see [66, Proposition III.3.4].
2. Let k be a field. It can be shown that a separable k-algebra is of the form
A = M_{n_1}(D_1) × ··· × M_{n_r}(D_r)    (1.40)
where each D_i is a division algebra whose center is a finite separable field extension l_i of k. See [66, Theorem III.3.1] for details.
3. Any matrix ring M_n(k) is separable as a k-algebra: for any i = 1, ..., n, e_i = Σ_{j=1}^n e_{ji} ⊗ e_{ij} is a separability idempotent. More generally, any Azumaya algebra A is separable as a k-algebra.

Frobenius algebras
In this Section we recall the classical definition of a Frobenius algebra, showing how it came up in representation theory. We will work over a field k. For a k-algebra A, the k-dual A* = Hom_k(A, k) is an A-bimodule via the actions
⟨r* · r, r'⟩ = ⟨r*, rr'⟩,  ⟨r · r*, r'⟩ = ⟨r*, r'r⟩    (1.41)
for all r, r' ∈ A and r* ∈ A*.

Definition 3. A finite dimensional k-algebra A is called a Frobenius algebra if A ≅ A* as right A-modules.

Remarks 1. 1. A finite dimensional k-algebra A is Frobenius if and only if there exists a k-linear map λ : A → k such that for any ψ ∈ A* there exists a unique element r = r_ψ ∈ A such that ψ(x) = λ(rx) for all x ∈ A. In particular, the matrix algebra M_n(k) is Frobenius: take λ = Tr, the trace map.
2. The concept of Frobenius algebra is left-right symmetric: that is, A ≅ A* in M_A if and only if A ≅ A* in _A M. It suffices to observe that there exists a one-to-one correspondence between the following data:
– the set of all isomorphisms of right A-modules f : A → A*;
– the set of all bilinear, nondegenerate and associative maps B : A × A → k;
– the set of all isomorphisms of left A-modules g : A → A*,
given by the formulas
f(x)(y) = B(x, y) = g(y)(x)    (1.42)
for all x, y ∈ A.
Let us now explain how the original problem of Frobenius arises naturally in representation theory, as explained in the book of Lam [113]. We fix a basis {e_1, ..., e_n} of a finite dimensional algebra A. Then for any r ∈ A we can find scalars a^{(r)}_{ij} and b^{(r)}_{ij} such that
e_i r = Σ_{j=1}^n a^{(r)}_{ij} e_j,  r e_i = Σ_{j=1}^n b^{(r)}_{ji} e_j    (1.43)
for all i = 1, ..., n. Hence we have constructed k-linear maps
α, β : A → M_n(k),  α(r) = (a^{(r)}_{ij}),  β(r) = (b^{(r)}_{ij})    (1.44)
for all r ∈ A. It is straightforward to prove that α and β are algebra maps, i.e. they are representations of the k-algebra A.
The problem of Frobenius: when are the above representations α and β equivalent?
We recall that two representations α, β : A → M_n(k) are equivalent if there exists an invertible matrix U ∈ M_n(k) such that β(r) = U α(r) U^{-1}, for all r ∈ A. Before giving the answer to the problem we present one more construction: let (c^l_{ij})_{i,j,l=1,...,n} be the structure constants of the algebra A, that is,
e_i e_j = Σ_{k=1}^n c^k_{ij} e_k
for all i, j = 1, ..., n. For a = (a_1, ..., a_n) ∈ k^n, let P_a ∈ M_n(k) be the matrix given by
(P_a)_{i,j} = Σ_{k=1}^n a_k c^k_{ij}
The matrix P_a is called the paratrophic matrix. In the next Theorem, the equivalence 2. ⇔ 3. was the original theorem of Frobenius, while the equivalence 1. ⇔ 2. translates the problem from representation theory into the language of modules.

Theorem 4. For an n-dimensional algebra A, the following statements are equivalent:
1. A is Frobenius;
2. the representations α and β : A → M_n(k) constructed in (1.44) are equivalent;
3. there exists a ∈ k^n such that the paratrophic matrix P_a is invertible;
4. there exists a bilinear, nondegenerate and associative map B : A × A → k, i.e. B(xy, z) = B(x, yz), for all x, y, z ∈ A;
5. there exists a hyperplane of A that does not contain a nonzero right ideal of A;
6. there exists a pair (ε, e), called a Frobenius pair, where ε ∈ A* and e = e^1 ⊗ e^2 ∈ A ⊗ A, such that
ae = ea, and ε(e^1) e^2 = e^1 ε(e^2) = 1    (1.45)

Before proving the Theorem, let us recall some well-known facts. First of all, let V be an n-dimensional vector space with basis B = {v_1, ..., v_n}. Let
can_V : End_k(V)^op → M_n(k),  can_V(f) = M_B(f)
be the canonical isomorphism of algebras; here, for f ∈ End_k(V), M_B(f) = (a_{ij}) is the matrix associated to f with respect to the basis B, written as follows:
f(v_i) = Σ_{j=1}^n a_{ij} v_j
for all i = 1, ..., n. Secondly, a k-vector space M has a structure of right A-module if and only if there exists an algebra map
ϕ_M : A → End_k(M)^op
ϕ_M is called the representation associated to M. The correspondence between the action "·" and the representation is given by ϕ_M(r)(m) = m · r. In particular, if dim_k(M) = n, then M has a structure of right A-module if and only if there exists an algebra map ϕ̃_M (= can_M ◦ ϕ_M) : A → M_n(k). Finally, let M and N be two right A-modules and ϕ_M : A → End_k(M), ϕ_N : A → End_k(N) the associated representations. Then M ≅ N (as right A-modules) if and only if there exists an isomorphism of k-vector spaces θ : M → N such that
ϕ_M(r) = θ^{-1} ◦ ϕ_N(r) ◦ θ
for all r ∈ A. Indeed, a k-linear map θ : M → N is a right A-module map if and only if θ(m · r) = θ(m) · r for all m ∈ M, r ∈ A. This is equivalent to
θ(ϕ_M(r)(m)) = ϕ_N(r)(θ(m)), or θ ◦ ϕ_M(r) = ϕ_N(r) ◦ θ
for all r ∈ A.

Proof (of Theorem 4). 1. ⇔ 2. This follows from the remarks made above if we can prove that α = ϕ̃_A and β = ϕ̃_{A*}. Let us prove first that α = ϕ̃_A, where A ∈ M_A via right multiplication. The representation associated to this structure is
ϕ_A : A → End_k(A),  ϕ_A(r)(r') = r'r
hence
ϕ_A(r)(e_i) = e_i r = Σ_{j=1}^n a^{(r)}_{ij} e_j
i.e. α = ϕ̃_A. Let us show next that β = ϕ̃_{A*}. Let {e*_i} be the dual basis of {e_i}, and ϕ_{A*} : A → End_k(A*), ϕ_{A*}(r)(r*) = r* · r. Now β(r) = ϕ̃_{A*}(r) if and only if
e*_i · r = Σ_{j=1}^n b^{(r)}_{ij} e*_j
or
⟨e*_i, r e_k⟩ = Σ_{j=1}^n b^{(r)}_{ij} ⟨e*_j, e_k⟩
for all k. Both sides are equal to b^{(r)}_{ik}.
1. ⇔ 3. Any right A-module map f : A → A* has the form f(r) = λ · r, for some λ ∈ A*. Thus, there exist a_1, ..., a_n ∈ k such that f(r) = (a_1 e*_1 + ··· + a_n e*_n) · r for any r ∈ A. Using the dual basis formula we have
⟨e*_k · e_i, e_j⟩ = ⟨e*_k, e_i e_j⟩ = c^k_{ij}
Hence e*_k · e_i = Σ_{j=1}^n c^k_{ij} e*_j, and it follows that
f(e_i) = Σ_{k=1}^n a_k e*_k · e_i = Σ_{j=1}^n (Σ_{k=1}^n c^k_{ij} a_k) e*_j
for all i = 1, ..., n. This means that the matrix associated to f in the pair of bases {e_i}, {e*_i} is just the paratrophic matrix P_a, where a = (a_1, ..., a_n) ∈ k^n.
1. ⇔ 4. follows from (1.42).
4. ⇒ 5. H = {a ∈ A | B(1, a) = 0} is a k-subspace of A of codimension 1. Assume that J is a right ideal of A with J ⊂ H, and take x ∈ J. Using the fact that xA ⊂ J ⊂ H, and that B is associative, we obtain
0 = B(1, xA) = B(x, A)
As B is nondegenerate, we obtain that x = 0.
5. ⇒ 1. Let H be such a hyperplane. As k is a field, we can pick a k-linear map λ : A → k such that Ker(λ) = H. Then
f = f_λ : A → A*,  ⟨f(x), y⟩ = λ(xy)
for all x, y ∈ A, is an injective right A-linear map. Indeed, for x, y, z ∈ A we have
⟨f(xy), z⟩ = λ(xyz) = ⟨f(x), yz⟩ = ⟨f(x) · y, z⟩
On the other hand, from f(x) = 0 it follows that λ(xA) = 0, hence xA ⊂ Ker(λ) = H. We obtain that xA = 0, i.e. x = 0. Thus, f is an injective right A-module map, and hence an isomorphism, as A and A* have the same dimension.
1. ⇒ 6. Let (e_i, e*_i) be a dual basis of A and f : A → A* an isomorphism of right A-modules. Then (ε = f(1), e = Σ_i e_i ⊗ f^{-1}(e*_i)) is a Frobenius pair. This is an elementary computation left to the reader at this point; in Theorem 28, we give the same proof in a more general situation.
6. ⇒ 1. If (ε, e = e^1 ⊗ e^2) is a Frobenius pair, then
f : A → A*,  ⟨f(x), y⟩ = ε(xy)
is an isomorphism of right A-modules, with inverse f^{-1} : A* → A, f^{-1}(a*) = ⟨a*, e^1⟩ e^2, for all a* ∈ A*.

Examples 2. 1. Theorem 4 gives an elementary way to check whether an algebra A is Frobenius. Let A = k[X, Y]/(X^2, Y^2). Then A has a basis e_1 = 1, e_2 = x, e_3 = y and e_4 = xy. Through a trivial computation we find that the paratrophic matrix is

       ( a_1  a_2  a_3  a_4 )
P_a =  ( a_2   0   a_4   0  )
       ( a_3  a_4   0    0  )
       ( a_4   0    0    0  )
Thus, if a_4 is nonzero, then P_a is invertible, so A is a Frobenius algebra.
2. A similar computation shows that the k-algebra A = k[X, Y]/(X^2, XY^2, Y^3) is not Frobenius.
3. Using criterion 5. of Theorem 4, we can see that any finite dimensional division k-algebra D is a Frobenius algebra. It can be proved that M_n(D) is also a Frobenius k-algebra. Using (1.40) and the fact that a product of Frobenius algebras is a Frobenius algebra, we obtain that any separable algebra over a field is Frobenius.
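The paratrophic criterion of Theorem 4 is easy to mechanize. The following sketch (in Python with SymPy; the basis ordering e_1 = 1, e_2 = x, e_3 = y, e_4 = xy follows Examples 2, while the encoding of monomials as exponent pairs is our own choice) computes P_a for A = k[X, Y]/(X^2, Y^2) from its structure constants and confirms that det P_a = a_4^4, so P_a is invertible exactly when a_4 ≠ 0.

```python
import sympy as sp

# Basis monomials x^i y^j of A = k[X, Y]/(X^2, Y^2), ordered e1=1, e2=x, e3=y, e4=xy
basis = [(0, 0), (1, 0), (0, 1), (1, 1)]

def mult(u, v):
    """Multiply two basis monomials; return the basis index, or None if the product is 0."""
    w = (u[0] + v[0], u[1] + v[1])
    return basis.index(w) if w in basis else None

a = sp.symbols('a1:5')  # the coefficients a1, a2, a3, a4

# Paratrophic matrix: (Pa)_{ij} = sum_k a_k c^k_{ij}, where e_i e_j = sum_k c^k_{ij} e_k.
# Here each product of basis monomials is again a basis monomial or 0, so each entry
# is a single a_k or 0.
Pa = sp.Matrix(4, 4, lambda i, j: a[mult(basis[i], basis[j])]
               if mult(basis[i], basis[j]) is not None else 0)

print(Pa)                    # matches the matrix displayed above
print(sp.factor(Pa.det()))  # a4**4, nonzero iff a4 != 0, so A is Frobenius
```

The same few lines, with `basis` and `mult` adapted, also show that the algebra in Examples 2.2 has identically singular paratrophic matrices.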
2 Doi-Koppinen Hopf modules and entwined modules
In this Chapter, we introduce entwining structures and entwined modules. We show how various kinds of modules that appear in ring theory are special cases of entwined modules. We also show that there is a close analogy, based on duality arguments, with the factorization problem for algebras, and the smash product of algebras. Entwined modules themselves can be viewed as special cases of comodules over corings. Pairs of adjoint functors between categories of entwined modules are investigated, and it is discussed how one can make the category of entwined modules into a monoidal category.
2.1 Doi-Koppinen structures and entwining structures

Entwining structures
Throughout this Section, k is a commutative ring. A (right-right) entwining structure on k consists of a triple (A, C, ψ), where A is a k-algebra, C a k-coalgebra, and ψ : C ⊗ A → A ⊗ C a k-linear map satisfying the relations
(ab)_ψ ⊗ c^ψ = a_ψ b_Ψ ⊗ c^{ψΨ}    (2.1)
(1_A)_ψ ⊗ c^ψ = 1_A ⊗ c    (2.2)
a_ψ ⊗ Δ_C(c^ψ) = a_{ψΨ} ⊗ c_{(1)}^Ψ ⊗ c_{(2)}^ψ    (2.3)
ε_C(c^ψ) a_ψ = ε_C(c) a    (2.4)
Here we used the sigma notation
ψ(c ⊗ a) = a_ψ ⊗ c^ψ = a_Ψ ⊗ c^Ψ
A morphism (α, γ) : (A, C, ψ) → (A', C', ψ') consists of an algebra map α : A → A' and a coalgebra map γ : C → C' such that
(α ⊗ γ) ◦ ψ = ψ' ◦ (γ ⊗ α)    (2.5)
or, equivalently,
α(a_ψ) ⊗ γ(c^ψ) = α(a)_{ψ'} ⊗ γ(c)^{ψ'}    (2.6)
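Axioms (2.1)-(2.4) can be checked by brute force on a small example. For a group algebra A = C = kG (with Δ(g) = g ⊗ g, ε(g) = 1), the Hopf-module entwining ψ(h ⊗ k) = k_{(1)} ⊗ h k_{(2)} reduces on group elements to ψ(g ⊗ a) = a ⊗ ga, and on basis elements all coefficients are 1, so each axiom becomes an identity in G. The sketch below (Python; the choice G = S3 and the tuple encoding of permutations are our own illustration, not from the text) verifies this.

```python
from itertools import permutations, product

# A = C = kG on group-like basis elements; psi(c (x) a) = a (x) ca.
G = list(permutations(range(3)))           # the symmetric group S3
mul = lambda p, q: tuple(p[i] for i in q)  # composition p∘q
e = tuple(range(3))                        # the unit of G

psi = lambda c, a: (a, mul(c, a))          # psi(c (x) a) = a_psi (x) c^psi

for a, b, c in product(G, repeat=3):
    # (2.1): (ab)_psi (x) c^psi = a_psi b_Psi (x) c^{psi Psi}
    a1, c1 = psi(c, a)
    b1, c2 = psi(c1, b)
    assert psi(c, mul(a, b)) == (mul(a1, b1), c2)

for a, c in product(G, repeat=2):
    a1, c1 = psi(c, a)
    # (2.2): psi(c (x) 1_A) = 1_A (x) c
    assert psi(c, e) == (e, c)
    # (2.3): both legs of Delta(c^psi) equal c^psi on group-likes
    a2, c0 = psi(c, a1)
    assert (a2, c0) == (a1, c1)
    # (2.4): eps is identically 1 on basis elements, so (2.4) is automatic

print("axioms (2.1)-(2.4) hold on the basis of kS3")
```

The check exploits that everything is group-like; for a general bialgebra the coefficients in the sigma notation would have to be tracked as well.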
E••(k) will denote the category of entwining structures. The category E••(k) is monoidal. E*••(k) is the full subcategory of E••(k) consisting of entwining structures (A, C, ψ) with ψ invertible. Left-right, right-left, and left-left versions can also be introduced. For example, •E•(k) is the category with objects (A, C, ψ), where now ψ : A ⊗ C → A ⊗ C, ψ(a ⊗ c) = a_ψ ⊗ c^ψ, is a map satisfying (2.2), (2.3), (2.4) and
(ab)_ψ ⊗ c^ψ = a_ψ b_Ψ ⊗ c^{Ψψ}    (2.7)
In •E•(k), we need maps ψ : C ⊗ A → C ⊗ A satisfying (2.1), (2.2), (2.4) and
a_ψ ⊗ Δ_C(c^ψ) = a_{ψΨ} ⊗ (c_{(1)})^ψ ⊗ (c_{(2)})^Ψ    (2.8)
In ••E(k), we will need maps ψ : A ⊗ C → C ⊗ A satisfying (2.2), (2.4), (2.7) and (2.8).
S. Caenepeel, G. Militaru, and S. Zhu: LNM 1787, pp. 39-87, 2002. © Springer-Verlag Berlin Heidelberg 2002
Proposition 14. The categories E••(k), •E•(k), •E•(k), and ••E(k) are isomorphic.

Proof. It is easy to see that the isomorphism between E••(k) and •E•(k) is given by sending (A, C, ψ) to (A^op, C, ψ ◦ τ). The other isomorphisms are left to the reader.

Obviously the isomorphisms in Proposition 14 restrict to the subcategories consisting of structures with invertible ψ. For these subcategories, there exist alternative isomorphisms.

Proposition 15. The categories E*••(k) and ••E*(k) are isomorphic via the functor S given by
S(A, C, ψ) = (A, C, ψ^{-1})    (2.9)

Proof. Assume that ψ : C ⊗ A → A ⊗ C satisfies (2.1-2.4). We have to show that ϕ = ψ^{-1} satisfies (2.2), (2.4), (2.7) and (2.8). (2.2) and (2.4) are obvious. (2.1) is equivalent to the commutativity of the diagram with vertex set C ⊗ A ⊗ A, A ⊗ C ⊗ A, A ⊗ A ⊗ C, C ⊗ A, A ⊗ C, written as the identity of maps
(m_A ⊗ I_C) ◦ (I_A ⊗ ψ) ◦ (ψ ⊗ I_A) = ψ ◦ (I_C ⊗ m_A) : C ⊗ A ⊗ A → A ⊗ C
This is equivalent to the commutativity of the diagram obtained by reversing the ψ-arrows, that is, to
(I_C ⊗ m_A) ◦ (ϕ ⊗ I_A) ◦ (I_A ⊗ ϕ) = ϕ ◦ (m_A ⊗ I_C) : A ⊗ A ⊗ C → C ⊗ A
which is equivalent to
c^{ϕφ} ⊗ a_φ b_ϕ = c^ϕ ⊗ (ab)_ϕ
and this tells us that ϕ satisfies (2.7). In a similar way (2.3) implies that ϕ satisfies (2.8).

Doi-Koppinen structures
Let H be a bialgebra, A a right H-comodule algebra, and C a right H-module coalgebra. We call (H, A, C) a right-right Doi-Koppinen structure or DK structure over k. A morphism between two DK structures consists of a triple ϕ = (κ, α, γ) : (H, A, C) → (H', A', C'), where κ : H → H', α : A → A', and γ : C → C' are respectively a bialgebra map, an algebra map, and a coalgebra map such that
ρ_{A'}(α(a)) = α(a_{[0]}) ⊗ κ(a_{[1]})    (2.10)
γ(ch) = γ(c) κ(h)    (2.11)
for all a ∈ A, c ∈ C, and h ∈ H. The category of right-right Doi-Hopf structures over k is denoted by DK••(k). DK••(k) is a monoidal category, if we define
(H, A, C) ⊗ (H', A', C') = (H ⊗ H', A ⊗ A', C ⊗ C')
with the obvious structure maps. The unit element is (k, k, k). We will also consider the full subcategories H••(k), HA••(k), and HC••(k) of DK••(k), consisting of objects respectively of the form (H, H, H), (H, A, H), and (H, H, C). The subcategory of DK••(k) consisting of objects (H, A, C) and morphisms (κ, α, γ) where H has a twisted antipode S̄, and where κ preserves the twisted antipode, is denoted by DKs••(k). In a similar way, we introduce the categories •DK•(k), •DK•(k), and ••DK(k), and their various subcategories. For example, •DK•(k) has objects (H, A, C), where A is a right H-comodule algebra, and C is a left H-module coalgebra. In the left-right and right-left versions of the s-subcategories, we require that the bialgebra H in each object is a Hopf algebra (i.e., it has an antipode); in the left-left case, we want a twisted antipode.

Proposition 16. The categories DK••(k), •DK•(k), •DK•(k), and ••DK(k) are isomorphic. Similar statements hold for the respective subcategories introduced above.

Proof. Let (H, A, C) ∈ DK••(k). Then the opposite algebra A^op with the original right H-coaction is a right H^op-comodule algebra. The coalgebra C with left H^op-action defined by
h^op · c = ch
is a left H^op-module coalgebra. The functor DK••(k) → •DK•(k), mapping (H, A, C) to (H^op, A^op, C), is easily seen to be an isomorphism of categories. Observe also that H^op has an antipode in case H has a twisted antipode, so we also find an isomorphism between the categories DKs••(k) and •DKs•(k). The other statements follow in a similar way; let us mention that the objects corresponding to (H, A, C) ∈ DK••(k) are (H^cop, A, C^cop) ∈ •DK• and (H^opcop, A^op, C^cop) ∈ ••DK(k).

Proposition 17. We have faithful functors
F : DK••(k) → E••(k) and F : DKs••(k) → E*••(k)

Proof. We define F by
F(H, A, C) = (A, C, ψ) ; F(κ, α, γ) = (α, γ)
with ψ : C ⊗ A → A ⊗ C given by
ψ(c ⊗ a) = a_{[0]} ⊗ c a_{[1]}
We leave it to the reader to check that ψ satisfies (2.1-2.4), and that (α, γ) satisfies (2.5). If (H, A, C) ∈ DKs••(k), then H has a twisted antipode S̄, and the inverse of ψ is given by the formula
ψ^{-1}(a ⊗ c) = c S̄(a_{[1]}) ⊗ a_{[0]}

Alternative Doi-Koppinen structures
These structures were recently introduced by Schauenburg [163]. A left-right alternative Doi-Koppinen structure consists of a triple (H, A, C), where H is a bialgebra, A is a left H-module algebra, and C is a right H-comodule coalgebra. We write •aDK•(k) for the category of left-right alternative Doi-Koppinen structures. The morphisms are defined in the obvious way, and analogous definitions can be given in the right-left, left-left and right-right situations. The alternative version of Proposition 17 is the following:

Proposition 18. We have a faithful functor
F_a : •aDK•(k) → •E•(k)

Proof. We put F_a(H, A, C) = (A, C, ψ) and F_a(κ, α, γ) = (α, γ), with
ψ : A ⊗ C → A ⊗ C, ψ(a ⊗ c) = c_{[1]} a ⊗ c_{[0]}
A straightforward computation shows that (A, C, ψ) is a left-right entwining structure.
Doi-Koppinen structures versus entwining structures
An obvious question is the following: is an entwining structure (A, C, ψ) defined by a Doi-Koppinen structure, i.e. can we find a bialgebra H, an H-coaction on A, and an H-action on C such that (A, C, ψ) = F(H, A, C)? We will see that a sufficient condition is that A is finitely generated and projective as a k-module. If C is finitely generated and projective, then every entwining structure comes from an alternative Doi-Koppinen structure. A recent counterexample due to Schauenburg shows that there exist entwining structures that do not arise from Doi-Koppinen structures.
We start with a construction due to Sweedler [172, p. 155], reformulated by Tambara [178]. Let A be finitely generated projective, with dual basis {a_i, a*_i | i = 1, ..., n}, and write
H = H(A) = T(A* ⊗ A)/I
the tensor algebra of A* ⊗ A divided by the ideal I generated by the elements
⟨a*, 1_A⟩1 − a* ⊗ 1_A    (2.12)
a* ⊗ ab − (a*_{(1)} ⊗ a) ⊗ (a*_{(2)} ⊗ b)    (2.13)
where a* ∈ A* and a, b ∈ A. We write [a* ⊗ a] for the class represented by a* ⊗ a.

Proposition 19. Let A be a finitely generated projective k-algebra. Then H = H(A) is a bialgebra, with comultiplication and counit given by
Δ_H[a* ⊗ a] = Σ_{i=1}^n [a* ⊗ a_i] ⊗ [a*_i ⊗ a] and ε_H[a* ⊗ a] = ⟨a*, a⟩

Proof. A straightforward calculation; we will show that Δ_H is well-defined, i.e. Δ_H = 0 on I. First,
Δ_H(a* ⊗ 1_A) = Σ_{i=1}^n [a* ⊗ a_i] ⊗ [a*_i ⊗ 1_A]
= Σ_{i=1}^n [a* ⊗ a_i] ⟨a*_i, 1_A⟩ ⊗ 1 = [a* ⊗ 1_A] ⊗ 1 = ⟨a*, 1_A⟩ 1 ⊗ 1
Next,
Δ_H((a*_{(1)} ⊗ a) ⊗ (a*_{(2)} ⊗ b)) = (Σ_{i=1}^n [a*_{(1)} ⊗ a_i] ⊗ [a*_i ⊗ a]) (Σ_{j=1}^n [a*_{(2)} ⊗ a_j] ⊗ [a*_j ⊗ b])
= Σ_{i,j=1}^n [a*_{(1)} ⊗ a_i][a*_{(2)} ⊗ a_j] ⊗ [a*_i ⊗ a][a*_j ⊗ b]
(2.13) = Σ_{i,j=1}^n [a* ⊗ a_i a_j] ⊗ [a*_i ⊗ a][a*_j ⊗ b]
(1.4) = Σ_{i=1}^n [a* ⊗ a_i] ⊗ [a*_{i(1)} ⊗ a][a*_{i(2)} ⊗ b]
(2.13) = Σ_{i=1}^n [a* ⊗ a_i] ⊗ [a*_i ⊗ ab]
= Δ_H[a* ⊗ ab]

Remark 2. As above, let A be finitely generated and projective, and consider the functor
F : k-Alg → k-Alg ; F(B) = A ⊗ B
Tambara [178] observes that F has a right adjoint G, and, as a k-algebra, H(A) = G(A). For any k-algebra B, we write
G(B) = a(A, B) = T(A* ⊗ B)/I
with I defined as above (1_A is replaced by 1_B, and a, b ∈ B). The unit and counit of the adjunction are given by
η_B : B → a(A, A ⊗ B) ; η_B(b) = Σ_{i=1}^n [a*_i ⊗ (a_i ⊗ b)]
ε_B : A ⊗ a(A, B) → B ; ε_B(a ⊗ [a* ⊗ b]) = ⟨a*, a⟩ b
The comultiplication and counit on a(A, A) = H(A) can be defined using the adjunction properties.

Proposition 20. Let A be a finitely generated projective k-algebra, and H = H(A). Then A is a right H-comodule algebra, and A* is a left H-comodule coalgebra. The structure maps are
ρ^r(a) = Σ_{i=1}^n a_i ⊗ [a*_i ⊗ a]    (2.14)
ρ^l(a*) = Σ_{i=1}^n [a* ⊗ a_i] ⊗ a*_i    (2.15)
Proof. A is a right H-comodule since
(ρ^r ⊗ I_H)(ρ^r(a)) = Σ_{i,j=1}^n a_j ⊗ [a*_j ⊗ a_i] ⊗ [a*_i ⊗ a] = (I_A ⊗ Δ_H)(ρ^r(a))
A is a right H-comodule algebra since
ρ^r(ab) = Σ_{i=1}^n a_i ⊗ [a*_i ⊗ ab]
(2.13) = Σ_{i=1}^n a_i ⊗ [a*_{i(1)} ⊗ a][a*_{i(2)} ⊗ b]
(1.4) = Σ_{i,j=1}^n a_i a_j ⊗ [a*_i ⊗ a][a*_j ⊗ b]
= ρ^r(a) ρ^r(b)
and
ρ^r(1_A) = Σ_{i=1}^n a_i ⊗ [a*_i ⊗ 1_A]
(2.12) = Σ_{i=1}^n a_i ⟨a*_i, 1_A⟩ ⊗ 1_H = 1_A ⊗ 1_H
From Proposition 7, it follows that A* is a left H-comodule coalgebra, with
ρ^l(a*) = Σ_{i=1}^n a_{i[1]} ⊗ ⟨a*, a_{i[0]}⟩ a*_i
= Σ_{i,j=1}^n [a*_j ⊗ a_i] ⊗ ⟨a*, a_j⟩ a*_i
= Σ_{i=1}^n [a* ⊗ a_i] ⊗ a*_i
Theorem 5. Let A be a finitely generated projective algebra, and C a coalgebra. There is a bijective correspondence between left H(A)-module coalgebra structures on C, and left-right entwining structures of the form (A, C, ψ). Consequently every entwining structure (A, C, ψ) with A finitely generated and projective can be derived from a Doi-Koppinen structure.

Proof. First consider an entwining structure (A, C, ψ) ∈ •E•(k). As before, we write H = H(A). On C, we define the following left H-action:
[a* ⊗ a] · c = ⟨a*, a_ψ⟩ c^ψ    (2.16)
This action is well-defined since
[a* ⊗ 1_A] · c = ⟨a*, (1_A)_ψ⟩ c^ψ = ⟨a*, 1_A⟩ c
and
[a* ⊗ ab] · c = ⟨a*, (ab)_ψ⟩ c^ψ = ⟨a*, a_ψ b_Ψ⟩ c^{Ψψ}
= ⟨a*_{(1)}, a_ψ⟩ ⟨a*_{(2)}, b_Ψ⟩ c^{Ψψ} = [a*_{(1)} ⊗ a] · ([a*_{(2)} ⊗ b] · c)
The comultiplication and counit of C are left H-linear since
Σ_{i=1}^n [a* ⊗ a_i] · c_{(1)} ⊗ [a*_i ⊗ a] · c_{(2)} = Σ_{i=1}^n ⟨a*, a_{iψ}⟩ c_{(1)}^ψ ⊗ ⟨a*_i, a_Ψ⟩ c_{(2)}^Ψ
= ⟨a*, a_{Ψψ}⟩ c_{(1)}^ψ ⊗ c_{(2)}^Ψ = ⟨a*, a_ψ⟩ Δ(c^ψ) = Δ([a* ⊗ a] · c)
and
ε_C([a* ⊗ a] · c) = ⟨a*, a_ψ⟩ ε_C(c^ψ) = ⟨a*, a⟩ ε_C(c) = ε_H([a* ⊗ a]) ε_C(c)
Conversely, let C be a left H-module coalgebra. We know from Proposition 20 that A is a right H-comodule algebra, so we have (H, A, C) ∈ •DK•(k) and (A, C, ψ') = F(H, A, C) ∈ •E•(k). Recall that ψ' : A ⊗ C → A ⊗ C is given by
ψ'(a ⊗ c) = a_{[0]} ⊗ a_{[1]} · c
Let us check that we have a bijective correspondence, as needed. Let (A, C, ψ) be an entwining structure, (H, A, C) the corresponding Doi-Koppinen structure, and write F(H, A, C) = (A, C, ψ'). Then
ψ'(a ⊗ c) = a_{[0]} ⊗ a_{[1]} · c = Σ_{i=1}^n a_i ⊗ [a*_i ⊗ a] · c
= Σ_{i=1}^n a_i ⟨a*_i, a_ψ⟩ ⊗ c^ψ = a_ψ ⊗ c^ψ = ψ(a ⊗ c)
If C is a left H(A)-module coalgebra, then we have a left-right Doi-Koppinen structure (H(A), A, C), and an entwining structure (A, C, ψ) = F(H(A), A, C). This entwining structure defines a left H(A)-module coalgebra structure on C; we denote this action temporarily by ⇀. This action equals the original one, since
[a* ⊗ a] ⇀ c = ⟨a*, a_ψ⟩ c^ψ = ⟨a*, a_{[0]}⟩ a_{[1]} · c
= Σ_{i=1}^n ⟨a*, a_i⟩ [a*_i ⊗ a] · c = [a* ⊗ a] · c
Adapting our arguments, we see that every entwining structure (A, C, ψ), with C finitely generated and projective, comes from an alternative Doi-Koppinen structure. This time, the involved bialgebra is H' = H(C*)^cop. Observe that H(C*)^cop = T(C* ⊗ C)/I, with I generated by elements of the form
⟨ε_C, c⟩1 − ε_C ⊗ c and c* ∗ d* ⊗ c − (c* ⊗ c_{(1)}) ⊗ (d* ⊗ c_{(2)})
and
Δ_{H'}([c* ⊗ c]) = Σ_{i=1}^n [c* ⊗ c_i] ⊗ [c*_i ⊗ c] and ε_{H'}([c* ⊗ c]) = ⟨c*, c⟩
From Proposition 20, it follows that C is a right H'-comodule coalgebra, with
ρ^r(c) = Σ_{i=1}^n c_i ⊗ [c*_i ⊗ c]
Given a left-right entwining structure (A, C, ψ), we define a left H'-action on A as follows:
[c* ⊗ c] · a = ⟨c*, c^ψ⟩ a_ψ
and this makes A into a left H'-module algebra. (H', A, C) is a left-right alternative Doi-Koppinen structure. Further verifications are left to the reader. We summarize our results as follows:

Theorem 6. Let C be a finitely generated projective coalgebra, and H' = H(C*)^cop. Then C is a right H'-comodule coalgebra. For a given k-algebra A, there is a bijective correspondence between left-right entwining structures (A, C, ψ) and left H'-module algebra structures on A. Consequently every entwining structure (A, C, ψ) with C finitely generated and projective comes from an alternative Doi-Koppinen structure.

We will now show that not every entwining structure arises from a Doi-Koppinen structure. Let k be a field. For a left-right entwining structure (A, C, ψ), c ∈ C and c* ∈ C*, we consider the transformation
T_{c,c*} : A → A ; T_{c,c*}(a) = ⟨c*, c^ψ⟩ a_ψ
If (A, C, ψ) = F(H, A, C) arises from a Doi-Koppinen structure, then
T_{c,c*}(a) = ⟨c*, a_{[1]} c⟩ a_{[0]}
and then every H-subcomodule of A is T_{c,c*}-invariant. As every a ∈ A is contained in a finite dimensional H-subcomodule of A (cf. [172, Theorem 2.1.3b]), the T_{c,c*}-invariant subspace of A generated by a is finite dimensional.

Example 7. (Schauenburg [163]) Let C = k ⊕ kt, with t primitive, and let A be the free algebra with generators X_i, where i ranges over the integers. We define ψ : A ⊗ C → A ⊗ C by ψ(a ⊗ 1) = a ⊗ 1 for all a ∈ A, and
ψ(X_{i_1} X_{i_2} ··· X_{i_n} ⊗ t) = X_{i_1+1} X_{i_2+1} ··· X_{i_n+1} ⊗ t
A straightforward computation shows that ψ is entwining. Now take c* ∈ C* such that ⟨c*, t⟩ = 1. Then
T_{t,c*}(X_i) = X_{i+1}
and the T_{t,c*}-invariant subspace of A generated by X_0 is infinite dimensional, and (A, C, ψ) cannot be derived from a Doi-Koppinen structure.
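Schauenburg's map in Example 7 is concrete enough to compute with. In the sketch below (Python; the encoding of a monomial X_{i_1}···X_{i_n} as an integer tuple is our own), ψ shifts every index by one on the t-component, and iterating T_{t,c*} starting from X_0 produces the pairwise distinct monomials X_1, X_2, ..., which is exactly why the T_{t,c*}-invariant subspace generated by X_0 cannot be finite dimensional.

```python
# Monomials X_{i1} X_{i2} ... X_{in} of the free algebra A are encoded as
# integer tuples (i1, ..., in); C = k + k t with t primitive.

def psi(mono, c):
    """psi(a (x) c): the identity on the 1-component, an index shift on t."""
    if c == 1:
        return mono, c
    return tuple(i + 1 for i in mono), c  # psi(X_{i1}...X_{in} (x) t)

def T_t(mono):
    """T_{t,c*} on a monomial a: <c*, c^psi> a_psi with <c*, t> = 1."""
    shifted, _ = psi(mono, 't')
    return shifted

orbit = [(0,)]                 # start from X_0
for _ in range(5):
    orbit.append(T_t(orbit[-1]))
print(orbit)  # [(0,), (1,), (2,), (3,), (4,), (5,)] -- pairwise distinct monomials
```

Since the orbit of X_0 never repeats, no finite dimensional subspace containing X_0 can be T_{t,c*}-invariant, in line with the argument in the text.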
2.2 Doi-Koppinen modules and entwined modules

Entwined modules
Let (A, C, ψ) ∈ E••(k). An (A, C, ψ)-entwined module is a k-module M with a right A-action and a right C-coaction such that
ρ^r(ma) = m_{[0]} a_ψ ⊗ m_{[1]}^ψ    (2.17)
The category of (A, C, ψ)-entwined modules and A-linear C-colinear maps is denoted by M(ψ)^C_A. We also have left-left, left-right and right-left versions:
For (A, C, ϕ) ∈ ••E(k), ^C_A M(ϕ) consists of left A-modules and left C-comodules such that
ρ^l(am) = m_{[−1]}^ϕ ⊗ a_ϕ m_{[0]}    (2.18)
For (A, C, ψ) ∈ •E•(k), _A M(ψ)^C consists of left A-modules and right C-comodules such that
ρ^r(am) = a_ψ m_{[0]} ⊗ m_{[1]}^ψ    (2.19)
For (A, C, ϕ) ∈ •E•(k), ^C M_A(ϕ) consists of right A-modules and left C-comodules such that
ρ^l(ma) = m_{[−1]}^ϕ ⊗ m_{[0]} a_ϕ    (2.20)

Examples 3. 1. For (A, C, ψ) ∈ E••(k), A ⊗ C and C ⊗ A are right-right entwined modules. The structure maps are given by the formulas
(c ⊗ a)b = c ⊗ ab,  ρ^r(c ⊗ a) = (c_{(1)} ⊗ a_ψ) ⊗ c_{(2)}^ψ
(a ⊗ c)b = a b_ψ ⊗ c^ψ,  ρ^r(a ⊗ c) = a ⊗ c_{(1)} ⊗ c_{(2)}
ψ : C ⊗ A → A ⊗ C is then a morphism in M(ψ)^C_A. See also Examples 13 and 14.
2. Let H be a bialgebra with twisted antipode S̄, and consider the map ψ : H ⊗ H → H ⊗ H with
ψ(h ⊗ k) = h_{(2)} ⊗ h_{(3)} k S̄(h_{(1)})
Then (H, H, ψ) ∈ •E•(k), and the objects of _H M(ψ)^H are k-modules with a left H-action and a right H-coaction such that
ρ^r(hm) = h_{(2)} m_{[0]} ⊗ h_{(3)} m_{[1]} S̄(h_{(1)})
These modules are known under the name Yetter-Drinfeld modules. Yetter-Drinfeld modules will be investigated in Section 4.4 and Chapter 5.
3. For any bialgebra H, (H, H, I_{H⊗H}) ∈ •E•(k). _H M(ψ)^H then consists of k-modules with a left H-action and a right H-coaction such that
ρ^r(hm) = h m_{[0]} ⊗ m_{[1]}
i.e. the right H-coaction is left H-linear. In the situation where H is commutative and cocommutative, this type of module was first considered by Long in [118]; in the sequel, we will refer to them as Long dimodules. If H is commutative and cocommutative, then Long dimodules and Yetter-Drinfeld modules coincide. We will come back to Long dimodules more extensively in Section 4.5 and Chapter 7.
Doi-Koppinen Hopf modules
Let (H, A, C) ∈ DK••(k), and let (A, C, ψ) = F(H, A, C) be the corresponding object in E••(k). M(H)^C_A will be another notation for M(ψ)^C_A. (2.17) takes the form
ρ^r(ma) = m_{[0]} a_{[0]} ⊗ m_{[1]} a_{[1]}    (2.21)
The objects of M(H)^C_A are called unified Hopf modules, Doi-Hopf modules or Doi-Koppinen Hopf modules. If H has a twisted antipode, then ψ is bijective, and (A, C, ψ^{-1}) ∈ ••E(k). ^C_A M(H) will be a new notation for ^C_A M(ψ^{-1}). This category consists of left A-modules and left C-comodules such that
ρ^l(am) = m_{[−1]} S̄(a_{[1]}) ⊗ a_{[0]} m_{[0]}    (2.22)
Similar constructions apply to left-right, right-left and left-left Doi-Koppinen structures.

Examples 4. 1. (k, A, k) ∈ DK••(k), and the corresponding entwining structure is (A, k, I_A) ∈ E••(k). The category of entwined modules is nothing else than the category of right A-modules.
2. (k, k, C) ∈ DK••(k) corresponds to (k, C, I_C) ∈ E••(k). An entwined module is now simply a right C-comodule.
3. Let H be a bialgebra. (H, H, H) ∈ DK••(k) corresponds to (H, H, ψ) ∈ E••(k), with
ψ(h ⊗ k) = k_{(1)} ⊗ h k_{(2)}
An entwined module is now a Hopf module in the sense of Sweedler [172].
4. If H is a bialgebra, and A is a right H-comodule algebra, then (H, A, H) ∈ DK••(k) is a right-right Doi-Koppinen structure. The corresponding Doi-Koppinen modules are the well-known (A, H)-relative Hopf modules (see e.g. [67]).
5. In a similar way, if C is a right H-module coalgebra, then (H, H, C) ∈ DK••(k) is a right-right Doi-Koppinen structure, and the corresponding Doi-Koppinen modules are the [H, C]-Hopf modules studied in [67].
6. Let G be a group, and A a G-graded k-algebra. Then (kG, A, kG) is a Doi-Koppinen structure, and the corresponding category of Doi-Koppinen modules is the category of G-graded right A-modules.
7. Now let X be a right G-set. Then kX is a right kG-module coalgebra, and (kG, A, kX) is a Doi-Koppinen structure. M(kG)^{kX}_A consists of right A-modules graded by the G-set X (see [143] and [147] for a study of modules graded by G-sets).
8. Yetter-Drinfeld modules and Long dimodules are special cases of Doi-Hopf modules. This will be explained in Sections 4.4 and 4.5.

Proposition 21. For (A, C, ψ) ∈ E••(k), the categories M(ψ)^C_A, _{A^op}M(ψ ◦ τ)^C, ^{C^cop}M(τ ◦ ψ)_A and ^{C^cop}_{A^op}M(τ ◦ ψ ◦ τ) are isomorphic. In particular, for (H, A, C) ∈ DK••(k), the categories M(H)^C_A, _{A^op}M(H^op)^C, ^{C^cop}M(H^cop)_A and ^{C^cop}_{A^op}M(H^opcop) are isomorphic.

Proof. Everything is straightforward. For example, if M ∈ M(ψ)^C_A, then the corresponding object in _{A^op}M(ψ ◦ τ)^C is equal to M as a right C-comodule, but with left A^op-action given by
a^op · m = ma
All the other isomorphisms are defined in a similar way, and we leave further details to the reader.
2.3 Entwined modules and the smash product

Let A and B be k-algebras, and consider a map R : B ⊗ A → A ⊗ B. We will use the following notation (summation understood):
R(b ⊗ a) = a_R ⊗ b_R = a_r ⊗ b_r    (2.23)
We put A#_R B = A ⊗ B as a k-module, but with a new multiplication:
m_{A#_R B} = (m_A ⊗ m_B) ◦ (I_A ⊗ R ⊗ I_B)    (2.24)
or
(a#b)(c#d) = a c_R # b_R d    (2.25)
If this new multiplication makes A#_R B into an associative algebra with unit 1#1, then we call A#_R B a smash product, and (A, B, R) a smash product structure or a factorization structure.

Theorem 7. ([44]) (A, B, R) is a smash product structure if and only if
R(b ⊗ 1_A) = 1_A ⊗ b    (2.26)
R(1_B ⊗ a) = a ⊗ 1_B    (2.27)
R(bd ⊗ a) = a_{Rr} ⊗ b_r d_R    (2.28)
R(b ⊗ ac) = a_R c_r ⊗ b_{Rr}    (2.29)
for all a, c ∈ A and b, d ∈ B.

Proof. Assume that (A, B, R) is a smash product structure. Then for all b ∈ B, we have
1_A # b = (1_A # b)(1_A # 1_B) = 1_R # b_R
and (2.26) follows. (2.27) follows in a similar way. The multiplication is associative, so
(1#b)((a#1)(c#1)) = ((1#b)(a#1))(c#1)
and this implies (2.29). (2.28) follows from
((1#b)(1#d))(a#1) = (1#b)((1#d)(a#1))
The converse is left to the reader.
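Theorem 7 can be illustrated on a small factorization (our example, not from the text): take A = k[x]/(x^2) and B = kZ2 = k⟨g⟩/(g^2 − 1), with the sign rule R(g^j ⊗ x^i) = (−1)^{ij} x^i ⊗ g^j coming from the Z2-action x ↦ −x. The sketch below implements the multiplication (2.25) on the basis x^i # g^j, i, j ∈ {0, 1}, and checks associativity and unitality by brute force, confirming that (A, B, R) is a smash product structure.

```python
from itertools import product

# Basis of A #_R B: x^i # g^j with i, j in {0, 1}; x^2 = 0, g^2 = 1,
# and R(g^j (x) x^k) = (-1)^{jk} x^k (x) g^j.
def mult(u, v):
    """Multiply basis elements u = (i, j), v = (k, l); return (coeff, basis) or None (= 0)."""
    (i, j), (k, l) = u, v
    if i + k > 1:
        return None                       # x^{i+k} = 0 in k[x]/(x^2)
    return ((-1) ** (j * k), (i + k, (j + l) % 2))

basis = list(product((0, 1), repeat=2))

def as_vec(term):
    """A scaled basis term (coeff, b), or None, as a coefficient vector over the basis."""
    vec = {b: 0 for b in basis}
    if term is not None:
        vec[term[1]] += term[0]
    return vec

def triple(u, v, w, left):
    """(uv)w if left else u(vw), as a coefficient vector."""
    first = mult(u, v) if left else mult(v, w)
    if first is None:
        return as_vec(None)
    c, b = first
    rest = mult(b, w) if left else mult(u, b)
    return as_vec(None) if rest is None else as_vec((c * rest[0], rest[1]))

for u, v, w in product(basis, repeat=3):
    assert triple(u, v, w, True) == triple(u, v, w, False)
for u in basis:
    assert mult((0, 0), u) == (1, u) and mult(u, (0, 0)) == (1, u)
print("A #_R B is associative with unit 1 # 1")
```

Conditions (2.26) and (2.27) hold here by inspection (the sign (−1)^{jk} is 1 when j = 0 or k = 0), so by Theorem 7 this R is indeed a smash product structure.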
Remark 3. The smash product is related to the factorization problem. We say that a k-algebra X factorizes through the k-algebras A and B if X ≅ A ⊗ B as k-modules and, after identifying X and A ⊗ B, the maps
ι_A : A → A ⊗ B = X,  ι_A(a) = a ⊗ 1_B
ι_B : B → A ⊗ B = X,  ι_B(b) = 1_A ⊗ b
are algebra maps. It is not hard to show that there exists a bijective correspondence between the algebra structures on A ⊗ B for which ι_A and ι_B are algebra maps, and smash product structures of the form (A, B, R): given the multiplication m_X on X, we put
R(b ⊗ a) = m_X(ι_B(b) ⊗ ι_A(a))
Let (A, B, R) and (A', B', R') be smash product structures. A morphism (A, B, R) → (A', B', R') consists of a pair (α, β), where α : A → A' and β : B → B' are algebra maps such that (α ⊗ β) ◦ R = R' ◦ (β ⊗ α), or
α(a_R) ⊗ β(b_R) = α(a)_{R'} ⊗ β(b)_{R'}
for all a ∈ A and b ∈ B. S(k) will denote the category of smash product structures over k.

Proposition 22. If (A, B, R) ∈ S(k) is a smash product structure, then (B^op, A^op, τ ◦ R ◦ τ) is also a smash product structure. Furthermore, the switch map
τ : (A#_R B)^op → B^op #_{τ◦R◦τ} A^op
is an algebra isomorphism.

Proof. The first statement is obvious. To prove the second one, we only need to show that τ is anti-multiplicative. Indeed,
τ(c#d) τ(a#b) = (d#c)(b#a) = d ·op b_R # c_R ·op a = b_R d # a c_R = τ(a c_R # b_R d) = τ((a#b)(c#d))

Proposition 23. Let (A, B, R) ∈ S(k) be a smash product structure, and assume that R is invertible. Then (B, A, S = R^{-1}) is also a smash product structure, and R : B#_S A → A#_R B is an algebra isomorphism, with inverse S.

Proof. We write down conditions (2.26-2.29) as commutative diagrams. Change the direction of the morphisms involving R, and replace R by R^{-1} = S. We then obtain the diagrams telling us that (B, A, R^{-1}) is a smash product structure. We are left to prove that R is multiplicative. This works as follows: for all b, d ∈ B and a, c ∈ A, we have
R (b#a)(d#c) = R bdS #aS c = (aS c)R #(bdS )R (2.28) = (aS c)R1 R2 #bR2 dSR1 (2.29) (S = R
−1
= (aSR1 cR3 )R2 #bR2 dSR1 R3
)
= (acR3 )R2 #bR2 dR3
(2.29)
= aR2 cR3 R4 #bR2 R4 dR3 = (aR2 #bR2 )(cR3 #dR3 ) = R(b#a)R(d#c)
Theorem 8. Let A be a k-algebra, and C a k-coalgebra which is finitely generated and projective as a k-module. Then there is a bijective correspondence between left-right entwining structures of the form (A, C, ψ) and smash product structures of the form (A, C ∗ , R). If R corresponds to ψ, then the categories A M(ψ)C and A#R C ∗ M are isomorphic. Proof. Let {ci , c∗i | i = 1, · · · , n} be a dual basis for C. For an entwining structure (A, C, ψ) ∈ • E• (k), we define f (ψ) = R : C ∗ ⊗ A → A ⊗ C ∗ by ∗ R(c∗ ⊗ a) = c∗ , cψ (2.30) i aψ ⊗ ci i
We claim that (A, C , R) is a smash product structure. For all c∗ , d∗ ∈ C ∗ and a, b ∈ A, we have ∗ ∗ c∗ , cψ aRr ⊗ c∗r ∗ d∗R = i (AR )ψ ⊗ ci ∗ dR ∗
i
=
∗ Ψ ∗ ∗ c∗ , cψ i d , cj aΨ ψ ⊗ ci ∗ cj
i,j
(1.5)
=
(2.3)
=
c∗ , (ci(1) )ψ d∗ , (ci(2) )Ψ aΨ ψ ⊗ c∗i
i
ψ ∗ ∗ c∗ , (cψ i )(1) d , (ci )(2) aψ ⊗ ci
i
=
∗ c∗ ∗ d∗ , cα i aψ ⊗ ci
i
= R(c∗ ∗ d∗ ⊗ a) proving (2.28). aR br ⊗ (c∗ )Rr =
∗ c∗R , cψ i aR bψ ⊗ ci
i
=
∗ Ψ ∗ c∗j , cψ i c , cj aΨ bψ ⊗ ci
i,j
=
i
∗ c∗ , cψΨ i aΨ bψ ⊗ ci
2.3 Entwined modules and the smash product
(2.7)
=
53
∗ c∗ , cψ i (ab)ψ ⊗ ci
i
= R(c∗ ⊗ ab) proving (2.29). (2.26) and (2.27) are left to the reader. Conversely, if (A, C ∗ , R) is a smash product structure, then we define g(R) = ψ : A ⊗ C → A ⊗ C by (c∗i )R , caR ⊗ ci (2.31) ψ(a ⊗ c) = i
Then aΨ ψ ⊗ (c(1) )ψ ⊗ (c(2) )Ψ = =
(c∗i )R , c(1) aΨ R ⊗ ci ⊗ (c(2) )Ψ
i
(c∗i )R , c(1) (c∗j )r , c(2) arR
⊗ ci ⊗ cj
i,j
=
(c∗i )R ∗ (c∗j )r , carR ⊗ ci ⊗ cj
i,j
(2.28)
=
(c∗i ∗ c∗j )R , caR ⊗ ci ⊗ cj
i,j
(1.5)
=
(c∗i )R , caR ⊗ ∆(ci )
i
= aψ ⊗ δ(cψ ) proving (2.3). To prove (2.7): aψ bΨ ⊗ cΨ ψ = (c∗i )R , cΨ aR bΨ ⊗ ci = (c∗i )R , cj (c∗j )r , caR br ⊗ ci = (c∗i )R , cj c∗j r , c aR br ⊗ ci (2.29)
= (c∗i )Rr , caR br ⊗ ci = (c∗i )R , c(ab)R ⊗ ci = ψ(ab ⊗ c)
Next observe that (g(f (ψ)))(a ⊗ c) =
(c∗i )R , caR ⊗ ci i
=
∗ c∗i , cψ j cj , caψ ⊗ ci
i,j
=
c∗j , caψ ⊗ cψ j
j
= aψ ⊗ cψ = ψ(a ⊗ c)
54
2 Doi-Koppinen Hopf modules and entwined modules
and it follows that (g ◦ f )(ψ) = ψ. In a similar way, we can prove that (f ◦ g)(R) = R, and this finishes the proof of the first part of the Theorem. We will now define an isomorphism F : A M(ψ)C → A#R C ∗ M. For M ∈ C ∗ A M(ψ) , we define F (M ) = M as a k-module, with left A#R C -action defined by (a#c∗ ) · m = c∗ , m[1] a · m[0] (2.32) It is clear that M is an A#R C ∗ -module, since (a#c∗ )(b#d∗ ) · m = (abR #(c∗R ∗ d∗ )) · m = c∗R ∗ d∗ , m[1] abR m[0] ∗ ∗ = c∗ , cψ i ci , m[1] d , m(2) abψ m[0] i
= (a#c∗ ) · d∗ , m[1] bm[0] = (a#c∗ ) · (b#d∗ ) · m Conversely, if M is a left A#R C ∗ -module, we define G(M ) ∈ G(M ) = M as a k-module, with left A-action
C A M(ψ) :
am = (a#εC ) · m and right C-coaction ρr (m) =
(1#c∗i ) · m ⊗ ci
i
Further details are left to the reader. Theorem 9. Let A be a k-algebra, and C a coalgebra which is finitely generated and projective as a k-module. Then there is a bijection between right-left entwining structures of the form (A, C, ϕ) and smash product structures of the form (C ∗ , A, S). In this situation we have an isomorphism of categories C
M(ψ)A ∼ = MC ∗ #S A
Proof. We use the left-right dictionary. If (A, C, ϕ) ∈ • E• (k), then (Aop , C cop , τ ◦ ϕ ◦ τ ) ∈ • E• (k) (see Proposition 14). Using Theorem 8, we find (Aop , C cop∗ = C ∗op , R) ∈ S(k). Finally Proposition 22 gives the corresponding smash product structure (C ∗ , A, S = τ ◦ R ◦ τ ). From (2.30) and (2.31), it follows that the correspondence between S and ϕ is given by the formulas ∗ S(a ⊗ c∗ ) = c∗ , cϕ (2.33) i ci ⊗ aϕ i
(c∗i )R , cci ⊗ aR ϕ(c ⊗ a) = i
(2.34)
2.3 Entwined modules and the smash product
55
Take an entwining structure (A, C, ψ) ∈ • E• (k). Assume that C is finitely generated and projective, and that ψ is invertible. Then we have a right-left entwining structure (C, A, ϕ = τ ◦ ψ −1 ◦ τ ) ∈ • E• (k) (see Proposition 15). Let (A, C ∗ , R) and (C ∗ , A, S) be the two corresponding smash product structures from Theorems 8 and 9. One is then tempted to conjecture that S = R−1 , and therefore A#R C ∗ ∼ = C ∗ #S A, according to Proposition 23. Surprisingly, this is not true in general! a straightforward computation shows that S = R−1 if and only if c∗i , cϕ cψ (2.35) c⊗a = i ⊗ aψϕ i
c∗i , cψ cϕ c⊗a = i ⊗ aϕψ
(2.36)
i
for all c ∈ C and a ∈ A. The condition ϕ = τ ◦ ψ −1 ◦ τ amounts to c∗i , cϕ cψ c⊗a = i ⊗ aϕψ
(2.37)
i
c⊗a =
c∗i , cψ cϕ i ⊗ aψϕ
(2.38)
i
for all c ∈ C and a ∈ A. We will make the difference clear in the Doi-Hopf case. Examples 5. 1) Let A be a right H-comodule algebra, and B a right H-module algebra. Define R : B ⊗ A → A ⊗ B by R(b ⊗ a) = a[0] ⊗ ba[1]
(2.39)
Then (A, B, R) is a smash product structure, and the multiplication on A#R B is given by the formula (a#b)(c#d) = ac[0] #(bc[1] )d
(2.40)
If H has a twisted antipode, then R is invertible, and R−1 is given by the formula R−1 (b ⊗ a) = bS(a[1] ) ⊗ a[0] (2.41) 2) In a similar way, if A is a left H-comodule algebra, and B is a left H-module algebra, then we have a smash product structure (B, A, R), with R(a ⊗ b) = a[−1] b ⊗ a[0] 3) Let (H, A, C) ∈ • aDK• (k), i.e. A is a left H-module algebra, and C is a right H-comodule coalgebra. If C is finitely generated projective, then C ∗ is a left H-comodule algebra, cf. Proposition 7, so we find a smash product structure (A, C ∗ , R). Now (H, A, C) defines an entwining structure (see
56
2 Doi-Koppinen Hopf modules and entwined modules
Proposition 18), and Theorem 8 produces another smash product structure (A, C ∗ , R ). As one might expect, R = R , since R (c ⊗ a) =
n
∗ c∗ , cψ i aψ ⊗ ci
i=1
=
n
c∗ , ci[0] ci[0] a ⊗ c∗i
i=1
= c∗[−1] a ⊗ c∗[0] = R(c ⊗ a) Let (A, B, R) be a smash product structure. Can we find a bialgebra H, an H-coaction on A and an H-action on B such that R is given by (2.39). We have discussed this question already for entwining structures. For smash product structures, the answer is the following: Theorem 10. (Tambara [178]) Let A be a finitely generated and projective algebra, and H = H(A) as in Proposition 19. For every algebra B, we have a bijective correspondence between smash product structures of the form (A, B, R) and right H-module algebra structures on B. A similar result holds if B is finitely generated projective. Proof. The proof is similar to the corresponding proofs for entwining structures (Theorems 5 and 6). We know that A is a right H-comodule algebra. Given R, we define a right H-action on B as follows: b · [a∗ ⊗ a] = a∗ , aR bR We invite the reader to prove that this puts an H-module algebra structure on H. Conversely, if B is a right H-module algebra, then Example 5 1) tells us how to produce a smash product structure. Example 8. If (H, A, C) ∈ • DK• (k), then C ∗ is a right H-module algebra, the right H-action on C ∗ is given by c∗ h, c = c∗ , hc and we obtain a smash product structure (A, C ∗ , R). We have a functor F :
C A M(H)
→ A#R C ∗ M
F (M ) = M as a k-module, with left A#R C ∗ -action (a#c∗ ) · m = c∗ , m[1] am[0] If C is finitely generated and projective, then the map R coincides with the one from Theorem 8. First observe that c∗ , hci c∗i c∗ h = i
2.3 Entwined modules and the smash product
57
To see this, apply both sides to an arbitrary c ∈ C. Thus, according to (2.30), c∗ , a[1] ci a[0] ⊗ c∗i R(c∗ ⊗ a) = i
=
a[0] ⊗ c∗ a[1] , ci c∗i
i
= a[0] ⊗ (c∗ a[1] )
(2.42)
If C is projective, but not necessarily finitely generated, then F is fully faithful. Indeed, if f : M → N is a left A#R C ∗ -linear map between two Doi-Hopf modules M and N , then f is left A-linear and left C ∗ -linear, and therefore right C-colinear, by Proposition 3. Consequently, A M(H)C can be viewed as a full subcategory of A#R C ∗ M. Example 9. Now assume that H has an antipode. To (H, A, C) ∈ • DK• (k), we can associate (A, C, ψ) ∈ • E• (k) and (A, C, ϕ) ∈ • E• (k) Recall that ϕ : C ⊗ A → C ⊗ A is given by ϕ(c ⊗ a) = S(a[1] )c ⊗ a[0] We have associated smash product structures (A, C ∗ , R) and (C ∗ , A, S), with R given by (2.42), and S by (2.43) S(a ⊗ c∗ ) = c∗ S(a[1] ) ⊗ a[0] Even if C is not necessarily finitely generated and projective, (2.43) defines a smash product structure. In any case, we have a functor F : C M(H)A → MC ∗ #S A . F (M ) = M as a k-module, with action m · (c∗ #a) = c∗ , m[−1] m[0] a If C is finitely generated and projective, then F is an isomorphism of categories. If H has a twisted antipode, then the inverse of R exists and is given by (see (2.41)) (2.44) R−1 (a ⊗ c∗ ) = c∗ S(a[1] ) ⊗ a[0] If the antipode S of H is of order 2, then we can conclude from (2.43) and (2.44) that S = R−1 , and we have the following result. Proposition 24. Let (H, A, C) ∈ • DK(k)• , and assume that H has an antipode of order 2. Let (A, C ∗ , R) and (C ∗ , A, S) be defined as in Example 9. Then the smash products A#R C ∗ and C ∗ #S A are isomorphic. Koppinen’s smash product Let (A, C, ψ) ∈ • E• (k) be a left-right entwining structure. The Koppinen smash #ψ (C, A) is equal to Hom(C, A) as a k-module, but with twisted multiplication (f • g)(c) = f (cψ (1) )g(c(2) )ψ for all f, g : C → A and c ∈ C.
58
2 Doi-Koppinen Hopf modules and entwined modules
Proposition 25. If (A, C, ψ) is a left-right entwining structure, then #ψ (C, A) is an associative algebra, with unit ηA ◦ εC . Proof. The proof of the associativity goes as follows: ((f • g) • h)(c) = (f • g)(cψ (1) )h(c(2) )ψ ψ cψ = f cψ h(c(2) )ψ (1) (1) g (1) (2) ψ ψ ψ (2.3) = f cΨ h(c(3) )ψΨ (1) g c(2) ψ ψ g c(2) h(c(3) )ψ (2.7) = f cΨ (1) Ψ (g • h)(c = f cΨ ) (2) Ψ (1) = (f • (g • h))(c) From (2.2) and (2.4), it follows easily that ηA ◦ εC is the unit element of #ψ (C, A). Proposition 26. C ∗ and A are subalgebras of #ψ (C, A), via c∗ → ηA ◦ c∗ and a → aη ◦ εC Proof. Obvious. Proposition 27. For (A, B, ψ) ∈ • E• (k), we have a functor F :
C A M(ψ)
→ #ψ (C,A) M
For an entwined module M , F (M ) = M as a k-module, with left #ψ (C, A)action given by f · m = f (m[1] )m[0] Proof. We will prove that F (M ) is a #ψ (C, A)-module, leaving further details to the reader. For f, g : C → A, we have f · (g · m) = f · (g(m[1] )m[0] ) = f (mψ [1] )g(m(2) )ψ m[0] ) = (f • g)(m[1] )m[0] ) = (f • g) · m Proposition 28. Let (A, B, ψ) ∈ • E• (k), and assume that C is finitely generated and projective as a k-module. Let (A, C ∗ , R) be the corresponding smash product structure (cf. Theorem 8). Then we have an algebra isomorphism s : A#R C ∗ → #ψ (C, A) given by
s(a#c∗ )(c) = c∗ , ca
for all a ∈ A, c ∈ C and c∗ ∈ C ∗ .
2.4 Entwined modules and the smash coproduct
59
Proof. It is well-known that s is a k-module isomorphism if C is finitely generated and projective. So we only need to show that s is an algebra map. Let {ci , c∗i | i = 1, · · · , n} be a finite dual basis of C. For all a, b ∈ A, c ∈ C and c∗ , d∗ ∈ C ∗ , we have ∗ ∗ s (a#c∗ )(b#d∗ ) (c) = s c∗ , cψ i abψ #ci ∗ d (c) ∗ ∗ = c∗ , cψ i ci ∗ d , cabψ ∗ ∗ = c∗ , cψ i ci , c(1) d , c(2) abψ ∗ = c∗ , cψ (1) d , c(2) abψ ∗ = (aηA ◦ c∗ )(cψ (1) ) (bηA ◦ d )(c(2) ) ψ = (aηA ◦ c∗ ) • (bηA ◦ d∗ ) (c) = s(a#c∗ ) • s(b#d∗ ) (c)
Example 10. Let (H, A, C) ∈ • DK• (k), and let (A, C, ψ) be the associated entwining structure. We will write #H (C, A) = #ψ (C, A). The product on #H (C, A) is given by the formula (f • g)(c) = f g(c(2) )(1) c(1) g(c(2) )[0] This multiplication appeared first in [111, Def. 2.2]. In the situation where C = H, it appears already in [68] and [110].
2.4 Entwined modules and the smash coproduct The result in this Section may be viewed as the duals of the ones in Section 2.3. Let C and D be coalgebras, and consider a linear map V : C ⊗ D → D ⊗ C. We will use the notation V (c ⊗ d) = dV ⊗ cV C be the quantum plane: ∆(x) = x ⊗ x,
∆(y) = y ⊗ 1 + x ⊗ y,
ε(x) = 1,
ε(y) = 0.
C = kx is a subcoalgebra of H. For a ∈ k let σa : C ⊗ H → k be given by σa (x ⊗ 1) = 1,
σa (x ⊗ x) = 0,
σa (x ⊗ y) = a
and extend σa to the whole of C ⊗ H using (H3). Then, σa is a Hopf function. By Proposition 130, it suffices to check that (H1) holds for h ∈ {x, y}. For h = x, (H1) is σa (x ⊗ x)x2 = σa (x ⊗ x)x which holds if and only if σa (x ⊗ x) = 0. For h = y, (H1) has the form σa (x ⊗ y)x + σa (x ⊗ x)yx = σa (x ⊗ y)x, which is true, as σa (x ⊗ x) = 0. In fact, we have also proved the converse: if (H, C, σ) is a bialgebra with a Hopf function, then σ = σa for some a ∈ k. 3. Let M be a monoid and N = {n ∈ M | xn = n, for all x ∈ M }. Let H = k[M ] and C = k[F ], with F ⊂ N . Let σ : k[F ] ⊗ k[M ] → k be such that σ(f ⊗ 1) = 1 and σ(f, •) : M → (k, ·) is a morphism of monoids for all f ∈ F . Then σ is a Hopf function. We give such a concrete example. Let a ∈ k and Fa (k) = {u : k → k | u(a) = a}. (Fa (k), ◦) is a monoid, and F = {Ik } ⊂ N . σ : k[F ] ⊗ k[Fa (k)] → k,
σ(Ik ⊗ u) = 1
is a Hopf function. Let H be a bialgebra, C ⊂ H a subcoalgebra, and σ : C ⊗ H → k a k-linear map. As usual, σ12 , σ13 , σ23 : C ⊗ C ⊗ H → k are defined by: σ12 (c ⊗ d ⊗ x) = ε(x)σ(c ⊗ d),
σ13 (c ⊗ d ⊗ x) = ε(d)σ(c ⊗ x)
σ23 (c ⊗ d ⊗ x) = ε(c)σ(d ⊗ x) for all c, d ∈ C, x ∈ H.
6.2 The FRT Theorem for the Hopf equation
263
Proposition 131. Let (H, C, σ) be a bialgebra with a Hopf function σ : C ⊗ H → k. Then σ23 ∗ σ13 ∗ σ12 = σ12 ∗ σ23 (6.22) in Hom(C ⊗ C ⊗ H, k). If (M, ρ) is a right C-comodule, then the map R = R(σ,M,ρ) : M ⊗ M → M ⊗ M,
R(m ⊗ n) = σ(m[1] ⊗ n[1] )m[0] ⊗ n[0]
is a solution of the Hopf equation. Proof. For c, d ∈ C and x ∈ H, we have: (σ12 ∗ σ23 )(c ⊗ d ⊗ x) = ε(x(1) )σ(c(1) ⊗ d(1) )ε(c(2) )σ(d(2) ⊗ x(2) ) = σ(c ⊗ d(1) )σ(d(2) ⊗ x) and (σ23 ∗ σ13 ∗ σ12 )(c ⊗ d ⊗ x) = ε(c(1) )σ(d(1) ⊗ x(1) )ε(d(2) )σ(c(2) ⊗ x(2) )ε(x(3) )σ(c(3) ⊗ d(3) ) = σ(c(1) ⊗ x(2) )σ(c(2) ⊗ d(2) )σ(d(1) ⊗ x(1) ) = σ(c ⊗ x(2) d(2) )σ(d(1) ⊗ x(1) ) = σ c ⊗ σ(d(1) ⊗ x(1) )x(2) d(2) = σ c ⊗ σ(d(2) ⊗ x)d(1) = σ(c ⊗ d(1) )σ(d(2) ⊗ x) proving (6.22). The fact that R is a solution of the Hopf equation follows from (6.22) and from the formulas: R12 R23 (u ⊗ v ⊗ w) = R12 σ(v[1] ⊗ w[1] )u ⊗ v[0] ⊗ w[0] = σ(u[1] ⊗ v[0][1] )σ(v[1] ⊗ w[1] )u[0] ⊗ v[0][0] ⊗ w[0] = σ(u[1] ⊗ v[1](1) )σ(v[1](2) ⊗ w[1] )u[0] ⊗ v[0] ⊗ w[0] = σ12 ∗ σ23 (u[1] ⊗ v[1] ⊗ w[1] )u[0] ⊗ v[0] ⊗ w[0] and R23 R13 R12 (u ⊗ v ⊗ w) = R23 R13 σ(u[1] ⊗ v[1] )u[0] ⊗ v[0] ⊗ w = R23 σ(u[0][1] ⊗ w[1] )σ(u[1] ⊗ v[1] )u[0][0] ⊗ v[0] ⊗ w[0] = R23 σ(u[1] ⊗ w[1] )σ(u[2] ⊗ v[1] )u[0] ⊗ v[0] ⊗ w[0]
264
6 Hopf modules and the pentagon equation
= σ(v[0][1] ⊗ w[0][1] )σ(u[1] ⊗ w[1] )σ(u[2] ⊗ v[1] ) u[0] ⊗ v[0][0] ⊗ w[0][0] = σ(v[1] ⊗ w[1] )σ(u[1] ⊗ w[2] )σ(u[2] ⊗ v[2] )u[0] ⊗ v[0] ⊗ w[0] = σ(u[1](1) ⊗ w[1](2) )σ(u[1](2) ⊗ v[1](2) )σ(v[1](1) ⊗ w[1](1) ) u[0] ⊗ v[0] ⊗ w[0] = σ23 ∗ σ13 ∗ σ12 (u[1] ⊗ v[1] ⊗ w[1] )u[0] ⊗ v[0] ⊗ w[0] We will present the analog of Theorem 60 for the Hopf equation. The bialgebra B(R) and the coaction ρ of B(R) on M are as in Theorem 66. Theorem 68. Let M be a finite dimensional vector space and R ∈ End(M ⊗ M ) a solution of the Hopf equation. Let C be the subcoalgebra of B(R) generated by the (cij ). 1. There exists a unique Hopf function σ : C ⊗ B(R) → k such that R = R(σ,M,ρ) . 2. If R is bijective and commutative, then σ is invertible in the convolution algebra Hom(C ⊗ B(R), k). Proof. 1. First we prove that σ is unique. Let σ : C ⊗ B(R) → k be a Hopf function such that R = Rσ . Let u, v = 1, · · · , n. Then R(σ,M,ρ) (mv ⊗ mu ) = σ (mv )[1] ⊗ (mu )[1] (mv )[0] ⊗ (mu )[0] = σ(civ ⊗ cju )mi ⊗ mj and the fact that Rσ (mv ⊗ mu ) = R(mv ⊗ mu ) implies σ(civ ⊗ cju ) = xij vu
(6.23)
As B(R) is generated as an algebra by the (cij ), the relations (6.23) with (H2) and (H3) imply the uniqueness of σ. Our next goal is to prove the existence of σ. First we define σ0 : C ⊗ C → k using (6.23). Then we extend σ0 to a map σ1 : C ⊗ T (C) → k such that (H2) and (H3) hold. In order to prove that σ1 factorizes trough a map σ : C ⊗ B(R) → k, we have to show that σ1 (C ⊗ I) = 0, whith I the two-sided ideal of T (C) generated by all χij kl . This follows from the fact that αj ij p u v p σ1 (cpq ⊗ χij kl ) = xvu σ1 (cq ⊗ ck cl ) − xlk σ1 (cq ⊗ cα i) p αj pi u β v = xij vu σ1 (cβ ⊗ ck )σ1 (cq ⊗ cl ) − xlk xqα pu βv αj pi = xij vu xβk xql − xlk xqα = 0
We have constructed σ : C ⊗ B(R) → k such that (H2) and (H3) hold and R = R(σ,M,ρ) . We are left to prove (H1). By proposition 130, it suffices to check (H1) for c = cil and h = cjk , for all i, j, k, l = 1, · · · , n. We have
6.2 The FRT Theorem for the Hopf equation
σ(c(1) ⊗ h(1) )h(2) c(2) =
σ(civ ⊗ cju )cuk cvl =
u,v
and σ(c(2) ⊗ h)hc(1) =
265
u v xji uv ck cl
u,v
j i σ(cα l ⊗ ck )cα =
α
i xjα kl cα
α
so σ(c(1) ⊗ h(1) )h(2) c(2) − σ(c(2) ⊗ h)hc(1) = χ(i, j, k, l) = 0 and (H1) also holds, as needed. ij 2. Assume that R is bijective and let S = R−1 . Let (yvu ) be the matrix of S, i.e. ij S(mv ⊗ mu ) = yvu mi ⊗ m j , As RS = SR = IdM ⊗M we have
We define
αβ p i xpi αβ yqj = δq δj ,
pi αβ yαβ xqj = δqp δji
σ0 : C ⊗ C → k,
ij σ0 (civ ⊗ cju ) = yvu
and we extend σ0 to a map σ1 : C ⊗ T (C) → k in such a way that σ1 satisfies (H2) and (H3). First we prove that σ1 is the convolution inverse of σ1 : β σ1 (cpq )(1) ⊗ (cij )(1) σ1 (cpq )(2) ⊗ (cij )(2) = σ1 (cpα ⊗ ciβ )σ1 (cα q ⊗ cj ) αβ p i p i = xpi αβ yqj = δq δj = ε(cq )ε(cj )
and β σ1 (cpq )(1) ⊗ (cij )(1) σ1 (cpq )(2) ⊗ (cij )(2) = σ1 (cpα ⊗ ciβ )σ1 (cα q ⊗ cj ) pi αβ xqj = δqp δji = ε(cpq )ε(cij ) = yαβ
To show that σ : C ⊗ B(R) → k is also convolution invertible, we have to show that σ1 factorizes through a map σ : C ⊗ B(R) → k. We will prove that this happens if and only if R12 R13 = R13 R12 , i.e. R is commutative. σ1 factorizes if and only if σ1 (cpq ⊗ χij kl = 0, or, equivalently,
αj p p u v i xij vu σ1 (cq ⊗ ck cl ) = xlk σ1 (cq ⊗ cα )
which is equivalent to αj pi p u β v xij uv σ1 (cβ ⊗ ck )σ1 (cq ⊗ cl ) = xlk yqα
and pu βv αj pi xij vu yβk yql = xlk yqα
266
6 Hopf modules and the pentagon equation
From Proposition 125, it follows that this is equivalent to R23 S 13 S 12 = S 12 R23 Since S = R−1 , this last equation turns into R12 R23 = R23 R12 R13
(6.24)
Finally R is a bijective solution of the Hopf equation, i.e. R12 R23 = R23 R13 R12 , and we see that (6.24) is equivalent to R12 R13 = R13 R12 , completing the proof of the Theorem. Remark 19. There is a major difference between part 2) of Theorem 68 and the corresponding statement for the quantum Yang-Baxter equation. In the QYBE case, the bijectivity of a solution R implies that the map σ : A(R) ⊗ A(R) → k making A(R) coquasitriangular is convolution invertible. Behind this is the observation that R is a solution of the QYBE if and only if R−1 is a solution of the QYBE. In the Hopf equation situation, things are complicated by the fact that a bijective R is a solution of the Hopf equation if and only if R−1 is a solution of the pentagon equation. Now we introduce the dual notion of a bialgebra with a Hopf function (H, C, σ). This corresponds to the concept of quasitriangular bialgebra. Definition 14. Let H be a bialgebra and A a subalgebra of H. An element R = R1 ⊗ R2 ∈ A ⊗ H is called a Hopf element if (HE1) ∆(R1 ) ⊗ R2 = R13 R23 (HE2) ε(R1 )R2 = 1 (HE3) ∆cop (a)R = R(1 ⊗ a) for all a ∈ A. We will say that (H, A, R) is a bialgebra with a Hopf element. Remarks 17. 1. (HE1) and (HE2) are exactly (QT1) and (QT2), up to the fact that R ∈ A⊗H in the Hopf case, while R ∈ H ⊗H in the quasitriangular case. (HE3) is obtained by modifying the right hand side of (QT5), in such a way that an integral type condition appears. Let us explain this in detail. Take t = R1 ε(R2 ) ∈ A. (HE3) can be re written as a(2) R1 ⊗ a(1) R2 = R1 ⊗ R2 a
(6.25)
for all a ∈ A. Applying I ⊗ ε to (6.25), we find at = ε(a)t for all a ∈ A. Hence, if A is a subbialgebra of H, then t is a left integral in A. Applying I ⊗ ε ⊗ ε to (HE1) we get t2 = t. It follows that t = tt = ε(t)t, hence ε(t) = 1. Using the Maschke theorem for Hopf algebras, we conclude that: if (H, A, R) is a bialgebra with a Hopf element and A is a finite dimensional subbialgebra of H with an antipode, then A is semisimple.
6.3 Noncommutative noncocommutative bialgebras
267
Conversely, if t is a left integral in A, then R = t ⊗ 1 satisfies (HE3). 2. Let H be a bialgebra and A a subalgebra of H. Then R = 1 ⊗ 1 is a Hopf element if and only if A = k. Indeed, if R = 1 ⊗ 1 then (HE3) takes the form ∆cop (a) = 1 ⊗ a, for all a ∈ A. Hence, a = ε(a)1H , for all a ∈ A, i.e. A = k. 3. Let (H, A, R) be a bialgebra with a Hopf element. Suppose that H has an antipode S. Then R is invertible and R−1 = S(R1 ) ⊗ R2 . Moreover u = S(R2 )R1 ∈ H satisfies the condition S(a)u = ε(a)u, for all a ∈ A. This formula is obtained if we apply mH τ (I ⊗ S) to (6.25). We observe that if A = k then u is not invertible (if u is invertible, then R−1 = S(R1 ) ⊗ R2 = ε(R1 ) ⊗ R2 = 1 ⊗ 1, i.e. A = k). Proposition 132. If (H, A, R) is a bialgebra with a Hopf element, then R23 R13 R12 = R12 R23
(6.26)
in A ⊗ H ⊗ H. If (M, ·) is a left H-module, then the map R = R(R,M,·) : M ⊗ M → M ⊗ M,
R(m ⊗ n) = R1 · m ⊗ R2 · n
is a solution of the Hopf equation. Proof. (HE1) is equivalent to (HE1 ) ∆cop (R1 ) ⊗ R2 = R23 R13 Writing r = R, we compute R23 R13 R12 = ∆cop (R1 ) ⊗ R2 (r1 ⊗ r2 ⊗ 1) = ∆cop (R1 )R ⊗ R2 = R(1 ⊗ R1 ) ⊗ R2 = r1 ⊗ r2 R1 ⊗ R2 = R12 R23 The second statement follows from (6.26) since R23 R13 R12 (l ⊗ m ⊗ n) = R23 R13 R12 · (l ⊗ m ⊗ n) and R12 R23 (l ⊗ m ⊗ n) = R12 R23 · (l ⊗ m ⊗ n) for all l, m, n ∈ M .
6.3 New examples of noncommutative noncocommutative bialgebras Using Theorem 66 we can construct new examples of noncommutative noncocommutative bialgebras. These are different from the ones that appear in
268
6 Hopf modules and the pentagon equation
quantum group theory, and this is because the relations that we factor out are not always homogeneous. If M is a vector space with basis {m1 , · · · , mn }, then the matrix of a klinear map R : M ⊗ M → M ⊗ M is an n2 × n2 -matrix. We will write this matrix with respect to the basis {mi ⊗ mj | i, j = 1, · · · , n}, in the lexicographic order. For example (6.27) is the matrix of R with respect to the basis {m1 ⊗ m1 , m1 ⊗ m2 , m2 ⊗ m1 , m2 ⊗ m2 }. Our first example is a solution of the Hopf equation in characteristic two, giving rise to a five dimensional noncommutative noncocommutative bialgebra B(R). Proposition 133. Let k be a field and R be 1 0 0 0 1 1 R= 0 0 1 0
0
0
the matrix of M4 (k) given by 0 0 (6.27) 0 1
1. R is a solution of the Hopf equation if and only if k has the characteristic two. In this case R is commutative. 2. If char(k) = 2, then the bialgebra B(R) is the algebra with generators x, y, z and relations x2 = x,
y 2 = z 2 = yx = yz = 0,
xy = y,
xz = zx = z
The comultiplication ∆ and the counit ε are given by: ∆(x) = x ⊗ x + y ⊗ z,
∆(y) = x ⊗ y + y ⊗ x + y ⊗ zy,
∆(z) = z ⊗ x + x ⊗ z + zy ⊗ z, ε(x) = 1,
ε(y) = ε(z) = 0.
Furthermore, dimk (B(R)) = 5. Proof. 1. A direct computation tells 1 0 0 1 0 0 0 0 R12 R23 = 0 0 0 0 0 0 0
0
us that 0
0
0
0
0
1
0
0
0
0
1
0
1
0
0
0
1
0
1
1
0
0
1
0
0
0
0
0
1
1
0
0
0
0
1
0
0
0
0
0
0
0 0 0 0 0 0 1
6.3 Noncommutative noncocommutative bialgebras
and
R23 R13 R12
1
0 0 0 = 0 0 0 0
0
0
0
0
0
0
1
1
0
α
0
0
0
1
0
1
0
0
0
0
1
0
1
1
0
0
0
1
0
0
0
0
0
0
1
1
0
0
0
0
0
1
0
0
0
0
0
0
0
269
0 0 0 0 0 0 1
where α = 1 + 1. Hence, R is a solution of the Hopf equation if and only if char(k) = 2. In this case we also have that 1 0 0 0 0 0 0 0 0 1 0 0 1 0 0 0 0 0 1 0 1 0 0 0 0 0 0 1 0 1 1 0 R12 R13 = R13 R12 = 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0
0
0
0
0
0
0
1
i.e. R is commutative. kl 2. Suppose that char(k) = 2. Among the scalars (xkl ij ) that define R = (xij ) via the formula (6.11) the only nonzero elements are 22 12 21 12 x11 11 = x22 = x12 = x21 = x21 = 1.
Now, the relations χij kl = 0, written in lexicographic order are: c11 c11 = c11 ,
c11 c12 = c12 ,
c12 c11 = 0,
c12 c12 = 0,
c21 c11 + c11 c21 = 0,
c12 c12 + c11 c22 = c11 ,
c22 c11 + c12 c21 = c11 ,
c22 c12 + c12 c22 = c12 ,
c11 c21 = c21 , c21 c21 = 0,
c11 c22 = c22 , c21 c22 = c21 ,
c12 c21 = 0,
c12 c22 = 0,
c22 c21 = c21 ,
c22 c22 = c22 .
270
6 Hopf modules and the pentagon equation
Now, if we write c11 = x, c12 = y, c21 = z, c22 = t and using that char(k) = 2 we get the following relations: x2 = x,
y 2 = z 2 = yx = yz = 0,
xy = y,
zy = x + t,
t2 = t,
xt = t,
yt = 0,
ty = y,
zt = tz = z.
xz = zx = z,
tx = x,
So, t is in the free algebra generated by x, y, z and t = zy − x. If we substitute t in all the relations in which t is involved, then these become identities. The relations given in the statement of the proposition remain valid. The formula for ∆ follows, as the matrix x y z
t
is comultiplicative. We will now prove that dimk (B(R)) = 5. More precisely, we will show in an elementary way (without using the Diamond Lemma) that {1, x, y, z, zy} is a k-basis for B(R). From the relations which define B(R) we obtain: x(zy) = zy,
(zy)2 = (zy)x = y(zy) = (zy)y = z(zy) = (zy)z = 0.
These relations tell us that {1, x, y, z, zy} generate B(R) as a vector space over k. We are done if we can show that {1, x, y, z, zy} is linearly independent. Let a, b, c, d, e ∈ k be such that a + bx + cy + dz + e(zy) = 0 Left multiplication by y gives a = 0. Right multiplication by z gives b = 0, and then right multiplication by x gives d = 0. Finally left multiplication by z gives c = 0, and e = 0 follows automatically. Hence {1, x, y, z, zy} is a k-basis for B(R). Remarks 18. 1. The proof also tells us that B(R) can be described as follows: – As a vector space, B(R) is five dimensional with basis {1, x, y, z, t}. – The multiplication is given by:
xy = y, yz = 0,
x2 = x,
y 2 = z 2 = 0,
t2 = t,
yx = 0,
xz = zx = z,
xt = t,
zy = x + t,
yt = 0,
ty = y,
tx = x,
zt = tz = z.
6.3 Noncommutative noncocommutative bialgebras
271
– The comultiplcation ∆ and the counity ε are defined in such way that the matrix x y z
t
is comultiplicative. Now, let C be the four dimensional subcoalgebra of B(R) with k-basis {x, y, z, t}. The map σ : C ⊗ B(R) → k defined by σ(x ⊗ 1) = 1, σ(x ⊗ x) = 1, σ(x ⊗ y) = 0, σ(x ⊗ z) = 0, σ(x ⊗ t) = 1, σ(y ⊗ 1) = 0, σ(y ⊗ x) = 0, σ(y ⊗ y) = 0, σ(y ⊗ z) = 1, σ(y ⊗ t) = 0, σ(z ⊗ 1) = 0, σ(z ⊗ x) = 0, σ(z ⊗ y) = 0, σ(z ⊗ z) = 1, σ(z ⊗ t) = 0, σ(t ⊗ 1) = 1, σ(t ⊗ x) = 1, σ(t ⊗ y) = 0, σ(t ⊗ z) = 0, σ(t ⊗ t) = 0, is a Hopf function. As R is bijective and commutative we obtain that σ is convolution invertible. Since k has characteristic two, R−1 = R, and σ −1 = σ. The bialgebra B(R) is not a Hopf algebra. Localizing B(R) we obtain a Hopf algebra isomorphic to the group algebra kC2 . Indeed, let S be a potential antipode. Then: S(x)x + S(y)z = 1 and S(x)y + S(y)t = 0 If we multiply the second equation to the right with z we get S(y)z = 0, so S(x)x = 1. But x2 = x, so x = 1 and then y = 0, t = 1. We obtain the Hopf algebra k < z | z 2 = 0 >, ∆(z) = z ⊗ 1 + 1 ⊗ z, ε(z) = 0. If we denote g = z + 1 then g 2 = 1, ∆(g) = g ⊗ g, ε(g) = 1, hence B(R) is isomorphic to the Hopf algebra kG, where G = {1, g} is a group with two elements. 2. R is commutative, so we can construct the bialgebra B(R) = k[X, Z]/(X 2 − X, Z 2 , XZ − Z) Its coalgebra structure is given by ∆(X) = X ⊗ X,
∆(Z) = X ⊗ Z + Z ⊗ X
ε(X) = 1,
ε(Z) = 0.
In the next Propositions we will construct the bialgebras that arise from the solutions of the Hopf equation given in Example 10 3). Taking quotients of one of these bialgebras, we will be able to construct a noncommutative noncocommutative bialgebra of dimension 2n + 1, for any positive integer n and any field k. First we will construct the bialgebra B(Rq ), which can be associated to the solution Rq = fq ⊗ fq , where fq is defined in Example 10. It is worthwhile to note that these bialgebras do not depend on q ∈ k \ {0}.
272
6 Hopf modules and the pentagon equation
Proposition 134. Let q ∈ k and Rq by 1 0 Rq = 0 0 Let
Eq2 (k)
be the bialgebra
the solution of the Hopf equation given q2
q
q
0
0
0
0
0 0
0
0
0
B(Rq ).
1. If q = 0, then E02 (k) is the free algebra with generators x, y, z and relations x2 = x, xz = zx = z 2 = 0. The comultiplication ∆ and the counit ε are given by ∆(x) = x ⊗ x,
∆(y) = y ⊗ y,
ε(x) = ε(y) = 1,
∆(z) = x ⊗ z + z ⊗ y ε(z) = 0.
2. If q = 0, then Eq2 (k) is the free algebra with generators A, B and relations: B3 = B2 The comultiplication ∆ and the counity ε are given by: ∆(A) = A ⊗ A,
∆(B) = B ⊗ A + B 2 ⊗ (B − A) ε(A) = ε(B) = 1
Proof. We proceed as in Proposition 133. The (xji uv ) that are different from zero are: 11 2 x11 x11 x11 11 = 1, 21 = x12 = q, 22 = q . Now the relations χij kl = 0 are c11 c11 + qc21 c11 + qc11 c21 + q 2 c21 c21 = c11 c11 c12 + qc21 c12 + qc11 c22 + q 2 c21 c22 = qc11 c12 c11 + qc22 c11 + qc12 c21 + q 2 c22 c21 = qc11 c12 c12 + qc22 c12 + qc12 c22 + q 2 c22 c22 = q 2 c11 0 = 0, 0 = c21 ,
0 = 0,
0 = qc21 ,
0 = 0,
0 = 0,
0 = 0, 0 = qc21 , 0 = 0,
0 = 0, 0 = q 2 c21 , 0 = 0,
Hence c21 = 0. If we denote c11 = x, c22 = y, c12 = z then we obtain the description of Eq2 (k) as a bialgebra:
6.3 Noncommutative noncocommutative bialgebras
273
– as an algebra, Eq2 (k) has generators x, y, z and relations x2 = x,
xz + qxy = qx,
zx + qyx = qx,
z 2 + qyz + qzy + q 2 y 2 = q 2 x.
– the comultiplication ∆ and the counit ε are given by the equations ∆(x) = x ⊗ x,
∆(y) = y ⊗ y,
∆(z) = x ⊗ z + z ⊗ y
(6.28)
and ε(x) = ε(y) = 1,
ε(z) = 0.
(6.29)
For q = 0, we obtain the relations for E02 (k). If q = 0, then x is in the free algebra generated by y and z and x = y 2 + q −1 zy + q −1 yz + q −2 z 2 = (y + q −1 z)2 . Substituting x in the other three relations, the only remaining defining relation is (y + q −1 z)3 = (y + q −1 z)2 The other two are linearly dependent on it. Writing A = y and B = y + q −1 z, we obtain the desired description of Eq2 (k). Remarks 19. 1. Let C be the three dimensional subcoalgebra of Eq2 (k) with k-basis {x, y, z}. Then σ : C ⊗ Eq2 (k) → k defined by σ(x ⊗ 1) = 1, σ(x ⊗ x) = 1, σ(x ⊗ y) = 0, σ(x ⊗ z) = q, σ(y ⊗ 1) = 1, σ(y ⊗ x) = 0, σ(y ⊗ y) = 0, σ(y ⊗ z) = 0, σ(z ⊗ 1) = 0, σ(z ⊗ x) = q, σ(z ⊗ y) = 0, σ(z ⊗ z) = q 2 . is a Hopf function. 2. If q = 0, then B 2 is a grouplike element of Eq2 (k), since ∆(B 2 ) = B 2 ⊗ A2 + B 2 ⊗ (AB − A2 ) + B 2 ⊗ (BA − A2 ) + B 4 ⊗ (B 2 − BA − AB + A2 ) = B 2 ⊗ A2 + B 2 ⊗ (AB − A2 ) + B 2 ⊗ (BA − A2 ) + B 2 ⊗ (B 2 − BA − AB + A2 ) = B2 ⊗ B2 3. Let n ≥ 2 be a positive integer. The two-sided ideal I of E02 (k) generated by y n − y, zy, xy − x and yx − x is a biideal (cf. [96]) and B2n+1 (k) = E02 (k)/I is a 2n + 1-dimensional noncommutative noncocommutative bialgebra. The bialgebra B2n+1 (k) can be described as follows:
274
6 Hopf modules and the pentagon equation
– B2n+1 (k) is algebra with generators x, y, z and relations x2 = x,
xz = zx = z 2 = zy = 0,
y n = y,
xy = yx = x.
– The comultiplication ∆ and the counity ε are given by ∆(x) = x ⊗ x,
∆(y) = y ⊗ y,
ε(x) = ε(y) = 1,
∆(z) = x ⊗ z + z ⊗ y ε(z) = 0.
4. Observe that y does not appear in the relations of E02 (k). As ∆(y − 1) = (y − 1) ⊗ y + 1 ⊗ (y − 1) and ε(y − 1) = 1, we get that the two-sided ideal generated by y − 1 is also a coideal. We can add the new relation y = 1 in the definition of E02 (k) and we obtain a three dimensional noncocommutative bialgebra T (k). More explicitly: – {1, x, z} is a basis of T (k) as a vector space. – The multiplication is given by x2 = x,
xz = zx = z 2 = 0.
– The comultiplication ∆ and the counit ε are given by ∆(x) = x ⊗ x,
∆(z) = x ⊗ z + z ⊗ 1,
ε(x) = 1,
ε(z) = 0.
In [103], Kaplansky gives two examples of three dimensional bialgebras over a field of characteristic zero, both of them are commutative and cocommutative. The only difference between our T (k) and one of Kaplansky’s bialgebras is the relation ∆(z) = x ⊗ z + z ⊗ 1 (in [103]: ∆(z) = 1 ⊗ z + z ⊗ 1). This minor change of ∆ (in our case k being a field of arbitrary characteristic) makes T (k) noncocommutative. The proofs of Propositions 135 and 136 are left to the reader, since they are similar to the proofs of Propositions 133 and 134. Proposition 135. Let q ∈ k and Rq the by 0 −q 0 1 Rq = 0 0 0 Let
Bq2 (k)
0
solution of the Hopf equation given 0
−q 2
0
q 0
0
0
0
be the bialgebra B(Rq ).
1. If q = 0, the bialgebra B02 (k) has generators x, y, z and relations yx = x,
yz = 0
6.3 Noncommutative noncocommutative bialgebras
275
The comultiplication ∆ and the counity ε are given by ∆(x) = x ⊗ x,
∆(y) = y ⊗ y,
ε(x) = ε(y) = 1,
∆(z) = x ⊗ z + z ⊗ y ε(z) = 0
2. If q = 0, the bialgebra Bq2 (k) has generators A, B and relations A2 B = AB The comultiplication ∆ and the counit ε are given by: ∆(A) = A ⊗ A,
∆(B) = q −1 AB ⊗ B + (B − AB) ⊗ A, ε(A) = 1,
ε(B) = q.
Remark 20. Bq2 (k) is not a Hopf algebra. We can localize it to obtain a Hopf algebra. As ∆(x) = x ⊗ x, ∆(y) = y ⊗ y and ε(x) = ε(y) = 1 we should add new generators which make x and y invertible. But then y = 1 and z = q(x − 1). It follows that the localization of the bialgebra Bq2 (k), is the usual Hopf algebra k[X, X −1 ], with ∆(X) = X ⊗ X, ε(X) = 1, and antipode S(X) = X −1 . Example 24. Let C be the three dimensional subcoalgebra of B02 (k) with basis {x, y, z}. An easy but long and boring computation shows that σ : C ⊗ B02 (k) → k is a Hopf function if and only if there exists a, b ∈ k such that σ(x ⊗ 1) = 1, σ(x ⊗ x) = 0, σ(x ⊗ y) = a, σ(x ⊗ z) = b, σ(y ⊗ 1) = 1, σ(y ⊗ x) = 0, σ(y ⊗ y) = 0, σ(y ⊗ z) = 0, σ(z ⊗ 1) = 0, σ(z ⊗ x) = 0, σ(z ⊗ y) = 0, σ(z ⊗ z) = 0. and ab = 0. Proposition 136. For q ∈ k, let Rq given by 1 0 Rq = 0 0
be the solution of the Hopf equation 0
0
q
1
0
0
0
q 0
0
0
0
B(Rq )
Write = Dq2 (k). The bialgebra D02 (k) has
generators x, y, z and relations
x2 = x = yx,
zx = xz = z 2 = yz = 0
The comultiplication ∆ and the counit ε are given by:
276
6 Hopf modules and the pentagon equation
∆(x) = x ⊗ x,
∆(y) = y ⊗ y,
ε(x) = ε(y) = 1,
∆(z) = x ⊗ z + z ⊗ y ε(z) = 0.
For q = 0, Dq2 (k) has generators A, B and relations A3 = A2 ,
BA = 0.
The comultiplication ∆ and the counit ε are given by: ∆(A) = A ⊗ A + q −1 (A2 − A) ⊗ B,
∆(B) = A2 ⊗ B + B ⊗ A − q −1 B ⊗ B,
ε(A) = 1,
ε(B) = 0.
Remarks 20. 1. Rq is also a solution of the quantum Yang-Baxter equation. The bialgebra A(Rq ) from Theorem 60 has generators x, y, z, t and relations xy − yx = qyz,
zx = xz = zy = z 2 = zt = 0, xy + qxt = qx2 ,
y 2 + qyt = qxy,
xt − tx = qtz,
ty + qt2 = qxt.
The comultiplication ∆ and the counit ε are such that the matrix x y z
t
is comultiplicative. 2. The bialgebra D02 (k) is the quotient of B02 (k) by the two-sided ideal (which is also a coideal) generated by x2 − x,
zx,
xz,
z2.
D02 (k) is also the quotient of E02 (k) by the two-sided ideal generated by yx − x,
yz.
3. Let n ≥ 2 be a positive integer. The bialgebras B02 (k), D02 (k) and E02 (k) constructed in the previous Propositions can be generalized to B0n (k), D0n (k) and E0n (k). We will describe B0n (k). Let π1 : k n → k n be the projection of k n onto the first component, i.e. π1 ((x1 , x2 , · · · , xn )) = (x1 , 0, · · · , 0), and let π 1 = Idkn − π1 be the projection of k n onto the hyperplane x1 = 0, that is π 1 ((x1 , x2 , · · · , xn )) = (0, x2 , · · · , xn ). Then π1 ⊗ π 1 is a solution of the Hopf equation and the bialgebra B0n (k) = B(π1 ⊗ π 1 ) can be described as follows: – B0n (k) has generators (cij )i,j=1,···,n and relations cii = 0,
cjk c11 = δkj δ1l c11
for all i, j ≥ 2 and k, l ≥ 1. As before, δvu is the Kronecker symbol.
6.4 The structure of finite dimensional Hopf algebras
277
– The comultiplcation ∆ and the counity ε are such that the matrix (cii ) is comultiplicative. The proof is similar to the one of Proposition 134. Among the elements (xij vu ), which define π1 ⊗ π 1 , the only nonzero ones x1t 1t = 1,
∀t ≥ 2.
If i = 1, all the relations χij kl = 0 are 0 = 0, with the exception of the relations ij χj1 = 0 for all j ≥ 2, which give us 0 = ci1 for all i ≥ 2. The relations χ1j kl = 0 give us cjk c1l = δkj δl1 c11 for all j ≥ 2 and k, l ≥ 1. Other new examples of bialgebras can be constructed starting from projections of k n on different intersections of hyperplanes. We end this section with one more example communicated to us by G. Mititica. Example 25. Let n be a positive integer and k a field such that n is invertible in k. Let R = (xkl ij ) given by xkl ij =
0, if i + j ≡ k + l(mod n) n−1 , if i + j ≡ k + l(mod n)
(6.30)
It is easy to show that R = (xkl ij ) is a solution of the Hopf equation. For n = 2, the bialgebra algebra B(R) from Theorem 66 is given as follows: B(R) has generators x and y and relations x2 + y 2 = x,
xy + yx = y.
The comultiplication ∆ and the counit ε are given by: ∆(x) = x ⊗ x + y ⊗ y,
∆(y) = x ⊗ y + y ⊗ x,
ε(x) = 1,
ε(y) = 0.
6.4 The pentagon equation versus the structure and the classification of finite dimensional Hopf algebras In this section we will present a fundamental construction related to the pentagon equation that associate to any solution of the pentagon equation a finite dimensional Hopf algebra. This construction is originally due to Baaj and Skandalis (see [10]) in the case of unitary multiplicatives R ∈ L(K ⊗ K), where K is a separable Hilbert space and to Davydov (see [65]) for vector spaces K over arbitrary fields. We will follow [137], leading us to the structure and the classification of finite dimensional Hopf algebras. A key role is played by the canonical element of the Heisenberg double of a Hopf algebra. Let M be a finite dimensional vector space and
278
6 Hopf modules and the pentagon equation
ϕ : M ⊗ M ∗ → End(M ),
ϕ(v ⊗ v ∗ )(w) = v ∗ , wv
the canonical isomorphism. The element R = ϕ−1 (IdM ) is called the canonical element of M ⊗ M ∗ . Of course R=
n
ei ⊗ e∗i
i=1
where {ei , e∗i | i = 1, · · · , n} is a dual basis and R is independent of the choice of the dual basis. Throughout this Section, A will be a finite dimensional algebra and R ∈ A⊗A will be an invertible solution of the pentagon equation R12 R13 R23 = R23 R12
(6.31)
For later use, we remark that if R ∈ A ⊗ A is a solution of the pentagon equation and f : A → B is an algebra map, then (f ⊗ f )(R) is a solution of the pentagon equation in B ⊗ B ⊗ B. We can define the category Pent of the pentagon objects: the objects are pairs (A, R), where A is a finite dimensional algebra and R ∈ A ⊗ A is an invertible solution of the pentagon equation. A morphism f : (A, R) → (B, T ) between two pentagon objects (A, R) and (B, T ) is an algebra map f : A → B such that (f ⊗ f )(R) = T . Pent is a monoidal category under the product (A, R) ⊗ (B, T ) = (A ⊗ B, R13 T 24 ). The following is the pentagon equation version of Proposition 126. Proposition 137. Let A be an algebra and R ∈ A ⊗ A an invertible solution of the pentagon equation. Consider the comultiplications ∆l , ∆r : A → A ⊗ A given by ∆r (a) = R−1 (1A ⊗ a)R = (6.32) U 1 R1 ⊗ U 2 aR2 ∆l (a) = R(a ⊗ 1A )R−1 = R1 aU 1 ⊗ R2 U 2 (6.33) where U = U 1 ⊗ U 2 = R−1 . Then Ar = (A, ·, ∆r ) and Al = (A, ·, ∆l ) are bialgebras without counit. Proof. It is obvious that ∆r and ∆l are algebra maps. For a ∈ A we have (Id ⊗ ∆r )∆r (a) = (R23 )−1 (R13 )−1 (1A ⊗ 1A ⊗ a)R13 R23 and
(∆r ⊗ Id)∆r (a) = (R12 )−1 (R23 )−1 (1A ⊗ 1A ⊗ a)R23 R12
so ∆r is coassociative if and only if R23 R12 (R23 )−1 (R13 )−1 (1A ⊗ 1A ⊗ a) = (1A ⊗ 1A ⊗ a)R23 R12 (R23 )−1 (R13 )−1
6.4 The structure of finite dimensional Hopf algebras
279
using the pentagon equation (6.31), we find that this is equivalent to R12 (1A ⊗ 1A ⊗ a) = (1A ⊗ 1A ⊗ a)R12 and this equality holds for any a ∈ A. In a similar way we can prove that ∆l is also coassociative. It follows from Proposition 137 that we can put two different algebra structures (without unit) on A∗ : the multiplications are the convolutions ∗l and ∗r which are the dual maps of ∆l and ∆r , i.e. ω, R1 aU 1 ω , R2 U 2 ω ∗l ω , a = ω ∗r ω , a = ω, U 1 R1 ω , U 2 aR2 for all ω, ω ∈ A∗ , a ∈ A. We have seen in Example 9 2) that the canonical element of the Drinfeld double is a solution of the QYBE. If the case of the pentagon equation, a similar result holds for the Heisenberg double H(L) of a Hopf algebra L (cf. Section 4.1). Let L be a Hopf algebra. Recall that L∗ is a left L-module algebra in the usual way h · g ∗ , h = g ∗ , h h and the Heisenberg double is by definition the smash product H(L) = L#L∗ The multiplication is given by (h#h∗ )(g#g ∗ ) = h(2) g#h∗ ∗ (h(1) · g ∗ ) Recall also that the maps iL : L → H(L), and
iL∗ : L∗ → H(L),
iL (l) = l#εL iL∗ (l∗ ) = 1L #l∗
are injective algebra maps. The Heisenberg double H(L) satisfies the following universal property: given a k-algebra A and algebra maps u : L → A, v : L∗ → A such that u(l)v(l∗ ) = v(l(1) · l∗ )u(l(2) ) (6.34) there exists a unique algebra map F : H(L) → A (given by F (l#l∗ ) = v(l∗ )u(l), for all l ∈ L, l∗ ∈ L∗ ) such that the following diagram commutes
280
6 Hopf modules and the pentagon equation
H(L) iL
L
I @ @ iL∗ @ @ @ F
@ @ @ u @ @ R
? A
L∗
v
The Heisenberg double H(L) presented above, differs from H(L) introduced in [140, Example 4.1.10], where H(L) = L# L∗ , with multiplication given by (h# h∗ )(g# g ∗ ) = hh∗(1) , g(2) g(1) # h∗(2) g ∗ for all h, g ∈ L, h∗ , g ∗ ∈ L∗ . However, [140, Corollary 9.4.3] and Proposition 138 show that the two descriptions of the Heisenberg double H(L) and H(L) are isomorphic as algebras, both of them being isomorphic to the matrix algebra Mn (k), where n = dim(L). Proposition 138. Let L be a finite dimensional Hopf algebra. Then there exists an algebra isomorphism H(L) ∼ = Mdim(L) (k). Proof. As L is finite dimensional, the functor T : L ML → MH(L) ,
T (M ) = M
where the right H(L)-action on M is given by l∗ , m m · l m • (l#l∗ ) = is an equivalence of categories (see Theorem 8). As the antipode of L is bijective ([172]) we have the following equivalences of categories MH(L) ∼ = L ML ∼ = Mk i.e. H(L) is Morita equivalent to k. It follows from the Morita theory (see, for instance, [2], pag. 265) that there exists an algebra isomorphism H(L) ∼ = Mn (k). Taking into account that dim(H(L)) = dim(L)2 , we obtain that n = dim(L). Remark 21. Let L be a finite dimensional Hopf algebra. Kashaev ([106]) proved that the Drinfeld double D(L) can be realized as a subalgebra in the tensor product of two Heisenbergs H(L) ⊗ H(L∗ ). This can be proved
6.4 The structure of finite dimensional Hopf algebras
281
immediately using Proposition 138: if dim(L) = n then dim(D(L)) = n2 and hence D(L) ⊂ Mn2 (k) ∼ = Mn (k) ⊗ Mn (k) ∼ = H(L) ⊗ H(L) ∼ = H(L) ⊗ H(L∗ ). The following theorem is [181, Theorem 5.2] and [106, Theorem 1]. In [181], the Heisenberg double does not appear explicitly, and in [106] Heisengerg double is described in terms of structure constants, and not as a smash product. Theorem 69. Let L be a finite dimensional Hopf algebra and {ei , e∗i | i = 1, · · · , n} a dual basis of L. Then the canonical element (ei #ε) ⊗ (1#e∗i ) ∈ H(L) ⊗ H(L) R= i
is an invertible solution of the pentagon equation in H(L) ⊗ H(L) ⊗ H(L). Consequently, if A is and algebra, and f : H(L) → A an algebra map, then (f ⊗ f )(R) is an invertible solution of the pentagon equation in A ⊗ A ⊗ A. Proof. Taking into account the multiplication rule of H(L) we find (ej #ε) ⊗ (ei(2) #ei(1) · e∗j ) ⊗ (1#e∗i ) R23 R12 = i,j
and R12 R13 R23 =
(ea eb #ε) ⊗ (ec #e∗a ) ⊗ (1#e∗b ∗ e∗c )
a,b,c
so we have to prove the equality ej ⊗ ei(2) ⊗ ei(1) · e∗j ⊗ e∗i = ea eb ⊗ ec ⊗ e∗a ⊗ e∗b ∗ e∗c i,j
(6.35)
a,b,c
in L ⊗ L ⊗ L∗ ⊗ L∗ . Fix indices x, y, z, t ∈ {1, · · · , n}, and evaluate (6.35) at e∗x ⊗ e∗y ⊗ ez ⊗ et . (6.35) is then equivalent to e∗y , et(2) e∗x , ez et(1) = e∗x , ez eb e∗b ∗ e∗y , et b
Applying the definition of the convolution product e∗b e∗y , et = e∗b , et(1) e∗y , et(2) and the dual basis formula
eb e∗b , et(1) = et(1)
b
we find that (6.35) holds, as needed. We will now prove that
282
6 Hopf modules and the pentagon equation
U=
(S(ei )#ε) ⊗ (1#e∗i ) i
is the inverse of R, where S is the antipode of L. As H(L)⊗H(L) is isomorphic to n2 ×n2 -matrix algebra over k, it is enough to prove that RU = 1⊗1. Indeed, (ei S(ej )#ε) ⊗ (1#e∗i e∗j ) RU = i,j
Hence, we have to prove the formula ei S(ej ) ⊗ e∗i e∗j = 1 ⊗ ε i,j
which holds, as for indices x, y = 1, · · · , n we have e∗x , ei S(ej )e∗i e∗j , ey = e∗x , ei S(ej )e∗i , ey(1) e∗j , ey(2) i,j
i,j
e∗x , ei e∗i , ey(1) S(ej e∗j , ey(2) ) = i,j
=
e∗x , ey(1) S(ey(2) )
= e∗x , 1ε, ey 1 From now let R = R ⊗ R2 ∈ A ⊗ A be an invertible solution of the pentagon equation, on a finite dimensional algebra A. The subspaces AR,l = {a ∈ A | R(a⊗1A ) = a⊗1A } and AR,r = {a ∈ A | (1A ⊗a)R = 1A ⊗a} are called the spaces of left, respectively right, R-invariants of A. R(l) = { a∗ , R2 R1 | a∗ ∈ A∗ } and R(r) = { a∗ , R1 R2 | a∗ ∈ A∗ } are called the spaces of left, respectively right coefficients of R. We will denote them as follows P = P (A, R) = R(l) ; H = H(A, R) = R(r) . m Assume now that R = i=1 ai ⊗ bi , where m is as small as possible. Then m is called the length of R and will be denoted l(R) = m. From the choice of m, the sets {ai | i = 1, · · · , m}, respectively {bi | i = 1, · · · , m} are linear independent in A and hence bases of R(l) , respectively R(r) . In particular, dim(R(l) )=dim(R(r) )=l(R). Two elements R and S ∈ A ⊗ A are called equivalent (we will write R ∼ S) if there exists u ∈ U (A) an invertible element of u −1 A such that S then l(R)=l(S). Indeed, m S = R := (u ⊗ u)R(u ⊗ u) . If R ∼ m let R = i=1 ai ⊗ bi where m = l(R). Then S = i=1 uai u−1 ⊗ ubi u−1 and hence l(S) ≤ l(R); in a similar way we obtain that l(R) ≤ l(S). In particular,
6.4 The structure of finite dimensional Hopf algebras
283
if {ai | i = 1, · · · , m} is a basis of R(l) , then {uai u−1 | i = 1, · · · , m} is a basis of S(l) = u R(l) . Now consider a∗i ∈ P ∗ and b∗i ∈ H ∗ defined by a∗i , aj = δij = b∗i , bj i.e. {ai , a∗i } and {bi , b∗i } are dual bases of P and H. Extend a∗i : P → k and b∗i : H → k to respectively ωi∗ : A → k and λ∗i : A → k. We then have m
ωk , ai bi = bk and
i=1
m
aj λk , bj = ak
(6.36)
j=1
for all k = 1, · · · , m. We will use two different notations for R: R=
m
ai ⊗ bi =
i=1
m
aj ⊗ bj
j=1
when we are interested in the basis elements of P and H, and the generic notation R= R1 ⊗ R2 = r1 ⊗ r2 = r ; U = U 1 ⊗ U 2 = R−1 1 Theorem 70. Let A be a finite dimensional algebra, R = R ⊗R2 ∈ A⊗A an invertible solution of the pentagon equation and P = P (A, R) = R(l) , H = H(A, R) = R(r) the subspaces of coefficients of R. 1. P and H are unitary subalgebras of A and Hopf algebras with comultiplication given by ∆P (x) = ∆r (x) = R−1 (1A ⊗ x)R and ∆H (y) = ∆l (y) = R(y ⊗ 1A )R−1 (6.37) for all x ∈ P , y ∈ H. Furthermore, the k-linear map f : P ∗ → H, f (p∗ ) = p∗ , R1 R2 (6.38) is an isomorphism of Hopf algebras. 2. The k-linear map F : H(P ) → A,
F (p#p∗ ) =
p∗ , R1 R2 p
is an algebra map and R = (F ⊗ F )(R) where R ∈ H(P )⊗H(P ) is the canonical element associate of the Heisenberg double.
284
6 Hopf modules and the pentagon equation
3. The multiplication on A defines isomorphisms AR,r ⊗ P ∼ =A
(resp. H ⊗ AR,l ∼ = A)
of right P -modules (resp. left H-modules). In particular, A is free as a right P -module and as a left H-module and dim(P ) = dim(H) =
dim(A) dim(A) = = l(R) dim(AR,l ) dim(AR,r )
4. If f : (A, R) → (B, S) is an isomorphism in Pent, then the Hopf algebras P (A, R) and P (B, S) are isomorphic. Consequently, if S ∈ A ⊗ A is equivalent to R, then the Hopf algebras P (A, R) and P (A, S) are isomorphic. Proof. 1. We will use the notation introduced above. First we will prove that P (resp. H) are unitary subalgebras in A and subcoalgebras of Ar = (A, ∆r ) (resp. Al = (A, ∆l )). This will follow from the formulas: ap aq =
m
λp ∗l λq , bj aj ∈ P
(6.39)
j=1 m
∆r (ap ) =
λp , bi bj ai ⊗ aj ∈ P ⊗ P
(6.40)
i,j=1
and b p bq =
m
ωp ∗r ωq , aj bj ∈ H
(6.41)
j=1 m
∆l (bp ) =
ωp , ai aj bi ⊗ bj ∈ H ⊗ H
(6.42)
i,j=1
for all p, q = 1, · · · , m. We prove (6.39) and (6.40), leaving (6.41) and (6.42) to the reader. m m aj λp ∗l λq , bj = aj λp , R1 bj U 1 λq , R2 U 2 j=1
j=1 m = (Id ⊗ λp ⊗ λq )( aj ⊗ R1 bj U 1 ⊗ R2 U 2 ) j=1
(6.31)
= (Id ⊗ λp ⊗ λq )(R23 R12 (R23 )−1 ) = (Id ⊗ λp ⊗ λq )(R12 R13 ) m aj ak ⊗ bj ⊗ bk ) = (Id ⊗ λp ⊗ λq )( j,k=1
=
m j,k=1
aj λp , bj ak λq , bk = ap aq
6.4 The structure of finite dimensional Hopf algebras
285
i.e. P is a subalgebra of A. On the other hand m aj λp , bj ) ∆r (ap ) = ∆r (
(6.32)
=
(6.36)
j=1 m
U 1 R1 ⊗ U 2 aj R2 λp , bj
j=1 m = (Id ⊗ Id ⊗ λp )( U 1 R1 ⊗ U 2 aj R2 ⊗ bj ) j=1
(6.31)
= (Id ⊗ Id ⊗ λp )((R12 )−1 R23 R12 ) = (Id ⊗ Id ⊗ λp )(R13 R23 ) m ai ⊗ aj ⊗ bi bj ) = (Id ⊗ Id ⊗ λp )( i,j=1
=
m
ai ⊗ aj λp , bi bj
i,j=1
and P is subcoalgebra of (A, ∆r ). A similar computation yields (6.41) and (6.42), proving that H is a subalgebra of A and a subcoalgebra of (A, ∆l ). Moreover R ∈ P ⊗ H so, for any positive integer t, there exist scalars αij ∈ k such that m Rt = αij ai ⊗ bj (6.43) i,j=1
We will prove now that 1A ∈ P and 1A ∈ H. As A is finite dimensional, A can embeded into a matrix algebra A ⊂ Mn (k), where n =dim(A). We view R ∈ A ⊗ A ⊂ Mn (k) ⊗ Mn (k) ∼ = Mn2 (k) R is invertible, and it follows from the Cayley-Hamilton Theorem that the identity matrix In2 can be written as a linear combination of powers of R. Using (6.43), we find 1A ⊗ 1A =
m
γi,j ai ⊗ aj
i,j=1
for some γi,j ∈ k. Hence in Mn2 (k), the identity matrix In2 can be representated as a linear combinations of powers of R. Hence, using (6.43), we obtain in A ⊗ A a linear combination 1A ⊗ 1A =
m i,j=1
γi,j ai ⊗ bj
286
6 Hopf modules and the pentagon equation
for some γi,j ∈ k. Let p : A → k be the projection of A onto the one dimensional subspace spanned by 1A . Then 1A =
m
ai p, γi,j bj =
i,j=1
m
p, γi,j ai bj ∈ P ∩ H
i,j=1
i.e. P and H are unitary subalgebras of A and hence we can view U = R−1 ∈ P ⊗ H. The counit and the antipode of P and H are defined by the formulas: U 1 b∗k , U 2 εP : P → k, εP (ak ) = b∗k , 1A , SP : P → P, SP (ak ) = (6.44) and εH : H → k, εH (bk ) = a∗k , 1A , SH : H → H, SH (bk ) = a∗k , U 1 U 2 (6.45) for all k = 1, · · · , m. We will prove that P is a Hopf algebra, the fact that H is a Hopf algebra is proved in a similar way. First, we remark that, as H is a subalgebra of A, (6.40) can be rewritten as ∆r (ap ) =
m
b∗p , bi bj ai ⊗ aj
(6.46)
i,j=1
Now, for p = 1, · · · , m we have m
(Id ⊗ εP )∆r (ap ) =
ai b∗p , bi bj b∗j , 1A
(6.46)
i,j=1
=
m
ai b∗p , bi bj b∗j , 1A =
i,j=1
m
ai b∗p , bi = ap
i=1
i.e. (IP ⊗ εP )∆r = Id. A similar computation shows that (εP ⊗ IP )∆r = Id, and εP is a counit. Using (6.46), we find that SP is a right convolution inverse of IP : (Id ⊗ SP )∆r (ap ) =
n
ai b∗p , bi bj U 1 b∗j , U 2
i,j=1
=
=
n
ai U 1 b∗p , bi bj b∗j , U 2
i,j=1 n
ai U 1 b∗p , bi U 2 = (Id ⊗ b∗p )(RR−1 )
i=1
= 1A b∗p , 1A = εP (ap )1A
6.4 The structure of finite dimensional Hopf algebras
287
From the fact that P is finite dimensional, it follows that SP is an antipode of P . Let us prove now that f : P ∗ → H is an isomorphism of Hopf algebras. f (a∗j ) =
m
a∗j , ai bi = bj
i=1
so f is an isomorphism of vector spaces. Let us prove that f is an algebra map: m m εP , ai bi = b∗i , 1A bi = 1A f (1P ∗ ) = f (εP ) = i=1
i=1
(6.41) can be rewritten as b p bq =
m
a∗p ∗r a∗q , aj bj
j=1
which means that
f (a∗p )f (a∗q ) = f (a∗p ∗r a∗q )
for all p, q = 1, · · · , m, i.e. f is an algebra isomorphism. Let us prove now that f is also a coalgebra map. We recall the definition of the comultiplication ∆P ∗ : ∆P ∗ (a∗p ) = X1 ⊗ X2 ∈ P ∗ ⊗ P ∗ if and only if a∗p , xy =
X 1 , xX 2 , y
for all x, y ∈ P . It follows that f (X 1 ) ⊗ f (X 2 ) = X 1 , R1 R2 ⊗ X 2 , r1 r2 (f ⊗ f )∆P ∗ (a∗p ) = =
a∗p , R1 r1 R2 ⊗ r2 =
m
a∗p , ai aj bi ⊗ bj
i,j=1
(6.42)
= ∆H (bp ) = (∆H ◦ f )(a∗p )
i.e. f is also a coalgebra map. Hence, we have proved that f is an isomorphism of bialgebras, and as P and H are finite dimensional it is also a isomorphism of Hopf algebras (see [172]). 2. We remark that F (ai #a∗j ) =
m
a∗j , at bt ai = bj ai
t=1
for all i, j = 1, · · · , m. The fact that F is an algebra map can be proved directly using this formula; another way to proceed is to use the universal property of the Heisenberg double H(P ) for the diagram
288
6 Hopf modules and the pentagon equation
H(P ) iP
I @ @ iP ∗ @ @ @
P
F
P∗
@ @ @ u @ @ R
v ? A Here u : P → A is the usual inclusion and v : P ∗ → A is the composition v = f ◦ j, where f : P ∗ → H is the isomorphism from part 1) and j : H → A is the usual inclusion. We only have to prove that the compatibility condition (6.34) holds, i.e. hv(g ∗ ) = v(h(1) · g ∗ )h(2) for any h ∈ P and g ∗ ∈ P ∗ , which turns out to be hg ∗ , R1 R2 = g ∗ , R1 h(1) R2 h(2) or, equivalently
R1 ⊗ hR2 =
R1 h(1) ⊗ R2 h(2) .
This equation holds, as ∆P (h) = R−1 (1H ⊗ h)R, for any h ∈ P . m Now let R = i=1 (ai #εP ) ⊗ (1A #a∗i ) be the canonical element of H(P ) ⊗ H(P ). Then (F ⊗ F )(R) =
m
εP , at bt ai ⊗ bi =
i,t=1
m
b∗t , 1A bt ai ⊗ bi =
i,t=1
m
ai ⊗ bi = R.
i=1
3. Consider the map ρ = ρP : A → P ⊗ A,
ρ(a) = (1A ⊗ a)R =
R ⊗ aR = 1
2
m
ai ⊗ abi
i=1
for all a ∈ A. We will show that (A, ·, ρP ) ∈ P MP is a right-left P -Hopf module, where the structure of right P -module is just the multiplication · of A. Indeed, for a ∈ A we have (Id ⊗ ρ)ρ(a) = R1 ⊗ ρ(aR2 ) = R1 ⊗ r1 ⊗ aR2 r2 = (1A ⊗ 1A ⊗ a)R13 R23 (6.31) = (1A ⊗ 1A ⊗ a)(R12 )−1 R23 R12 = U 1 r1 ⊗ U 2 R1 r2 ⊗ aR2 = R−1 (1A ⊗ R1 )R ⊗ aR2 = ∆P (R1 ) ⊗ aR2 = (∆P ⊗ Id)ρ(a)
6.4 The structure of finite dimensional Hopf algebras
and
m
εP , ai abi =
i=1
m
289
b∗i , 1A abi = a
i=1
so (A, ρ) is a left P -comodule. We still have to prove the compatibility relation ρ(a)∆P (ai ) = (1A ⊗ a)RR−1 (1A ⊗ ai )R = (1A ⊗ aai )R = ρ(aai ) for all i = 1, · · · , m. Hence, (A, ·, ρP ) ∈ P MP and the coinvariants Aco(P ) = {a ∈ A | ρ(a) = 1 ⊗ a} = AR,r the right R-invariants of A. From the right-left version of the Fundamental Theorem of Hopf modules it follows that the multiplication on A, µ : AR,r ⊗ P → A,
µ(a ⊗ x) = ax
defines an isomorphism of P -Hopf modules, and, in particular, of right P modules. We recall that AR,r ⊗ P is a right P -module via (a ⊗ x) · y = a ⊗ xy, for all a ∈ AR,r , x, y ∈ P . It follows that A is free as a right P -module and dim(A) = dim(P )dim(AR,r ). In a similar way we can show that (A, ·, ρH ) ∈ plication on A and ρH : A → A ⊗ H,
H HM ,
ρH (a) = R(a ⊗ 1A ) =
where · is the multi-
R1 a ⊗ R2
for all a ∈ A. Moreover, Aco(H) = AR,l . If we apply once again the fundamental Theorem of Hopf modules (this time the left-right version) we obtain the other part of the statement. 4. isomorphism such that S = (f ⊗ f )(R) = nLet f : A → B is an algebra −1 1 f (a ) ⊗ f (b ). Then S = f (U ) ⊗ f (U 2 ). It follows that {f (ai ) | i i i=1 i = 1, · · · , n} is a basis of P (B, S) and hence the restriction of f to P (A, R) gives an algebra isomorphism between P (A, R) and P (B, S) that is also a coalgebra map since f (U 1 )f (R1 ) ⊗ f (U 2 )f (ai )f (R2 ) (f ⊗ f )∆P (A,R) (ai ) = = S −1 (1 ⊗ f (ai ))S = ∆P (B,S) (f (ai )) for all i = 1, · · · , n. The last statement is obtain taking B = A and f : A → A, f (x) = uxu−1 for all x ∈ A. Using Theorems 69 and 70, we obtain the following Corollary, which is a pure algebraic version of [10, Theorem 4.7]: the role of the operator VG on a local compact group G is played by the canonical element of the Heisenberg double.
290
6 Hopf modules and the pentagon equation
Corollary 38. Let A be a finite dimensional algebra and R ∈ A ⊗ A an invertible element. Then R is a solution of the pentagon equation if and only if there exists a finite dimensional Hopf algebra L and an algebra map F : H(L) → A such that R = (F ⊗ F )(R), where R is the canonical element associated to the Heisenberg double H(L). Remarks 21. 1. Part 3. of Theorem 70 is a Lagrange type theorem, useful to evaluate the dimension of the Hopf algebra P (A, R) coming from a solution of the pentagon equation. Let (A, R) ∈ Pent. It follows from Corollary 38 and Proposition 138 that there exists an algebra map F : Mn (k) → A, where j n = l(R). As Mn (k) is a simple algebra, F is injective. nHence aij = F (ei ) = 0 ∈ A, i, j = 1, · · · , n; then aij akl = δjk ail and 1A = i=1 aii . It follows from the Reconstruction Theorem of the matrix algebra ([113, Theorem 17.5]) that there exists an algebra isomorphism A∼ = Mn (B), where B = {x ∈ A | xaij = aij x, ∀i, j = 1, · · · , n}. Hence A is a matrix algebra if R is non-trivial (l(R) > 1 or, equivalently, R = 1A ⊗ 1A ). Furthermore, dim(A) = n2 dim(B) and hence, l(R)2 |dim(A). 2. We can compute the space of integrals on P , (resp. on H); we have to use the space AR,r of right R-coinvariants of A (resp. AR,l ). Let a ∈ AR,r and χ : A → H be an arbitrary right H-linear map. Then ϕ(ai ) = b∗i , χ(a)
ϕ : P → k,
is a right integral on P . Indeed, a ∈ AR,r , hence a ⊗ 1A = χ is right H-linear we obtain χ(a) ⊗ 1A =
n
n i=1
abi ⊗ ai . As
χ(a)bi ⊗ ai
(6.47)
i=1
Now, for ap ∈ P we have ϕ((ap )(1) ) ⊗ (ap )(2) = b∗p , bi bj b∗i , χ(a) ⊗ aj i,j
= (6.47)
=
b∗p , χ(a)bj ⊗ aj
j b∗p , χ(a)
⊗ 1A = ϕ(ap ) ⊗ 1A
which shows that ϕ is right P -colinear i.e. a right integral on P . Similarly, if b ∈ AR,l and ψ : A → P is an arbitrary left P -linear map, γ : H → k, is a left integral on H.
γ(bi ) = a∗i , χ(b)
6.4 The structure of finite dimensional Hopf algebras
291
Theorem 71. Let L be a finite dimensional Hopf algebra. Then there exists an isomorphism of Hopf algebras L∼ = P (H(L), R) where R is the canonical element of the Heisenberg double H(L). Proof. Let {ei | i = 1, · · · , m} be a basis of L, {e∗i | i = 1, · · · , m} the dual basis of L∗ , and R=
m
(ei #εL ) ⊗ (1L #e∗i ) ∈ H(L) ⊗ H(L)
i=1
the canonical element. We have to prove that the Hopf algebra P (H(L), R) extracted from part 1. of Theorem 70 is isomorphic to L, with the initial structure of Hopf algebra. Of course, iL : L → H(L),
iL (l) = l#εL
is an injective algebra map. We identify L∼ = Im(iL ) = L#εL From the construction, P (H(L), R) is the subalgebra of H(L) having {ei #εL | i = 1, · · · , m} as a basis; i.e. there exists an algebra isomorphism L ∼ = Im(iL ) = P (H(L), R). It remains to prove that the coalgebra structure (resp. the antipode) of P (H(L), R) extracted from Theorem 70 is exactly the original coalgebra structure (resp. the antipode) of L. As the counit and the antipode of a Hopf algebra are uniquely determined by the multiplication and the comultiplication, the only thing left to show is the fact that, via the above identification, ∆P = ∆L . This means that ∆L (ei #εL ) = R−1 (1H(L) ⊗ ei #εL )R or, equivalently, R∆L (ei #εL ) = (1L #εL ) ⊗ (ei #εL ) R Now we compute m (ej #εL ) ⊗ (1L #e∗j ) (1L #εL ) ⊗ (ei #εL ) R = (1L #εL ) ⊗ (ei #εL )
j=1
=
m j=1
On the other hand
(ej #εL ) ⊗ (ei(2) #ei(1) · e∗j )
292
6 Hopf modules and the pentagon equation
R∆L (ei #εL ) =
m
(ej #εL ) ⊗ (1L #e∗j )
(ei(1) #εL ) ⊗ (ei(2) #εL )
j=1
=
m
(ej ei(1) #εL ) ⊗ (ei(2) #e∗j )
j=1
Hence, we have to show m
ej ei(1) ⊗ ei(2) ⊗
e∗j
=
j=1
m
ej ⊗ ei(2) ⊗ ei(1) · e∗j
(6.48)
j=1
For indices a, b, k ∈ {1, · · · , m}, evaluate (6.48) at e∗a ⊗ e∗b ⊗ ek . (6.48) is then equivalent to e∗a , ek ei(1) e∗b , ei(2) =
m
e∗a , ej e∗b , ei(2) ei(1) · e∗j , ek
j=1
and this is easily verified: m
e∗a , ej e∗b , ei(2) ei(1) · e∗j , ek =
j=1
=
m j=1 m
e∗a , ej e∗b , ei(2) e∗j , ek ei(1) e∗a , ej e∗j , ek ei(1) e∗b , ei(2)
j=1
= e∗a , ek ei(1) e∗b , ei(2) It follows that ∆L = ∆P and L ∼ = P (H(L), R) as Hopf algebras. Let L be a finite dimensional Hopf algebra. Proposition 138 proves that there exists an algebra isomorphism H(L) ∼ = Mn (k), where n = dim(L). Via this isomorphism the canonical element R ∈ H(L)⊗H(L) is viewed as an element of Mn (k) ⊗ Mn (k), or as a matrix of Mn2 (k). We will now give the data which show us how any finite dimensional Hopf m algebra is constructed. Let R = i=1 Ai ⊗ Bi ∈ Mn (k) ⊗ Mn (k) be an invertible solution of the pentagon equation such that the sets of matrices {Ai | i = 1, · · · m} and {Bi | i = 1, · · · m} are linearly independent over k. Let {Bi∗ | i = 1, · · · m} be the dual basis of {Bi | i = 1, · · · m} and write U = R−1 = U 1 ⊗ U 1. The Hopf algebra P (Mn (k), R) is described as follows: – as an algebra P (Mn (k), R) is the subalgebra of the n × n-matrix algebra Mn (k) with {Ai | i = 1, · · · m} as a k-basis; – the coalgebra structure and the antipode of P (Mn (k), R) are given by the following formulas:
6.4 The structure of finite dimensional Hopf algebras
293
∆ : P (Mn (k), R) → P (Mn (k), R) ⊗ P (Mn (k), R), ∆(Ai ) = R−1 (In ⊗ Ai )R
(6.49)
Bi∗ , In
(6.50)
ε : P (Mn (k), R) → k, S : P (Mn (k), R) → P (Mn (k), R),
ε(Ai ) = S(Ai ) =
Bi∗ , U 2 U 1
(6.51)
for all i = 1, · · · , m. Theorems 70 and 71 imply the following Structure Theorem for finite dimensional Hopf algebras. Theorem 72. L is a finite dimensional Hopf algebra if and only if there exist a positive integer n and an invertible solution of the pentagon equation R ∈ Mn (k) ⊗ Mn (k) ∼ = Mn2 (k) such that L ∼ = P (Mn (k), R). Furthermore, dim(L) = l(R) =
n2 dim(Mn (k)R,r )
where Mn (k)R,r is the subspace of right R-invariants of Mn (k). Remark 22. Let L be a Hopf algebra with a comultiplication ∆ and R ∈ L⊗L an invertible element. On the algebra L, Drinfeld ([75]) introduced a new comultiplication ∆R given by ∆R (l) = R−1 ∆(l)R for all l ∈ L. Let LR := L as an algebra and having ∆R as a comultiplication. If LR is a structure of Hopf algebra it is called a twist of L. It was proved in [75] that if R is a Harrison cocycle1 , i.e. (∆ ⊗ Id)(R)(R ⊗ 1) = (Id ⊗ ∆)(R)(1 ⊗ R) (ε ⊗ Id)(R) = (Id ⊗ ε)(R) = 1
(6.52)
then LR is a Hopf algebra, i.e. a twist of L. The twist construction plays a crucial role in the theory of finite dimensional triangular semisimple Hopf algebras classification ([82]). Let Mn (k) be the matrix algebra having the trivial bialgebra structure (without counit) ∆ : Mn (k) → Mn (k) ⊗ Mn (k),
∆(x) = In ⊗ x
for all x ∈ Mn (k). Any subalgebra of Mn (k) is a subbialgebra. Theorem 72 and the comultiplication (6.37) show that any finite dimensional Hopf algebra L, viewed as a subalgebra of the matrix algebra, is obtained as a twist in the sense of Drinfeld: the trivial bialgebra structure of Mn (k) is twisted by an invertible element R. An important difference with the previous situation is that R is not a Harrison cocycle in the sense of (6.52): R is not a solution of the equation R23 R12 = R13 R23 ) but a solution of the pentagon equation. 1
The fact that (6.52) is a Harrison cocycle condition is shown in [37]: in the literature (see e.g. [82]), (6.52) called the twist equation.
294
6 Hopf modules and the pentagon equation
Let n be a positive integer. We have proved that an n-dimensional Hopf algebra L is isomorphic to a P (n, R) for R ∈ Mn (k) ⊗ Mn (k) an invertible solution of the pentagon equation such that l(R) = n. We are now going to prove the Classification Theorem for finite dimensional Hopf algebras. Let Pentn be the set Pentn = {R ∈ Mn (k) ⊗ Mn (k) | (Mn (k), R) ∈ Pent and l(R) = n}. Theorem 73. Let n be a positive integer. Then there exists a one to one correspondence between the set of types of n-dimensional Hopf algebras and the set of the orbits of the action GLn (k) × Pentn → Pentn ,
(u, R) → (u ⊗ u)R(u ⊗ u)−1 .
(6.53)
Proof. In part 5. of the Theorem 70 we proved that there exists a Hopf algebra isomorphism P (Mn (k), R) ∼ = P (Mn (k), u R) for any u ∈ GLn (k), which means that all the Hopf algebras associated to the elements of an orbit of the action (6.53) are isomorphic. We will now prove the converse. First we show that two finite dimensional Hopf algebras L1 and L2 are isomorphic if and only if (H(L1 ), RL1 ) and (H(L2 ), RL2 ) are isomorphic as objects in Pent. Indeed, let f : L1 → L2 be a Hopf algebra isomorphism. Then, f ∗ : L∗2 → L∗1 , f ∗ (l∗ ) = l∗ ◦ f is an isomorphism of Hopf algebras and f˜ : H(L1 ) → H(L2 ),
f˜(h#h∗ ) := f (h)#(f ∗ )−1 (h∗ ) = f (h)#h∗ ◦ f −1
for all h ∈ L1 , h∗ ∈ L∗1 is an algebra isomorphism. Indeed, let h, g ∈ L1 and h∗ , g ∗ ∈ L∗1 ; using the fact that f is an algebra map, we have f (h(2) )f (g)# h∗ (h(1) · g ∗ ) ◦f −1 f˜ (h#h∗ )(g#g ∗ ) = and as f is a coalgebra map f (h(2) )f (g)# h∗ ◦ f −1 f (h(1) ) · (g ∗ ◦ f −1 ) f˜(h#h∗ )f˜(g#g ∗ ) = It follows that f˜ is an algebra map, since for any l ∈ L2 we have h∗ ◦ f −1 f (h(1) ) · (g ∗ ◦ f −1 ) , l = h∗ ◦ f −1 , l(1) g ∗ ◦ f −1 , l(2) f (h(1) ) = h∗ , f −1 (l(1) )g ∗ , f −1 (l(2) )h(1) = h∗ (h(1) · g ∗ ) , f −1 (l) = h∗ (h(1) · g ∗ ) ◦f −1 , l On the other hand, if {ei , e∗i } is a dual basis of L1 , then {f (ei ), e∗i ◦ f −1 } is a dual basis of L2 and hence (f˜ ⊗ f˜)(RL1 ) = RL2 , and this proves
6.4 The structure of finite dimensional Hopf algebras
295
that $\tilde f$ is an isomorphism in Pent. Let $n_i = \dim(L_i)$, $i = 1, 2$. Using Proposition 138 we obtain that $(H(L_1), R_{L_1}) \cong (H(L_2), R_{L_2})$ if and only if $(M_{n_1}(k), R_1) \cong (M_{n_2}(k), R_2)$ in Pent, where $R_i$ is the image of $R_{L_i}$ under the algebra isomorphism $H(L_i) \cong M_{n_i}(k)$. Now, the two matrix algebras $M_{n_1}(k)$ and $M_{n_2}(k)$ are isomorphic if and only if $n_1 = n_2$, and the Skolem-Noether theorem tells us that any automorphism $g$ of the matrix algebra $M_{n_1}(k)$ is inner: there exists $u \in GL_{n_1}(k)$ such that $g(x) = g_u(x) = uxu^{-1}$. Hence $(M_{n_1}(k), R_1) \cong (M_{n_2}(k), R_2)$ in Pent if and only if $n_1 = n_2$ and there exists $u \in GL_{n_1}(k)$ such that $R_2 = (g_u \otimes g_u)(R_1) = (u \otimes u)R_1(u \otimes u)^{-1}$, i.e. $R_2$ is equivalent to $R_1$, as needed.
We conclude with a few examples, illustrating our general method for finding invertible solutions $R \in M_n(k) \otimes M_n(k)$ of the pentagon equation. Let $A = (a_{ij}), B = (b_{ij}) \in M_n(k)$. We recall that, via the canonical isomorphism $M_n(k) \otimes M_n(k) \cong M_{n^2}(k)$, $A \otimes B$ viewed as a matrix of $M_{n^2}(k)$ is given by
$$A \otimes B = \begin{pmatrix} a_{11}B & a_{12}B & \cdots & a_{1n}B \\ a_{21}B & a_{22}B & \cdots & a_{2n}B \\ \vdots & \vdots & & \vdots \\ a_{n1}B & a_{n2}B & \cdots & a_{nn}B \end{pmatrix} \qquad (6.54)$$
Let $(e^j_i)_{i,j=1,\dots,n}$ be the canonical basis of $M_n(k)$. An element $R \in M_n(k) \otimes M_n(k)$ can be written as
$$R = \sum_{i,j=1}^{n} e^j_i \otimes A_{ij} \qquad (6.55)$$
for some matrices $A_{ij} \in M_n(k)$. Using formula (6.54), $R$ viewed as a matrix in $M_{n^2}(k)$ is given by the Kronecker product
$$R = \begin{pmatrix} A_{11} & A_{12} & \cdots & A_{1n} \\ A_{21} & A_{22} & \cdots & A_{2n} \\ \vdots & \vdots & & \vdots \\ A_{n1} & A_{n2} & \cdots & A_{nn} \end{pmatrix} \qquad (6.56)$$
and we can quickly check whether $R$ is invertible ($\det(R) \neq 0$). A large class of invertible $R$ is obtained by choosing $(A_{ij})$ such that $R$ is upper triangular (i.e. $A_{ij} = 0$ for all $i > j$) and $A_{ii}$ is invertible in $M_n(k)$ for all $i = 1, \dots, n$. The next Proposition clarifies the condition for $R = (A_{ij})_{i,j=1,\dots,n} \in M_{n^2}(k) \cong M_n(k) \otimes M_n(k)$ to be a solution of the pentagon equation.
Proposition 139. Let $n$ be a positive integer and let $R = (A_{ij})_{i,j=1,\dots,n} \in M_{n^2}(k) \cong M_n(k) \otimes M_n(k)$, $A_{ij} \in M_n(k)$, be an invertible matrix. Then $R$ is a solution of the pentagon equation if and only if
296
6 Hopf modules and the pentagon equation
$$\sum_{j=1}^{n} A_{ij} \otimes A_{jp} = R\,(A_{ip} \otimes I_n)\,R^{-1} \qquad (6.57)$$
for all $i, p = 1, \dots, n$.
Proof. Taking into account the multiplication rule for elementary matrices, we find
$$R^{12}R^{13}R^{23} = \sum_{i,j,p,r,s=1}^{n} e^p_i \otimes A_{ij}e^s_r \otimes A_{jp}A_{rs}, \qquad
R^{23}R^{12} = \sum_{a,b,i,p=1}^{n} e^p_i \otimes e^b_a A_{ip} \otimes A_{ab}.$$
Hence, $R$ is a solution of the pentagon equation if and only if
$$\sum_{a,b=1}^{n} e^b_a A_{ip} \otimes A_{ab} = \sum_{j,r,s=1}^{n} A_{ij}e^s_r \otimes A_{jp}A_{rs} \qquad (6.58)$$
or, equivalently,
$$R\,(A_{ip} \otimes I_n) = \Bigl(\sum_{j=1}^{n} A_{ij} \otimes A_{jp}\Bigr)R \qquad (6.59)$$
for all $i, p = 1, \dots, n$. Viewing $R$ as a matrix in $M_{n^2}(k)$ (see (6.56)) and using (6.54), we find that the pentagon equation (6.59) can be rewritten in $M_{n^2}(k)$. As $R$ is invertible, (6.59) is equivalent to
$$\sum_{j=1}^{n} A_{ij} \otimes A_{jp} = R\,(A_{ip} \otimes I_n)\,R^{-1} = \Delta_l(A_{ip}),$$
which means that the matrix $(A_{ij})$ is comultiplicative with respect to $\Delta_l$.
Example 26. In this example we will show that the two constructions of bialgebras arising from solutions of the pentagon equation, via Theorems 67 and 70, give very different objects. Let $R \in M_4(k) \cong M_2(k) \otimes M_2(k)$ be given by
$$R = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$$
Viewing $R$ as an element of $M_2(k) \otimes M_2(k)$, we have
$$R = I_2 \otimes I_2 + e_{12} \otimes e_{21},$$
where $\{e_{ij}\}$ is the canonical basis of the matrix algebra $M_2(k)$ (see Section 5.1). As $\mathrm{char}(k) = 2$, $R^{-1} = R$. It follows from Proposition 133 that $R$ is an invertible solution of the pentagon equation if and only if $\mathrm{char}(k) = 2$, and in this case $P(R) = B(\tau \circ R \circ \tau)$ is a five dimensional noncommutative noncocommutative bialgebra. It is easy to see that the Hopf algebra $P(M_2(k), R)$, obtained by applying Theorem 70 to $R$, is the group ring $kC_2$, where $C_2 = \{1, g\}$ is the group with two elements. Indeed, from the construction it follows that $P(M_2(k), R)$ is the subalgebra of $M_2(k)$ with $\{I_2, e_{21}\}$ as a basis. If we denote $g = I_2 - e_{21}$, then $g^2 = I_2$, and $P(M_2(k), R)$ and $kC_2$ are isomorphic as Hopf algebras (using the fact that $\mathrm{char}(k) = 2$ in the formula for $\Delta_P$).
Example 27. Let $n$ be a positive integer and
$$A = e_{12} + e_{23} + \cdots + e_{n-1,n} + e_{n1} \in M_n(k).$$
Let $R \in M_n(k) \otimes M_n(k)$ be given by
$$R = e_{11} \otimes I_n + e_{22} \otimes A + \cdots + e_{nn} \otimes A^{n-1} \qquad (6.60)$$
A routine computation shows that R is an invertible solution of the pentagon equation with inverse R−1 = e11 ⊗ In + e22 ⊗ An−1 + · · · + enn ⊗ A.
(6.61)
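For small $n$, this routine computation is easy to confirm on a machine. The NumPy sketch below is ours, not the authors': it takes $A$ to be the cyclic shift with $A m_j = m_{j+1 \pmod n}$ (our reading of the basis convention), builds $R$ as the block-diagonal matrix of (6.60) together with the inverse predicted by (6.61), and checks the pentagon equation $R^{12}R^{13}R^{23} = R^{23}R^{12}$ numerically.

```python
import numpy as np

def pentagon_holds(R, n):
    """Check R12 R13 R23 = R23 R12 for R acting on k^n (tensor) k^n."""
    I = np.eye(n)
    R12, R23 = np.kron(R, I), np.kron(I, R)
    # swap of two tensor legs; conjugating R12 by the (2,3)-swap gives R13
    swap = np.eye(n * n).reshape(n, n, n, n).transpose(0, 1, 3, 2).reshape(n * n, n * n)
    S23 = np.kron(I, swap)
    R13 = S23 @ R12 @ S23
    return np.allclose(R12 @ R13 @ R23, R23 @ R12)

n = 3
A = np.roll(np.eye(n), 1, axis=0)        # cyclic shift: A m_j = m_{j+1 (mod n)}, A^n = I_n
R = np.zeros((n * n, n * n))
Rinv = np.zeros((n * n, n * n))
for i in range(n):                       # R = sum_i e_ii (tensor) A^{i-1}, as in (6.60);
    blk = slice(i * n, (i + 1) * n)      # inverse exponents run backwards, as in (6.61)
    R[blk, blk] = np.linalg.matrix_power(A, i)
    Rinv[blk, blk] = np.linalg.matrix_power(A, (n - i) % n)

assert pentagon_holds(R, n)
assert np.allclose(R @ Rinv, np.eye(n * n))
# the flip map is NOT a pentagon solution, so the test is not vacuous
flip = np.eye(n * n).reshape(n, n, n, n).transpose(0, 1, 3, 2).reshape(n * n, n * n)
assert not pentagon_holds(flip, n)
```

The check is purely mechanical: the Long and pentagon "leg" operators are just Kronecker products, so no symbolic algebra is needed for a fixed small $n$.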
Then $P(M_n(k), R) \cong (kG)^*$, the Hopf algebra of functions on the cyclic group $G$ with $n$ elements.
Proof. We will prove that $H = H(M_n(k), R) \cong kG$, the group algebra of $G$, and then use the duality between $P(M_n(k), R)$ and $H(M_n(k), R)$ given by Theorem 70. We remark that $R$ is already written in the form $R = \sum_{i=1}^{n} A_i \otimes B_i$, with $(A_i)$ and $(B_i)$ linearly independent. Then $H(M_n(k), R)$ is the commutative subalgebra of $M_n(k)$ with basis $\{I_n, A, A^2, \dots, A^{n-1}\}$. We also note that $A^n = I_n$. Using (6.61), (6.37), and (6.45), we obtain that the comultiplication, the counit and the antipode of $H$ are given by
$$\Delta_H(A) = A \otimes A, \qquad \varepsilon_H(A) = 1, \qquad S_H(A) = A^{n-1} = A^{-1},$$
i.e. $H \cong kG$.
Example 28. Let $R \in M_{16}(k) \cong M_4(k) \otimes M_4(k)$ be the upper triangular matrix given by
$$R = \begin{pmatrix} I_4 & 0 & B & 0 \\ 0 & A & 0 & C \\ 0 & 0 & A & 0 \\ 0 & 0 & 0 & I_4 \end{pmatrix}$$
where
$$A = \begin{pmatrix} 0&1&0&0\\ 1&0&0&0\\ 0&0&0&1\\ 0&0&1&0 \end{pmatrix}, \qquad
B = \begin{pmatrix} 0&0&0&0\\ 0&0&0&0\\ 1&0&0&0\\ 0&-1&0&0 \end{pmatrix}, \qquad
C = \begin{pmatrix} 0&0&0&0\\ 0&0&0&0\\ 0&-1&0&0\\ 1&0&0&0 \end{pmatrix}.$$
If we view R ∈ M4 (k) ⊗ M4 (k), then R is given by R = (e11 + e44 ) ⊗ I4 + (e22 + e33 ) ⊗ (e21 + e12 + e43 + e34 ) + e31 ⊗ (e13 − e24 ) + e42 ⊗ (e14 − e23 )
(6.62)
$R$ is an invertible solution of the pentagon equation, $H(M_4(k), R) \cong H_4$, and hence $P(M_4(k), R) \cong H_4^* \cong H_4$, where $H_4$ is Sweedler's four dimensional Hopf algebra.
Proof. Similar to the proof in Example 27. The inverse of $R$ is
$$R^{-1} = (e_{11} + e_{44}) \otimes I_4 + (e_{22} + e_{33}) \otimes (e_{21} + e_{12} + e_{43} + e_{34}) + e_{31} \otimes (e_{14} - e_{23}) + e_{42} \otimes (e_{24} - e_{13}),$$
and therefore the Hopf algebra $H = H(M_4(k), R)$ is the four dimensional subalgebra of $M_4(k)$ with $k$-basis
$$\{I_4,\; e_{21} + e_{12} + e_{43} + e_{34},\; e_{14} - e_{23},\; e_{13} - e_{24}\}.$$
Now, writing $x = e_{13} - e_{24}$ and $g = e_{21} + e_{12} + e_{43} + e_{34}$, we find that $x^2 = 0$,
g 2 = I4 ,
gx = −xg = e14 − e23
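The claims of this example can be verified numerically. In the sketch below (ours), `E(i, j)` is the elementary matrix with a single 1 in row $i$, column $j$; this is our reading of the text's $e_{ij}$ convention, chosen so that the displayed block matrix is upper triangular.

```python
import numpy as np

def E(i, j, n=4):
    """Elementary matrix with a 1 in row i, column j (1-based)."""
    M = np.zeros((n, n)); M[i - 1, j - 1] = 1.0
    return M

I4 = np.eye(4)
g = E(1, 2) + E(2, 1) + E(3, 4) + E(4, 3)
x = E(3, 1) - E(4, 2)                    # the element written e13 - e24, in our reading

# R from (6.62); the first tensor factors give the block positions
R = (np.kron(E(1, 1) + E(4, 4), I4) + np.kron(E(2, 2) + E(3, 3), g)
     + np.kron(E(1, 3), E(3, 1) - E(4, 2)) + np.kron(E(2, 4), E(4, 1) - E(3, 2)))

def pentagon_ok(R, n):
    I = np.eye(n)
    R12, R23 = np.kron(R, I), np.kron(I, R)
    swap = np.eye(n * n).reshape(n, n, n, n).transpose(0, 1, 3, 2).reshape(n * n, n * n)
    R13 = np.kron(I, swap) @ R12 @ np.kron(I, swap)
    return np.allclose(R12 @ R13 @ R23, R23 @ R12)

assert pentagon_ok(R, 4)
# algebra relations of H(M4(k), R)
assert np.allclose(x @ x, 0) and np.allclose(g @ g, I4)
assert np.allclose(g @ x, -x @ g)
# comultiplication Delta(A) = R (A tensor I4) R^{-1}, as in (6.37)
Rinv = np.linalg.inv(R)
Delta = lambda A: R @ np.kron(A, I4) @ Rinv
assert np.allclose(Delta(g), np.kron(g, g))
assert np.allclose(Delta(x), np.kron(x, g) + np.kron(I4, x))
```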
On the other hand, the formula for the comultiplication of $H$ given by (6.37), namely $\Delta(A) = R(A \otimes I_4)R^{-1}$ for all $A \in H$, gives, using the expression of $R^{-1}$,
$$\Delta(g) = g \otimes g, \qquad \Delta(x) = x \otimes g + I_4 \otimes x,$$
i.e. $H \cong H_4$, the Sweedler four dimensional Hopf algebra (see Example 9 4)).
Theorem 73 opens a new road for describing the isomorphism types of Hopf algebras of a given dimension. The first step, and the most important one, is however the development of a new Jordan-type theory (we call it restricted Jordan theory). From the point of view of actions, the classical Jordan theory gives the most elementary description of the representatives of the orbits of the action $GL_n(k) \times M_n(k) \to M_n(k)$,
(U, A) → U AU −1 .
The restricted Jordan theory refers to the following open problem: Problem: Describe the orbits of the action GLn (k)×(Mn (k)⊗Mn (k)) → Mn (k)⊗Mn (k), (U, R) → (U ⊗U )R(U ⊗U )−1 .
We recall that the canonical Jordan form $J_A$ of a matrix $A$ is the matrix equivalent to $A$ which has the greatest number of zeros. For practical reasons, in the restricted Jordan theory we are in fact interested in finding the elements of each orbit that have the greatest number of zeros. Of these, we retain only those which are invertible solutions of the pentagon equation. The set of types of $n$-dimensional Hopf algebras will be those Hopf algebras associated (using Theorem 72) to the solutions of length $n$ (or, equivalently, those for which the space of coinvariant elements is $n$-dimensional); all other Hopf algebras will have a dimension that is a divisor of $n^2$. We mention that, as a general rule, the set of types of $n$-dimensional Hopf algebras is infinite (this was proved recently in [7], [12], [90]). If however we limit ourselves to classifying certain special types of Hopf algebras, then this set can be finite. For instance, the set of types of $n$-dimensional semisimple and cosemisimple Hopf algebras is finite ([168]).
The restricted Jordan theory is also involved in the classification theory of separable algebras (see the last chapter). This has to do with those orbits whose representatives $R = R^1 \otimes R^2 \in M_n(k) \otimes M_n(k)$ are solutions of the separability equation $R^{12}R^{23} = R^{23}R^{13} = R^{13}R^{12}$ and $R^1R^2 = I_n$.
A new interesting direction related to the study of the pentagon equation is a general representation theory, whose objects are defined below. Let $(M, R)$ be a pair, where $M$ is a vector space and $R \in \mathrm{End}(M \otimes M)$ is a solution of the pentagon equation. A representation of $(M, R)$, or an $(M, R)$-module, is a pair $(V, \psi_V)$, where $V$ is a vector space and $\psi_V : M \otimes V \to V \otimes M$ is a $k$-linear map such that
$$\psi_V^{12}\,\tau_{M,V}^{23}\,(\tau_{M,M}R)^{12} = (\tau_{M,M}R)^{23}\,\psi_V^{12}\,\psi_V^{23}$$
as maps $M \otimes M \otimes V \to V \otimes M \otimes M$. A morphism of two $(M, R)$-modules $(V, \psi_V)$, $(W, \psi_W)$ is a $k$-linear map $f : V \to W$ such that $(f \otimes \mathrm{Id}_M)\psi_V = \psi_W(\mathrm{Id}_M \otimes f)$. We write ${}_{(M,R)}\mathcal{M}$ for the monoidal category of $(M, R)$-modules, where the monoidal structure is given by
$$(V, \psi_V) \otimes (W, \psi_W) = (V \otimes W,\; \psi_W^{23}\,\psi_V^{12}).$$
We will see that the category of representations of a Hopf algebra, or more generally of an algebra, is a subcategory of ${}_{(M,R)}\mathcal{M}$.
Examples 12. 1. Let $L$ be a Hopf algebra and $R_L : L \otimes L \to L \otimes L$ the canonical solution of the pentagon equation, $R_L(g \otimes h) = g_{(1)} \otimes g_{(2)}h$, for all $g, h \in L$. Then any left $L$-module $(V, \cdot)$ has a natural structure of $(L, R_L)$-module with the map
$$\psi_V : L \otimes V \to V \otimes L,$$
ψV (g ⊗ v) = g(2) · v ⊗ g(1)
Indeed, for $g, h \in L$ and $v \in V$, we have
$$\psi_V^{12}\,\tau_{L,V}^{23}\,(\tau_{L,L}R_L)^{12}(g \otimes h \otimes v) = (\tau_{L,L}R_L)^{23}\,\psi_V^{12}\,\psi_V^{23}(g \otimes h \otimes v) = g_{(3)}h_{(2)} \cdot v \otimes g_{(2)}h_{(1)} \otimes g_{(1)}.$$
2. Now let A be a k-algebra and R = RA : A ⊗ A → A ⊗ A,
R(a ⊗ b) = a ⊗ ba
the solution of the pentagon equation constructed in Proposition 127. Let V be a vector space and · : V ⊗ A → V a k-linear map. We define ψV = ψ(V,·) : A ⊗ V → V ⊗ A,
ψV (a ⊗ v) = v · a ⊗ a
Then (V, ψ(V,·) ) is an (A, RA )-module if and only if (V, ·) is a right A-module in the usual sense. Indeed, the statement follows from the formulas (τA,A RA )23 ψV12 ψV23 (a ⊗ b ⊗ v) = (v · b) · a ⊗ ba ⊗ a and 23 ψV12 τA,V (τA,A RA )12 (a ⊗ b ⊗ v) = v · (ba) ⊗ ba ⊗ a
for all $a, b \in A$, $v \in V$. Hence the category ${}_{(M,R)}\mathcal{M}$ generalizes the usual category of modules over an algebra $A$.
7 Long dimodules and the Long equation
In this Chapter, we will show that the nonlinear equation $R^{12}R^{23} = R^{23}R^{12}$ (called the Long equation) can be associated to the category ${}_H\mathcal{L}^H$ of Long dimodules over a bialgebra $H$. The Long equation is obtained from the quantum Yang-Baxter equation by deleting the middle term from both sides. Our theory is similar to the one developed in Chapters 5 and 6, where we discussed how Yetter-Drinfeld modules and Hopf modules are connected to the quantum Yang-Baxter equation and the pentagon equation, respectively. A different approach to solving the Long equation, in a general monoidal category, is given in [16].
7.1 The Long equation Definition 15. Let M be a vector space over a field k. R ∈ End(M ⊗ M ) is called a solution of the Long equation if R12 R23 = R23 R12
(7.1)
in $\mathrm{End}(M \otimes M \otimes M)$. For later use, we rewrite the Long equation in matrix form. The proof is left to the reader.
Proposition 140. Let $\{m_1, \dots, m_n\}$ be a basis of $M$, and let $R, S \in \mathrm{End}(M \otimes M)$ be given by their matrices $(x^{ij}_{uv})$ and $(y^{ij}_{uv})$, i.e.
$$R(m_u \otimes m_v) = x^{ij}_{uv}\, m_i \otimes m_j \quad\text{and}\quad S(m_u \otimes m_v) = y^{ij}_{uv}\, m_i \otimes m_j.$$
Then $R^{23}S^{12} = S^{12}R^{23}$ if and only if
$$x^{ij}_{vk}\, y^{pv}_{ql} = x^{\alpha j}_{lk}\, y^{pi}_{q\alpha} \qquad (7.2)$$
In particular, $R$ is a solution of the Long equation if and only if
$$x^{ij}_{vk}\, x^{pv}_{ql} = x^{\alpha j}_{lk}\, x^{pi}_{q\alpha} \qquad (7.3)$$
S. Caenepeel, G. Militaru, and S. Zhu: LNM 1787, pp. 301–316, 2002. © Springer-Verlag Berlin Heidelberg 2002
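For a concrete family, the componentwise criterion can be cross-checked against the operator form of the Long equation; the NumPy sketch below is ours (the index placement follows our reading of (7.3), with summation over the repeated indices $v$ and $\alpha$), using operators of the form $R(m_u \otimes m_v) = a_{uv}\, m_u \otimes m_v$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
a = rng.standard_normal((n, n))

# components x^{ij}_{uv} of R(m_u tensor m_v) = a_uv m_u tensor m_v
x = np.zeros((n, n, n, n))            # x[i, j, u, v]
for u in range(n):
    for v in range(n):
        x[u, v, u, v] = a[u, v]

# (7.3): sum_v x^{ij}_{vk} x^{pv}_{ql} = sum_alpha x^{alpha j}_{lk} x^{pi}_{q alpha}
lhs = np.einsum('ijvk,pvql->ijpkql', x, x)
rhs = np.einsum('ajlk,piqa->ijpkql', x, x)
assert np.allclose(lhs, rhs)

# operator form: R12 R23 = R23 R12, with R12 = R tensor I and R23 = I tensor R
R = x.reshape(n * n, n * n)           # row index (i, j), column index (u, v)
I = np.eye(n)
R12, R23 = np.kron(R, I), np.kron(I, R)
assert np.allclose(R12 @ R23, R23 @ R12)
```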
In Proposition 141, we give some other nonlinear equations that are equivalent to the Long equation. Proposition 141. Let M be a vector space and R ∈ End(M ⊗ M ). The following statements are equivalent 1. R is a solution of the Long equation; 2. T = Rτ is a solution of the equation T 12 T 13 = T 23 T 13 τ (123) 3. U = τ R is a solution of the equation U 13 U 23 = τ (123) U 13 U 12 4. W = τ Rτ is a solution of the equation τ (123) W 23 W 13 = W 12 W 13 τ (123) Proof. 1. ⇔ 2. As R = T τ , R is a solution of the Long equation if and only if T 12 τ 12 T 23 τ 23 = T 23 τ 23 T 12 τ 12 (7.4) Now τ 12 T 23 τ 23 = T 13 τ 13 τ 12 ,
and τ 23 T 12 τ 12 = T 13 τ 12 τ 13
and (7.4) is equivalent to T 12 T 13 τ 13 τ 12 = T 23 T 13 τ 12 τ 13 The equivalence of 1. and 2. follows since τ 12 τ 13 τ 12 τ 13 = τ (123) . 1. ⇔ 3. R = τ U , so R is a solution of the Long equation if and only if τ 12 U 12 τ 23 U 23 = τ 23 U 23 τ 12 U 12
(7.5)
Using the fact that τ 12 U 12 τ 23 = τ 23 τ 13 U 13 ,
and τ 23 U 23 τ 12 = τ 23 τ 12 U 13 .
we find that (7.5) is equivalent to U 13 U 23 = τ 13 τ 12 U 13 U 12 and we are done since τ 13 τ 12 = τ (123) . 1. ⇔ 4. R = τ W τ is a solution of the Long equation if and only if τ 12 W 12 τ 12 τ 23 W 23 τ 23 = τ 23 W 23 τ 23 τ 12 W 12 τ 12 Using the formulas
(7.6)
$$\tau^{12}\tau^{23} = \tau^{13}\tau^{12}, \qquad \tau^{23}\tau^{12} = \tau^{13}\tau^{23},$$
$$\tau^{12}W^{12}\tau^{13} = \tau^{12}\tau^{13}W^{23}, \qquad \tau^{12}W^{23}\tau^{23} = W^{13}\tau^{12}\tau^{23},$$
$$\tau^{23}W^{23}\tau^{13} = \tau^{23}\tau^{13}W^{12}, \qquad \tau^{23}W^{12}\tau^{12} = W^{13}\tau^{23}\tau^{12},$$
we find that (7.6) is equivalent to τ 12 τ 13 W 23 W 13 τ 12 τ 23 = τ 23 τ 13 W 12 W 13 τ 23 τ 12 proving the equivalence of 1. and 4. since τ 13 τ 23 τ 12 τ 13 = τ 23 τ 12 τ 23 τ 12 = τ (123) Examples 13. 1. If R ∈ End(M ⊗ M ) is bijective, then R is a solution of the Long equation if and only if R−1 is also a solution of the Long equation. 2. Let (mi )i∈I be a basis of M and (aij )i,j∈I be a family of scalars of k. Then, R : M ⊗ M → M ⊗ M , R(mi ⊗ mj ) = aij mi ⊗ mj , for all i, j ∈ I, is a solution of the Long equation. In particular, the identity map IdM ⊗M is a solution of the Long equation. 3. Let M be a finite dimensional vector space and u an automorphism of M . If R is a solution of the Long equation then u R = (u ⊗ u)R(u ⊗ u)−1 is also a solution of the Long equation. 4. Let R ∈ M4 (k) given by
$$R = \begin{pmatrix} a & 0 & 0 & 0 \\ 0 & b & c & 0 \\ 0 & d & e & 0 \\ 0 & 0 & 0 & f \end{pmatrix}$$
A direct computation shows that R is a solution of the Long equation if and only if c = d = 0. In particular, if q ∈ k, q = 0, q = 1, the two dimensional Yang-Baxter operator Rq is a solution of the QYBE and is not a solution for the Long equation. 5. Let G be a group and M be a left kG-module. Assume also that M is G-graded, i.e. M = ⊕σ∈G Mσ where the Mσ are subspaces of M . If the Mσ are kG-submodules of M , then the map R : M ⊗ M → M ⊗ M, R(n ⊗ m) = σ · n ⊗ mσ , ∀n, m ∈ M (7.7) σ
is a solution of the Long equation. If G is non-abelian, then R is not a solution of the QYBE.
304
7 Long dimodules and the Long equation
It suffices to show that (7.1) holds for homogenous elements. Let mσ ∈ Mσ , mτ ∈ Mτ and mθ ∈ Mθ . Then R23 R12 (mσ ⊗ mτ ⊗ mθ ) = R23 (τ · mσ ⊗ mτ ⊗ mθ ) = τ · mσ ⊗ θ · mτ ⊗ mθ and R12 R23 (mσ ⊗ mτ ⊗ mθ ) = R12 (mσ ⊗ θ · mτ ⊗ mθ ) = τ · mσ ⊗ θ · mτ ⊗ mθ and it follows that R is a solution of the Long equation. On the other hand R12 R13 R23 (mσ ⊗ mτ ⊗ mθ ) = τ θ · mσ ⊗ θ · mτ ⊗ mθ and R23 R13 R12 (mσ ⊗ mτ ⊗ mθ ) = θτ · mσ ⊗ θ · mτ ⊗ mθ and we see that R is not a solution of the QYBE if G is not abelian.
7.2 The FRT Theorem for the Long equation We reconsider the Long dimodules introduced in Section 4.5, in the situation where the underlying algebra A and coalgebra C are equal to a given bialgebra H. For completeness sake, we recall the Definition. In the case where H is commutative and cocommutative, it is due to Long [118]. Definition 16. Let H be a bialgebra. A (left-right) Long H-dimodule is a threetuple (M, ·, ρr ), where (M, ·) is a left H-module and (M, ρr ) is a right H-comodule such that ρ(h · m) = h · m[0] ⊗ m[1]
(7.8)
for all h ∈ H and m ∈ M . The category of H-dimodules and H-linear H-colinear maps will be denoted by H LH . Examples 14. 1. Let G be a group. A left-right kG-dimodule M is a kGmodule M , together with a family {Mσ | σ ∈ G} of kG-submodules of M such that M = ⊕σ∈G Mσ (cf. Example 13 5)). Indeed, we know that M is a right kG-comodule if M = ⊕σ∈G Mσ , with every Mσ a subspace of M . The compatibility relation (7.8) means exactly that the Mσ are kG-submodules of M . Let us now suppose that M is a decomposable representation on G with the Long length smaller or equal to the cardinal of G. Let X be a subset of G
7.2 The FRT Theorem for the Long equation
305
and {Mx | x ∈ X} a family of idecomposable k[G]-submodules of M such that M = ⊕x∈X Mx . Then M is a right comodule over k[X], and as X ⊆ G, M can be viewed as a right k[G]-comodule. Hence, M ∈ k[G] Lk[G] . We obtain that the category of representations on G with the Long length smaller or equal to the cardinal of G can be viewed as a subcategory of k[G] Lk[G] . 2. For any left H-module (N, ·), N ⊗ H ∈ H LH with structure: ρ(n ⊗ l) = n ⊗ l(1) ⊗ l(2)
h · (n ⊗ l) = h · n ⊗ l,
for all h, l ∈ H, n ∈ N . Thus we have a functor G=•⊗H :
HM
→ H LH
which is a right adjoint of the forgetful functor F :
H HL
→ HM
3. For any (M, ρ) be a right H-comodule (M, ρr ), H ⊗ M ∈ structure
H HL
with
ρH⊗M (l ⊗ m) = l ⊗ m[0] ⊗ m[1]
h · (l ⊗ m) = hl ⊗ m,
for all h, l ∈ H, m ∈ M . We obtain a functor F = H ⊗ • : MH → H LH which is a left adjoint of the forgetful functor G:
H HL
→ MH
4. Let (N, ·) be a left H-module. Then, with the trivial structure of right H-comodule, ρr : N → N ⊗ H, ρ(n) = n ⊗ 1, for all n ∈ N , (N, ·, ρ) ∈ H LH . 5. Let (M, ρ) be a right H-comodule. Then, with the trivial structure of left H-module, h · m = ε(h)m, for all h ∈ H, m ∈ M , (M, ·, ρ) ∈ H LH . Remarks 22. 1. Let M , N ∈ structure maps are given by
H HL .
h · (m ⊗ n) = h(1) · m ⊗ h(2) · n,
M ⊗ N is also an object in
H HL ,
the
ρ(m ⊗ n) = m[0] ⊗ n[0] ⊗ m[1] n[1]
for all h ∈ H, m ∈ M , n ∈ N . k ∈ H LH with the trivial structure h · a = ε(h)a,
ρ(a) = a ⊗ 1
for all h ∈ H, a ∈ k. It is easy to show that (H LH , ⊗, k) is a monoidal category. 2. We have a natural functor H HL
→ H⊗H ∗ M
306
7 Long dimodules and the Long equation
Any Long dimodule M is a left H ⊗ H ∗ -module; the left H ⊗ H ∗ -action is given by (h ⊗ h∗ ) · m :=< h∗ , m[1] > hm[0] for all h ∈ H, h∗ ∈ H ∗ and m ∈ M . If H is finite dimensional, then the categories H LH and H⊗H ∗ M are isomorphic. The next Proposition is a generalization of Example 13 5). Proposition 142. Let H be a bialgebra and (M, ·, ρ) be a Long H-dimodule. Then the natural map R(M,·,ρ) (m ⊗ n) = n[1] · m ⊗ n[0] is a solution of the Long equation. Proof. Let R = R(M,·,ρ) . For l, m, n ∈ M we have R12 R23 (l ⊗ m ⊗ n) = R12 l ⊗ n[1] · m ⊗ n[0] = (n[1] · m)[1] · l ⊗ (n[1] · m)[0] ⊗ n[0] = m[1] · l ⊗ n[1] · m[0] ⊗ n[0] = R23 m[1] · l ⊗ m[0] ⊗ n = R23 R12 (l ⊗ m ⊗ n) and it follows that R is a solution of the Long equation. Lemma 30. Let H be a bialgebra, (M, ·) a left H-module and (M, ρ) a right H-comodule. Then the set {h ∈ H | ρ(h · m) = h · m[0] ⊗ m[1] , ∀m ∈ M } is a subalgebra of H. Proof. Straightforward. As a direct consequence of Lemma 30, we find that a vector space M on which H acts and coacts is a Long dimodule if and only if the compatibility condition (7.8) is satisfied for m running through a basis of M , and h running through a set of algebra generators of H. Now we will present the FRT Theorem for the Long equation: in the finite dimensional case, every solution R of the Long equation is of the form R = R(M,·,ρ) , where (M, ·, ρ) is a Long dimodule over a bialgebra D(R). As one might expect, the proof is similar to the corresponding proofs of the FRT Theorems for the quantum Yang-Baxter equation and the pentagon equation. Theorem 74. Let M be a finite dimensional vector space and R ∈ End(M ⊗ M ) a solution of the Long equation. Then
7.2 The FRT Theorem for the Long equation
307
1. There exists a bialgebra D(R) acting and coacting on M (with structure maps · and ρ) such that (M, ·, ρ) ∈ D(R) LD(R) , and R = R(M,·,ρ) 2. The bialgebra D(R) is universal with respect to this property: if H is a bialgebra acting and coacting on M (with structure maps · and ρ ) such that (M, · , ρ ) ∈ H LH and R = R(M,· ,ρ ) then there exists a unique bialgebra map f : D(R) → H such that ρ = (I ⊗ f )ρ and a · m = f (a) · m, for all a ∈ D(R), m ∈ M . Proof. 1. As usual, let {m1 , · · · , mn } be a basis for M and (xij uv ) the matrix of R, i.e. R(mu ⊗ mv ) = xij (7.9) uv mi ⊗ mj Let (C, ∆, ε) = Mn (k) be the comatrix coalgebra of order n. The tensor algebra T (C) has a unique bialgebra structure, with comultiplication and counit extending ∆ and ε. Following the arguments in the proof of Theorem 66, we find a left T (C)- action and a right T (C)-coaction on M such that R = R(M,·,ρ) . The action and coaction are given by the formulas ρ(ml ) = mv ⊗ cvl cju · ml = xij lu mi
(7.10) (7.11)
We now define the obstructions oij kl , measuring how far M is from being a Long dimodule over T (C). Keeping in mind the fact that T (C) is generated as an algebra by the cij , and using Lemma 30, we restrict ourselves to computing h · m[0] ⊗ m[1] − ρ(h · m) for h = cjk , and m = ml : h · m[0] ⊗ m[1] = cjk · ml[0] ⊗ ml[1] ij v v = cjk · mv ⊗ cvl = xij vk mi ⊗ cl = mi ⊗ xvk cl
and αj i ρ(h · m) = ρ(cjk · ml ) = xαj lk mα[0] ⊗ mα[1] = mi ⊗ xlk cα
Writing ij v αj i oij kl = xvk cl − xlk cα
(7.12)
h · m[0] ⊗ m[1] − ρ(h · m) = mi ⊗ oij kl
(7.13)
we find Let I be the two-sided ideal of T (C) generated by all oij kl . We claim that I is a bi-ideal of T (C) and I · M = 0. The fact that I is also a coideal will result from the following formula:
308
7 Long dimodules and the Long equation ij uj u i ∆(oij kl ) = oku ⊗ cl + cu ⊗ okl
(7.14)
First we observe that (7.12) can be rewritten as ij v vj i oij kl = xvk cl − xlk cv
so ij v vj i u u ∆(oij kl ) = xvk cu ⊗ cl − xlk cu ⊗ cv u = xij cv ⊗ cul − ciu ⊗ xvj lk cv vk u vj i uj uj v i = oij ku + xuk cv ⊗cul − cu ⊗ −okl + xvk cl uj u i = oij ku ⊗ cl + cu ⊗ okl
proving (7.14) holds. Now ij ij ε oij kl = xlk − xlk = 0 so we have proved that I is a coideal of T (C). I · M = 0 will follow from the fact that R is a solution of the Long equation. For any n ∈ M , we compute R23 R12 (n ⊗ mk ⊗ mj ) = R23 (cα k · n ⊗ mα ⊗ mj ) s = cα k · n ⊗ cj · mα ⊗ ms rs = cα k · n ⊗ xαj mr ⊗ ms rs α = xαj ck · n ⊗ mr ⊗ ms
and
R12 R23 (n ⊗ mk ⊗ mj ) = R12 (n ⊗ csj · mk ⊗ ms )
= R12 (n ⊗ xαs kj mα ⊗ ms ) r = xαs kj cα · n ⊗ mr ⊗ ms
so it follows that α αs r R23 R12 − R12 R23 (n ⊗ mk ⊗ mj ) = xrs αj ck − xkj cα ·n ⊗ mr ⊗ ms = ors jk · n ⊗ mr ⊗ ms
(7.15)
Now R is a solution of the Long equation, hence ors jk · n = 0, for all n ∈ M , so can and I · M = 0. Now we define D(R) = T (C)/I D(R) coacts on M via the canonical projection T (C) → D(R), and D(R) acts on M since I · M = 0. The cij generate D(R) and oij kl = 0 in D(R), so
7.2 The FRT Theorem for the Long equation
309
we find from (7.13) that (M, ·, ρ) ∈ D(R) LD(R) and R = R(M,·,ρ) . 2. Let H be a bialgebra and suppose that (M, · , ρ ) ∈ H LH is such that R = R(M,· ,ρ ) . Let (cij )i,j=1,···,n be a family of elements in H such that ρ (ml ) = mv ⊗ cv l Then and Let
R(mv ⊗ mu ) = cj u · mv ⊗ m j j ij cj u · mv = xvu mi = cj · mv αj i v o kl = xij vk cl − xlk cα ij
From the universal property of the tensor algebra T (C), it follows that there exists a unique algebra map f1 : T (C) → H such that f1 (cij ) = ci j , for all ij H ij i, j = 1, · · · , n. Now (M, · , ρ ) ∈ H L , so 0 = o kl = f1 (okl ), for all i, j, k, l = 1, · · · , n. So the map f1 factorizes through a map f : D(R) → H,
f (cij ) = cij
Obviously we have, for any l ∈ {1, · · · , n}, (I ⊗ f )ρ(ml ) = mv ⊗ f (cvl ) = mv ⊗ cvl = ρ (ml ) v
v
so ρ = (I ⊗ f )ρ. Conversely, (I ⊗ f )ρ = ρ necessarily implies f (cij ) = cij , proving the uniqueness of f . This completes the proof of the theorem. Remark 23. In the graded algebra T (Mn (k)) the obstruction elements oij kl are of degree one, i.e. are elements of the comatrix coalgebra Mn (k). This will lead us in the next section to the study of some special functions defined only for a coalgebra, which will also play an important role in solving the Long equation. Examples 15. 1. Let a, b, c ∈ k and R ∈ M4 (k) given by equation (5.10). R is then a solution of the quantum Yang-Baxter equation and the Long equation. We will describe the bialgebra D(R), which is obtained considering R as a solution for the Long equation. If (b, c) = (0, 0) then R = 0 and D(R) = T (M4 (k)). Suppose now that (b, c) = (0, 0), and write R(mu ⊗ mv ) =
2
xij uv mi ⊗ mj
i,j=1
the only xij uv that are different from zero are
310
7 Long dimodules and the Long equation
x21 21
x11 x11 11 = ab, 12 = ac, 11 = ab, x12 = c, x12 22 = b,
x12 12 = ab, 21 x22 = ac,
x11 21 = b, 22 x22 = ab
(7.16)
The sixteen relations oij kl = 0 are the following ones, written in the lexicografical in (i, j, k, l).
0 = 0,
abc11 + bc21 = abc11 ,
abc12 + bc22 = bc11 + abc12
acc11 + cc21 = acc11 ,
acc12 + cc22 = cc11 + acc12
0 = 0,
abc21 = abc21 , 0 = 0,
abc11 + bc21 = abc11 ,
abc22 = abc22 , 0 = 0,
abc12 + bc22 = bc11 + abc12
acc21 = acc21 ,
abc21 = abc21 ,
acc22 = cc21 + acc22
abc22 = abc22 + bc21
The only nontrivial relations are bc21 = 0,
bc22 = bc11 ,
cc21 = 0,
cc22 = cc11
As, (b, c) = (0, 0), these reduce to c21 = 0,
c22 = c11
Now, if we denote c11 = x, c12 = y we obtain that D(R) can be described as follows: – As an algebra D(R) = k < x, y >, the free algebra generated by x and y. – The comultiplication ∆ and the counit ε are given by ∆(x) = x ⊗ x,
∆(y) = x ⊗ y + y ⊗ x,
ε(x) = 1,
ε(y) = 0.
We observe that the bialgebra D(R) does not depend on the parameters a, b, c. 2. Take q ∈ k, and consider the solution Rq ∈ M4 (k) of the Hopf equation constructed in Proposition 135. Rq is also a solution of the Long equation, as Rq has the form f ⊗ g with f g = gf . In Proposition 135 we have described the bialgebra B(Rq ), using the FRT Theorem for the Hopf equation. The description of D(Rq ), using the FRT Theorem for the Long equation is much simpler, and does not depend on the parameter q. – As an algebra D(Rq ) = k < x, y >, the free algebra generated by x and y. – x and y are grouplike elements. This describes the coalgebra structure. This construction follows from a computation similar to the one in the previous example. The only surviving obstruction relations are c21 = 0,
c12 = q(c11 − c22 )
We obtain the above construction after we put c11 = x and c22 = y.
7.3 Long coalgebras
311
7.3 Long coalgebras In this Section we will define Long maps on coalgebras: if C is a coalgebra and I a coideal of C, then a Long map is a k-linear map σ : C⊗C/I → k satisfying (7.17). This condition ensures that, for any right C-comodule (M, ρ), the natural map R(σ,M,ρ) is a solution of the Long equation. Conversely, over a finite dimensional vector space M , any solution of the Long equation arises in this way. Hence Long maps on a coalgebra may be viewed as a Long equation analog of coquasitriangular bialgebras. The image of c ∈ C in the quotient coalgebra will be denoted by c. If (M, ρ) is a right C-comodule, then then (M, ρ) is a right C/I-comodule via ρ(m) = m[0] ⊗ m[1] , for all m ∈ M . Definition 17. Let C be a coalgebra and I be a coideal of C. A k-linear map σ : C ⊗ C/I → k is called a Long map if σ(c(1) ⊗ d) c(2) = σ(c(2) ⊗ d) c(1)
(7.17)
for all c, d ∈ C. If I = 0, then σ is called proper Long map and (C, σ) is called a Long coalgebra. Examples 16. 1. If C is cocommutavive then any k-linear map σ : C ⊗ C/I → k is a Long map. In particular, any cocommutative coalgebra is a Long coalgebra for any σ : C ⊗ C → k. 2. Let C be a coalgebra, I a coideal, and f ∈ Homk (C/I, k). Then σf : C ⊗ C/I → k,
σf (c ⊗ d) = ε(c)f (d)
for all c, d ∈ C, is a Long map. In particular, any coalgebra has a trivial structure of Long coalgebra via σ : C ⊗ C → k, σ(c ⊗ d) = ε(c)ε(d). 3. Let C = Mn (k) be the comatrix coalgebra of order n. For any a ∈ k, the map σ : C ⊗ C → k, σ(cij ⊗ cpq ) = δji a is a proper Long map. Indeed, for c = cij , d = cpq , we have σ(c(1) ⊗ d)c(2) = σ(cit ⊗ cpq )ctj = acij , and σ(c(2) ⊗ d)c(1) = σ(ctj ⊗ cpq )cit = acij , Proposition 143. Let σ : C ⊗ C/I → k a Long map, and consider a right C-comodule (M, ρ). Then the map R(σ,M,ρ) : M ⊗ M → M ⊗ M,
R(σ,M,ρ) (m ⊗ n) = σ(m[1] ⊗ n[1] )m[0] ⊗ n[0]
is a solution of the Long equation.
312
7 Long dimodules and the Long equation
Proof. A straightforward computation. We write R = R(σ,M,ρ) . For all l, m, n ∈ M , we have R12 R23 (l ⊗ m ⊗ n) = R12 σ(m[1] ⊗ n[1] )l ⊗ m[0] ⊗ n[0] = σ(m[2] ⊗ n[1] )σ(l[1] ⊗ m[1] )l[0] ⊗ m[0] ⊗ n[0] (7.17)
= σ(l[1] ⊗ m[2] )σ(m[1] ⊗ n[1] )l[0] ⊗ m[0] ⊗ n[0] = R23 σ(l[1] ⊗ m[1] )l[0] ⊗ m[0] ⊗ n = R23 R12 (l ⊗ m ⊗ n)
so R is a solution of the Long equation. Theorem 75. Let M be an n-dimensional vector space and R ∈ End(M ⊗M ) a solution of the Long equation. 1. There exists a coideal I(R) of the comatrix coalgebra Mn (k), a coaction ρ of Mn (k) on M , and a unique Long map σ : Mn (k) ⊗ Mn (k)/I(R) → k such that R = R(σ,M,ρ) . Furthermore, if R is bijective, σ is convolution invertible in Homk (Mn (k) ⊗ Mn (k)/I(R), k). 2. If R is commutative or Rτ = τ R, then there exists a Long coalgebra (L(R), σ ˜ ) and a coaction ρ˜ of L(R) on M such that R = R(˜σ,M,ρ) ˜ . Proof. 1. As before, let {m1 , · · · , mn }, and write R in matrix form R(mu ⊗ mv ) = xij uv mi ⊗ mj
(7.18)
The comatrix coalgebra Mn (k) coacts on M : ρ(ml ) = mv ⊗ cvl Let I(R) be the k-subspace of Mn (k) generated by the oij kl . We know from (7.14) that I(R) is a coideal of Mn (k). We will first prove that σ is unique. Let σ : Mn (k) ⊗ Mn (k)/I(R) → k be a Long map such that R = Rσ . Then Rσ (mv ⊗ mu ) = σ (mv )[1] ⊗ (mu )[1] (mv )[0] ⊗ (mu )[0] = σ(civ ⊗ cju )mi ⊗ mj and it follows using (7.18) that σ(civ ⊗ cju ) = xij uv and σ is completely determined. We will now prove the existence of σ. First define
(7.19)
7.3 Long coalgebras
σ0 : Mn (k) ⊗ Mn (k) → k,
313
σ0 (civ ⊗ cju ) = xij uv
We have to show that σ0 factorizes through a map σ : Mn (k) ⊗ Mn (k)/I(R) → k To this end, it suffices to show that σ0 (Mn (k) ⊗ I(R)) = 0. This can be seen as follows ij αj p v p i σ0 (cpq ⊗ oij kl ) = xvk σ0 (cq ⊗ cl ) − xlk σ0 (cq ⊗ cα ) pv αj pi = xij vk xql − xlk xqα = 0
We still have to show that σ is a Long map. For c = cij and d = cpq , we find v σ(c(1) ⊗ d) c(2) = σ(civ ⊗ cpq ) cvj = xip vq cj
and p αp i i σ(c(2) ⊗ d) c(1) = σ(cα j ⊗ cq ) cα = xjq cα
so σ(c(1) ⊗ d) c(2) − σ(c(2) ⊗ d) c(1) = oip qj = 0 and it follows that σ is a Long-map. ij ) be a family of Suppose now that R is bijective and let S = R−1 . Let (yuv scalars of k such that ij S(mu ⊗ mv ) = yuv mi ⊗ m j ,
S is the inverse of R, so αβ pi αβ p i xpi αβ yqj = δq δj = yαβ xqj
We define σ0 : Mn (k) ⊗ Mn (k) → k,
ij σ0 (civ ⊗ cju ) = yvu
First we prove that σ0 is a convolution inverse of σ0 . We easily compute that σ0 (cpq )(1) ⊗ (cij )(1) σ0 (cpq )(2) ⊗ (cij )(2) β pi αβ p i i p = σ0 (cpα ⊗ ciβ )σ0 (cα q ⊗ cj ) = xαβ yqj = δq δj = ε(cj )ε(cq )
and
σ0 (cpq )(1) ⊗ (cij )(1) σ0 (cpq )(2) ⊗ (cij )(2) β pi αβ p i i p = σ0 (cpα ⊗ ciβ )σ0 (cα q ⊗ cj ) = yαβ xqj = δq δj = ε(cj )ε(cq )
We next show that σ0 factorizes through a map σ : Mn (k)⊗Mn (k)/I(R) → k. We have
314
7 Long dimodules and the Long equation ij pv αj pi σ0 (cpq ⊗ oij kl ) = xvk yql − xlk yqα
S = R−1 and R is a solution of the Long equation, so R23 S 12 = S 12 R23 . Using (7.2) we obtain that σ0 (cpq ⊗ oij kl ) = 0, hence σ0 factorizes through a map σ . σ is a convolution inverse of σ. 2. Suppose first that Rτ = τ R. Then ij xji uv = xvu
(7.20)
Let L(R) = Mn (k)/I(R) The rest of the proof is similar to the proof of part 1), we only have to prove that the map σ : Mn (k) ⊗ Mn (k)/I(R) → k factorizes through a map σ ˜ : L(R) ⊗ L(R) → k. This can be seen as follows p ij p αj p v i σ(oij kl ⊗ cq ) = xvk σ(cl ⊗ cq ) − xlk σ(cα ⊗ cq ) vp αj pi = xij vk xlq − xlk xαq
(7.20)
pv αj pi = xij vk xql − xlk xqα = 0
If R is commutative, then we can prove that R12 R13 = R13 R12 if and only if vp αj ip xij vk xlq = xlk xαq
Thus,
σ(o^{ij}_{kl} ⊗ c^p_q) = x^{ij}_{vk} x^{vp}_{lq} − x^{αj}_{lk} x^{ip}_{αq} = 0
and again σ factorizes through a map σ̃ : L(R) ⊗ L(R) → k.
As an immediate consequence, we obtain
Corollary 39. Let M be a finite dimensional vector space and R ∈ End(M ⊗ M). The following statements are equivalent:
1. R is a solution of the system
R12 R13 = R13 R12,  R12 R23 = R23 R12   (7.21)
in End(M ⊗ M ⊗ M);
2. there exists a Long coalgebra (L(R), σ) and a structure of right L(R)-comodule (M, ρ) such that R = R(σ,M,ρ).
Remark 24. If R is a commutative solution of the Long equation, then R satisfies the integrability condition [R12, R13 + R23] = 0 which appears in the study of the Knizhnik-Zamolodchikov equation (see [108], [166]).
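As a sanity check, the system (7.21) and the integrability condition of Remark 24 can be tested numerically. The numpy sketch below is an illustration added here (not part of the original text); it encodes the idempotent-map solution of Examples 17.4 below and uses the standard leg-numbering conventions R12 = R ⊗ I, R23 = I ⊗ R.

```python
import numpy as np

n = 4
phi = {1: 1, 2: 2, 3: 2, 4: 2}          # an idempotent map: phi(phi(i)) = phi(i)

# x^{ij}_{uv} = delta_{uv} delta_{phi(i),v} delta_{phi(j),v}  (cf. Examples 17.4)
x = np.zeros((n, n, n, n))
for i in range(1, n + 1):
    for j in range(1, n + 1):
        for v in range(1, n + 1):
            if phi[i] == v and phi[j] == v:
                x[i - 1, j - 1, v - 1, v - 1] = 1.0

R = x.reshape(n * n, n * n)             # R(m_u ⊗ m_v) = x^{ij}_{uv} m_i ⊗ m_j
I = np.eye(n)
R12 = np.kron(R, I)                     # R acting on legs 1 and 2
R23 = np.kron(I, R)                     # R acting on legs 2 and 3
R13 = np.einsum('iluw,ab->ialubw', x, I).reshape(n**3, n**3)  # legs 1 and 3

assert np.allclose(R12 @ R13, R13 @ R12)                    # first equation of (7.21)
assert np.allclose(R12 @ R23, R23 @ R12)                    # second equation of (7.21)
assert np.allclose(R12 @ (R13 + R23), (R13 + R23) @ R12)    # integrability condition
```

The three assertions pass for any idempotent phi; other choices of phi can be substituted in the dictionary above.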
Examples 17. 1. Let C = M4(k) and I the two-dimensional k-subspace of C with k-basis {c^2_1, c^2_2 − c^1_1}. Then I is a coideal of C. Take scalars a, b, c ∈ k and let (x^{ij}_{uv}) be given by the formulas (7.16). Then
σ : M4(k) ⊗ M4(k)/I → k,  σ(c^i_u ⊗ c^j_v) = x^{ij}_{uv}
is a Long map.
2. Let C = M4(k), q ∈ k and I the two-dimensional coideal of C with k-basis {c^2_1, c^1_2 + qc^2_2 − qc^1_1}. Let
x^{11}_{12} = −q,  x^{12}_{12} = 1,  x^{11}_{22} = −q²,  x^{12}_{22} = q
and x^{ij}_{uv} = 0 in all other situations. Then
σ : M4(k) ⊗ M4(k)/I → k,  σ(c^i_u ⊗ c^j_v) = x^{ij}_{uv}
is a Long map.
3. Let C = M4(k) and I the two-dimensional coideal of C with basis {c^1_2, c^2_1}. Let a, b ∈ k and σ : M4(k) ⊗ M4(k)/I → k such that
σ(c^1_1 ⊗ c^1_1) = a,  σ(c^1_1 ⊗ c^2_2) = b
and all others are zero. Then σ is a Long map.
4. Let n be a positive integer and φ : {1, · · · , n} → {1, · · · , n} a function with φ² = φ. It is easy to see that R = (x^{kl}_{ij}) given by
x^{ji}_{uv} = δ_{uv} δ_{φ(i)v} δ_{φ(j)v}
for all i, j, u, v = 1, · · · , n, is a commutative solution of the Long equation. Hence we can construct the Long coalgebra L(R). By a long but trivial computation we can show that the coalgebra L(R) is a quotient of the comatrix coalgebra Mn(k) through the coideal I generated by the elements
c_{φ(j)l},  for all l ≠ φ(j),
Σ_{α∈φ^{−1}(φ(j))} c_{iα},  for all φ(i) ≠ φ(j),   (7.22)
c_{φ(i)φ(i)} − Σ_{α∈φ^{−1}(φ(i))} c_{iα},  for all i = 1, · · · , n.
We will now construct a specific example. Let n = 4 and φ given by
φ(1) = 1,  φ(2) = φ(3) = φ(4) = 2.
Then I is the coideal generated by (cf. (7.22)): c_{1l}, c_{2j}, c_{i1}, c_{32} + c_{33} + c_{34} − c_{22}, c_{42} + c_{43} + c_{44} − c_{22}, for all l ≠ 1, j ≠ 2, i ≠ 1. Now, if we denote c_{11} = x_1, c_{22} = x_2, c_{32} = x_3, c_{33} = x_4, c_{42} = x_5, c_{44} = x_6, we obtain the description of the corresponding Long coalgebra L(R): L(R) is the six-dimensional vector
space with {x_1, · · · , x_6} as a basis. The comultiplication ∆ and the counit ε are given by
∆(x_1) = x_1 ⊗ x_1,  ∆(x_2) = x_2 ⊗ x_2,
∆(x_3) = x_3 ⊗ x_2 + x_4 ⊗ x_3 + (x_2 − x_3 − x_4) ⊗ x_5,
∆(x_4) = x_4 ⊗ x_4 + (x_2 − x_3 − x_4) ⊗ (x_2 − x_5 − x_6),
∆(x_5) = x_5 ⊗ x_2 + (x_2 − x_5 − x_6) ⊗ x_3 + x_6 ⊗ x_5,
∆(x_6) = (x_2 − x_5 − x_6) ⊗ (x_2 − x_3 − x_4) + x_6 ⊗ x_6,
ε(x_1) = ε(x_2) = ε(x_4) = ε(x_6) = 1,
ε(x3 ) = ε(x5 ) = 0.
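The coalgebra axioms for this six-dimensional L(R) can be verified mechanically. The following numpy sketch is an illustrative check added here (not part of the original computation; the variable names are ours): it encodes ∆ by structure constants and confirms coassociativity and the counit property.

```python
import numpy as np

e = np.eye(6)                      # e[i] represents the basis element x_{i+1}
A = e[1] - e[2] - e[3]             # x2 - x3 - x4
B = e[1] - e[4] - e[5]             # x2 - x5 - x6

def Delta_of(pairs):
    # sum of elementary tensors u ⊗ v, stored as a 6x6 coefficient matrix
    return sum(np.outer(u, v) for u, v in pairs)

Delta = np.array([
    Delta_of([(e[0], e[0])]),                               # Δ(x1) = x1 ⊗ x1
    Delta_of([(e[1], e[1])]),                               # Δ(x2) = x2 ⊗ x2
    Delta_of([(e[2], e[1]), (e[3], e[2]), (A, e[4])]),      # Δ(x3)
    Delta_of([(e[3], e[3]), (A, B)]),                       # Δ(x4)
    Delta_of([(e[4], e[1]), (B, e[2]), (e[5], e[4])]),      # Δ(x5)
    Delta_of([(B, A), (e[5], e[5])]),                       # Δ(x6)
])
eps = np.array([1, 1, 0, 1, 0, 1])  # ε(x1)=ε(x2)=ε(x4)=ε(x6)=1, ε(x3)=ε(x5)=0

# coassociativity: (Δ ⊗ I)Δ = (I ⊗ Δ)Δ on every basis element
lhs = np.einsum('iab,acd->icdb', Delta, Delta)
rhs = np.einsum('iab,bcd->iacd', Delta, Delta)
assert np.allclose(lhs, rhs)

# counit: (ε ⊗ I)Δ = (I ⊗ ε)Δ = id
assert np.allclose(np.einsum('iab,a->ib', Delta, eps), np.eye(6))
assert np.allclose(np.einsum('iab,b->ia', Delta, eps), np.eye(6))
```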
8 The Frobenius-Separability equation
We introduce and study the Frobenius-separability equation (or FS-equation) R12 R23 = R23 R13 = R13 R12 ; we will see that it implies the braid equation, in the sense that all solutions of the FS-equation are solutions of the braid equation. The FS-equation can be used to determine the structure of separable algebras and Frobenius algebras. Given a solution R of the FS-equation satisfying a certain normalizing condition, we construct a Frobenius or a separable algebra A(R) that can be described using generators and relations. Furthermore, any finite dimensional Frobenius or separable algebra is isomorphic to such an A(R). It is remarkable that the same equation can be used to describe two different kinds of algebras, namely separable algebras and Frobenius algebras. We had a similar phenomenon in Chapter 3, where we gave a categorical explanation for the relation between separability properties and Frobenius type properties. Here also we will see that the difference lies in the normalizing properties.
8.1 Frobenius algebras and separable algebras

Let A be a k-algebra. An element e = e^1 ⊗ e^2 ∈ A ⊗ A (summation implicitly understood) will be called A-central if for any a ∈ A we have
a · e = e · a
(8.1)
where A ⊗ A is viewed as an A-bimodule in the usual way a1 · (b ⊗ c) · a2 = a1 b ⊗ ca2 for all a1 , a2 , b, c ∈ A. Of course, there exists a bijection between the set of all A-central elements and the set of all A-bimodule maps ∆ : A → A ⊗ A. Recall that A is called a separable algebra if there exists a separability idempotent, that is an A-central element e = e1 ⊗ e2 ∈ A ⊗ A satisfying the normalizing separability condition e1 e2 = 1.
(8.2)
A is separable if and only if the multiplication map m : A ⊗ A → A splits in the category A MA of A-bimodules.
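A standard example: for A = Mn(k), the element e = Σ_i e_{i1} ⊗ e_{1i} is a separability idempotent. The numpy sketch below (illustrative, with our own helper names) verifies A-centrality and the normalizing condition (8.2) for n = 3, representing elements of A ⊗ A faithfully via Kronecker products.

```python
import numpy as np

n = 3
def E(i, j):
    m = np.zeros((n, n)); m[i, j] = 1.0; return m   # matrix unit e_{ij}

# candidate separability idempotent e = sum_i e_{i0} ⊗ e_{0i}  (0-based indices)
pairs = [(E(i, 0), E(0, i)) for i in range(n)]

def as_tensor(ps):
    # faithful representation of an element of A ⊗ A as an n^2 x n^2 array
    return sum(np.kron(p, q) for p, q in ps)

for p in range(n):
    for q in range(n):
        a = E(p, q)
        left  = as_tensor([(a @ u, v) for u, v in pairs])   # a · e
        right = as_tensor([(u, v @ a) for u, v in pairs])   # e · a
        assert np.allclose(left, right)                     # e is A-central

assert np.allclose(sum(u @ v for u, v in pairs), np.eye(n))  # e^1 e^2 = 1
```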
S. Caenepeel, G. Militaru, and S. Zhu: LNM 1787, pp. 317–343, 2002. c Springer-Verlag Berlin Heidelberg 2002
Remark 25. We have proved in Proposition 12 that any separable k-algebra is finite dimensional over k, and in Proposition 13 we have shown that any separable algebra is semisimple.
Also recall that a k-coalgebra C is called coseparable if there exists a coseparability idempotent, that is, a k-linear map σ : C ⊗ C → k such that
σ(c ⊗ d_{(1)}) d_{(2)} = σ(c_{(2)} ⊗ d) c_{(1)} and σ(c_{(1)} ⊗ c_{(2)}) = ε(c)
for all c, d ∈ C.
For a k-algebra A, the dual A∗ = Hom(A, k) is an (A, A)-bimodule with A-actions given by the formulas (a · f · b)(x) = f(bxa), for all a, b, x ∈ A and f ∈ A∗. We recall that a k-algebra A is called a Frobenius algebra if A is finite dimensional over k and there exists an isomorphism of left A-modules A ≅ A∗. Furthermore, A is called a symmetric algebra if A ≅ A∗ as A-bimodules. Of course, a symmetric algebra is always a Frobenius algebra.
Remark 26. Using Wedderburn's Theorem, Eilenberg and Nakayama proved that any semisimple k-algebra is symmetric (cf. e.g. [113, p. 443]). In particular, any separable k-algebra is Frobenius. There exist necessary and sufficient conditions for a Frobenius algebra to be separable; for a detailed discussion, we refer to [100]. We will see below that a Frobenius algebra is separable if its characteristic element ωA is invertible.
Proposition 144. Let A be a k-algebra, ∆ : A → A ⊗ A an A-bimodule map and e = ∆(1A).
1. In A ⊗ A ⊗ A, we have the equality
e12 e23 = e23 e13 = e13 e12   (8.3)
2. ∆ is coassociative;
3. If (A, ∆, ε) is a coalgebra structure on A, then A is finite dimensional over k.
Proof. 1. From the fact that ∆ is an A-bimodule map, it follows immediately that e is A-central. Write E = E^1 ⊗ E^2 = e. Then
e12 e23 = (e^1 ⊗ e^2 ⊗ 1)(1 ⊗ E^1 ⊗ E^2) = e^1 ⊗ e^2E^1 ⊗ E^2 = E^1e^1 ⊗ e^2 ⊗ E^2 = e13 e12
and
e12 e23 = e^1 ⊗ e^2E^1 ⊗ E^2 = e^1 ⊗ E^1 ⊗ E^2e^2 = e23 e13
2. follows from 1. and the formulas
(∆ ⊗ I)∆(a) = e12 e23 · (1A ⊗ 1A ⊗ a)
(I ⊗ ∆)∆(a) = e23 e13 · (1A ⊗ 1A ⊗ a)
3. Let a ∈ A. Applying ε ⊗ IA and IA ⊗ ε to (8.1), we obtain, using the fact that ε is a counit map,
a = ε(ae^1)e^2 = e^1ε(e^2a)
and it follows that {e^1, ε(e^2•)} (or {e^2, ε(•e^1)}) are dual bases of A as a vector space over k.
The next Corollary is a special case of item 3) of Theorem 27; the equivalence 1) ⇔ 3) has been proved by Abrams (see [4, Theorem 2.1]).
Corollary 40. For a k-algebra A, the following statements are equivalent:
1. A is a Frobenius algebra;
2. there exist e = e^1 ⊗ e^2 ∈ A ⊗ A and ε ∈ A∗ such that e is A-central and the normalizing Frobenius condition
ε(e^1)e^2 = e^1ε(e^2) = 1A   (8.4)
is satisfied ((ε, e) is called a Frobenius pair);
3. there exists a coalgebra structure (A, ∆A, εA) on A such that the comultiplication ∆A : A → A ⊗ A is an A-bimodule map.
Proof. 1. ⇔ 2. is a special case of Theorem 27.
2. ⇔ 3.: observe that an A-bimodule map ∆ : A → A ⊗ A is completely determined by e = ∆(1A), and that there is a bijective correspondence between the set of all A-central elements and the set of all A-bimodule maps ∆ : A → A ⊗ A, and use the second statement in Proposition 144. It is easy to see that the counit property is satisfied if and only if (8.4) holds.
Let A be a Frobenius algebra over a field k. The element ωA = (mA ◦ ∆)(1A) ∈ A is called the characteristic element of A; it generalizes the classical Euler class e(X) of a connected oriented compact manifold X ([5]). If the characteristic element ωA is invertible in A, then A is a separable k-algebra. Indeed, let ∆(1A) = e^1 ⊗ e^2. Then ae^1 ⊗ e^2 = e^1 ⊗ e^2a for all a ∈ A. Hence ωA ∈ Z(A), the center of A. It follows that its inverse ωA^{−1} is also an element of Z(A). Now,
R = ωA^{−1}(e^1 ⊗ e^2)
is a separability idempotent, so A is a separable k-algebra.
Our next result is the coalgebra version of Corollary 40.
Theorem 76. For a coalgebra C over a field k, the following statements are equivalent:
1. C is a co-Frobenius coalgebra, i.e. the forgetful functor F : ^C M → _k M is Frobenius;
2. C is finite dimensional and there exists an associative and unitary algebra structure (C, mC, 1C) on C such that the multiplication mC : C ⊗ C → C is left and right C-colinear.
8.2 The Frobenius-separability equation

Proposition 144 leads us to the following definition.
Definition 18. Let A be a k-algebra and R = R^1 ⊗ R^2 ∈ A ⊗ A.
1. R is called a solution of the FS-equation (or Frobenius-separability equation) if
R12 R23 = R23 R13 = R13 R12   (8.5)
in A ⊗ A ⊗ A.
2. R is called a solution of the S-equation (or separability equation) if R is a solution of the FS-equation and the normalizing separability condition holds:
R^1R^2 = 1A   (8.6)
3. (R, ε) is called a solution of the F-equation (or Frobenius equation) if R is a solution of the FS-equation, and ε ∈ A∗ is such that the normalizing Frobenius condition holds:
ε(R^1)R^2 = R^1ε(R^2) = 1A.   (8.7)
4. Two solutions R and S of the FS-equation are called equivalent if there exists an invertible element u ∈ A such that S = (u ⊗ u)R(u^{−1} ⊗ u^{−1}).
Remarks 23. 1. The FS-equation appeared first in [15, Lemma 3.6]. If R is a solution of the FS-equation, then R is also a solution of the braid equation
R12 R23 R12 = R23 R12 R23
(8.8)
2. In many applications, the algebra A is of the form A = End(M ), where M is a finite dimensional vector space. Then we can view R as an element of End(M ⊗ M ), using the canonical isomorphism End(M ) ⊗ End(M ) ∼ = End(M ⊗ M ). (8.5) can then be viewed as an equation in End(M ⊗ M ⊗ M ). This is what we will do in Proposition 145 and in most of the examples in this Section. 3. We will prove now that the FS-equation is at the same time an associativity and coassociativity constraint. First, let H be a Hopf algebra and let
8.2 The Frobenius-separability equation
R = β : H ⊗ H → H ⊗ H,
321
R(g ⊗ h) = gh(1) ⊗ h(2)
be the canonical map that shows that H is a Hopf-Galois extension of k. We have proved in Examples 10 that R is a solution of the Hopf equation. Moreover, the comultiplication ∆H and the multiplication mH of H can be recovered from R, namely ∆H (h) = R(1H ⊗ h) and mH = (Id ⊗ ε)R We generalize this construction as follows. Let M be a vector space, 1M ∈ M , ε ∈ M ∗ , and R ∈ End(M ⊗ M ). We define ∆ = ∆R : M → M ⊗ M,
∆(x) = R(1M ⊗ x)
for all x ∈ M and m = mR : M ⊗ M → M,
m = (Id ⊗ ε)R
Then we have (∆ ⊗ Id)∆(x) = (R12 R23 )(1M ⊗ 1M ⊗ x) (Id ⊗ ∆)∆(x) = (R23 R13 )(1M ⊗ 1M ⊗ x) for all x ∈ M and m(Id ⊗ m) = (Id ⊗ ε ⊗ ε)R12 R23 ,
m(m ⊗ Id) = (Id ⊗ ε ⊗ ε)R13 R12 .
We conclude that, if R is a solution of the FS-equation, then ∆R is coassociative and mR is an associative. The FS-equation can be rewritten in matrix form. We follow the notation introduced in Section 5.1. Fix a basis {m1 , · · · , mn } of M . A linear map R : M ⊗ M → M ⊗ M can be described by its matrix X = (xij uv ). We have R(mu ⊗ mv ) = xij uv mi ⊗ mj
(8.9)
and u v R = xij uv ei ⊗ ej
(8.10)
It is then straightforward to compute kl R12 R23 (mu ⊗ mv ⊗ mw ) = xij uk xvw mi ⊗ mj ⊗ ml
R R (mu ⊗ mv ⊗ mw ) = 23
13
R R (mu ⊗ mv ⊗ mw ) = 13
12
ik xjl vk xuw mi kj xil kw xuv mi
(8.11)
⊗ mj ⊗ ml
(8.12)
⊗ mj ⊗ ml
(8.13)
Proposition 145. For R ∈ End(M ⊗ M ), we have, with notation as above.
322
8 The Frobenius-Separability equation
1. R is a solution of the FS-equation if and only if jl ik kl il kj xij uk xvw = xvk xuw = xkw xuv
(8.14)
for all i, j, l, u, v, w ∈ {1, · · · , n}. 2. R satisfies (8.6) if and only if j xkj ik = δi
(8.15)
for all i, j ∈ {1, · · · , n}. 3. Let ε be the trace map. Then (R, ε) satisfies (8.7) if and only if jk j xkj ki = xik = δi
(8.16)
for all i, j ∈ {1, · · · , n}. Proof. 1. follows immediately from (8.11-8.13). 2. and 3. follow from (8.10), using the multiplication rule eij ekl = δli ekj and the formula for the trace ε(eij ) = δji As a consequence of Proposition 145, we can show that R ∈ End(M ⊗ M ) is a solution of the equation R12 R23 = R13 R12 if and only if a certain multiplication on M ⊗ M is associative. Corollary 41. Let {m1 , · · · , mn } be a basis of M and R ∈ End(M ⊗ M ), given by (8.9). Then R is a solution of the equation R12 R23 = R13 R12 if and only if the multiplication on M ⊗ M given by (mk ⊗ ml ) · (mr ⊗ mj ) = xak jl mr ⊗ ma (k, l, r, j = 1, · · · , n) is associative. In this case M is a left M ⊗ M -module with structure (mk ⊗ ml ) • mj = xak jl ma a
for all k, l, j = 1, · · · , n. Proof. Write mkl = mk ⊗ ml . Then ak ip mpq · (mkl · mrj ) = xak jl mpq mra = xjl xaq mri
and ik ap (mpq · mkl ) · mrj = xap lq mka mrj = xja xlq mri
8.2 The Frobenius-separability equation
323
such that
ap ak ip x − x x (mpq · mkl ) · mrj − mpq · (mkl · mrj ) = xik ja lq jl aq mri
and the right hand side is zero for all indices p, q, k, l, r and j if and only if (8.14) holds, and in Proposition 145, we have seen that (8.14) is equivalent to R12 R23 = R13 R12 . The last statement follows from ap ak ip x − x x (mpq · mkl ) • mj − mpq • (mkl • mj ) = xik ja lq jl aq mi = 0 where we used (8.14) at the last step. Examples 18. 1. The identity IM ⊗M and the switch map τM are trivial solutions of the FS-equation in End(M ⊗ M ⊗ M ). 2. Let A = Mn (k). Then n Rj = eji ⊗ eij i=1
n is a solution of the S-equation and (R = j=1 Rj , trace) is a solution of the F-equation. 3. Let R ∈ A ⊗ A be a solution of FS-equation and u ∈ A invertible. Then u
R = (u ⊗ u)R(u−1 ⊗ u−1 )
(8.17)
is also a solution of the FS-equation. Let FS(A) be the set of all solutions of the FS-equation, and U (A) the multiplicative group of invertible elements in A. Then (8.17) defines an action of U (A) on FS(A). 4. If a ∈ A is an idempotent, then a ⊗ a is a solution of the FS-equation. 5. Let A be a k-algebra, and e ∈ A ⊗ A an A-central element. Then for any left A-module M , the homotety R = Re : M ⊗ M → M ⊗ M given by R(m ⊗ n) = e1 · m ⊗ e2 · n
(8.18)
(m, n ∈ M ) is a solution of the FS-equation in End(M ⊗ M ⊗ M ). This is an easy consequence of (8.3). Moreover, if e is a separability idempotent (respectively (e, ε) is a Frobenius pair), then R is a solution of the S-equation (respectively a solution of the F-equation). g ⊗ g −1 is an A-central 6. Let G be a finite group, and A = kG. Then e = g∈G
element and (e, p1 ) is a Frobenius pair (p1 : kG → k is the map defined by p1 (g) = δ1,g , for all g ∈ G). Hence, kG is a Frobenius algebra. Furthermore, if (char(k), |G|) = 1, then e = |G|−1 e is a separability idempotent. 7. Using a computer, Bogdan Ichim computed for us that FS(M2 (Z2 )) (resp. FS(M2 (Z3 ))) consists of exactly 38 (resp. 187) solutions of the FS-equation. We will present only two of them. Let k be a field of characteristic 2 (resp. 3). Then
324
8 The Frobenius-Separability equation
1
1 R= 1 0
0
1
1
0
0
0
0
1 , 1
1
1
1
1
0 (resp. R = 0 1
1
0
0
1
1
1
1
2 ) 2
2
2
1
are solutions of F S-equation. 8. Let {m1 , m2 , · · · , mn } be a basis of M , aij scalars of k and let R be given by R(mu ⊗ mv ) = aju mv ⊗ mj j i j Thus xij uv = δv au , with notation as in (8.9). An immediate verification shows that R is a solution of the FS-equation. If ε is the trace map, then (R, ε) is a solution of the F-equation if and only if aji = δij . R is a solution of the S-equation if and only if n is invertible in k, and naji = δij .
If H is a finite dimensional unimodular involutory Hopf algebra, and t is a two-sided integral in H, then R = t(1) ⊗ S(t(2) ) is a solution of the quantum Yang-Baxter equation (cf. [114, Theorem 8.3.3]). In our next Proposition, we will show that, for an arbitrary Hopf algebra H, R is a solution of the FS-equation and the braid equation. Proposition 146. Let H be a Hopf algebra and t ∈ H a left integral of H. Then R = t(1) ⊗ S(t(2) ) ∈ H ⊗ H is H-central, and therefore a solution of the FS-equation and the braid equation. Proof. For all h ∈ H, we have that ht = ε(h)t and, subsequently, h(1) t ⊗ h(2) = t ⊗ h h(1) t(1) ⊗ S(h(2) t(2) ) ⊗ h(3) = t(1) ⊗ S(t(2) ) ⊗ h Multiplying the second and the third factor, we obtain ht(1) ⊗ S(t(2) ) = t(1) ⊗ S(t(2) )h proving that R is H-central. Remarks 24. 1. If ε(t) = 1, then R is a separability idempotent and H is separable over k, and we recover Maschke’s Theorem for Hopf algebras (see [1]). 2. If t is a right integral, then a similar argument shows that S(t(1) ) ⊗ t(2) is an H-central element. Let H be a Hopf algebra and σ : H ⊗ H → k be a normalized Sweedler 2-cocycle, i.e. σ(h ⊗ 1H ) = σ(1H ⊗ h) = ε(h)
8.2 The Frobenius-separability equation
σ(k(1) ⊗ l(1) )σ(h ⊗ k(2) l(2) ) = σ(h(1) ⊗ k(1) )σ(h(2) k(2) ⊗ l)
325
(8.19)
for all h, k, l ∈ H. The crossed product algebra Hσ is equal to H as k-module and the (associative) multiplication is given by g · h = σ(g(1) ⊗ h(1) )g(2) h(2) for all g, h ∈ Hσ = H. Proposition 147. Let H be a cocommutative Hopf algebra over a commutative ring k, t ∈ H a right integral, and σ : H ⊗ H → k a normalized convolution invertible Sweedler 2-cocycle. Then Rσ = σ −1 S(t(2) ) ⊗ t(3) S(t(1) ) ⊗ t(4) ∈ Hσ ⊗ Hσ (8.20) is Hσ -central, and a solution of the FS-equation. Consequently, if H is a separable algebra, then Hσ is also a separable algebra. Proof. The method of proof is the same as in Proposition 146, but the situation is more complicated. For all h ∈ H, we have th = ε(h)t, and h ⊗ t = h(1) ⊗ th(2) and h ⊗ S(t(1) ) ⊗ S(t(2) ) ⊗ t(3) ⊗ t(4) = h(1) ⊗ S(h(2) )S(t(1) ) ⊗ S(h(3) )S(t(2) ) ⊗ t(3) h(4) ⊗ t(4) h(5) We now compute easily that h · Rσ = σ −1 (S(t(2) ) ⊗ t(3) )h · S(t(1) ) ⊗ t(4) = σ −1 (S(h(3) )S(t(2) ⊗ t(3) h(4) )h(1) · (S(h(2) )S(t(1) )) ⊗ t(4) h(5) = σ −1 (S(h(3) )S(t(2) ⊗ t(3) h(4) )σ((h(1) )(1) ⊗ S(h(2) )(1) S(t(1) )(1) )(h(1) )(2) S(h(2) )(2) S(t(1) )(2) ⊗ t(4) h(5) =σ
−1
(S(h(3) )S(t(3) ⊗ t(4) h(4) )σ(h(1) ⊗ S(h(2) S(t(2) ))S(t(1) ) ⊗ t(5) h(5)
On the other hand Rσ · h = σ −1 (S(t(2) ) ⊗ t(3) )S(t(1) ) ⊗ t(4) · h = σ −1 (S(t(2) ) ⊗ t(3) )σ(t(4) ⊗ h(1) )S(t(1) ) ⊗ t(5) h(2) In order to prove that Rσ is Hσ -central, it suffices to show that, for all f, g ∈ H: σ −1 (S(h(3) )S(g(2) ) ⊗ g(3) h(4) )σ(h(1) ⊗ S(h(2) )S(g(1) )) = σ −1 (S(g(1) ) ⊗ g(2) )σ(g(3) ⊗ h)
(8.21)
326
8 The Frobenius-Separability equation
So far, we have not used the cocommutativity of H. If H is cocommutative, then we can omit the Sweedler indices, since they contain no information. Hence we can write ∆(h) = h⊗h=h⊗h The cocycle relation (8.19) can then be rewritten as σ(h ⊗ kl) = σ −1 (k ⊗ l)σ(h ⊗ k)σ(kh ⊗ l) σ(hk ⊗ l) = σ
−1
(h ⊗ k)σ(k ⊗ l)σ(h ⊗ kl)
(8.22) (8.23)
Using (8.22), (8.23) and the fact that σ is normalized, we compute σ −1 (S(h)S(g) ⊗ gh)σ(h ⊗ S(h)S(g)) = σ(S(h) ⊗ S(g))σ −1 (S(g) ⊗ gh)σ −1 (S(h) ⊗ S(g)gh) σ −1 (S(h) ⊗ S(g))σ(h ⊗ S(h))σ(hS(h) ⊗ g) = σ(g ⊗ h)σ −1 (S(g) ⊗ g)σ −1 (S(g)g ⊗ h) = σ −1 (S(g) ⊗ g)σ(g ⊗ h) proving (8.21). We also used that σ(h ⊗ S(h)) = σ(S(h) ⊗ h) which follows from the cocycle condition and the fact that σ is normalized. Finally, if H is separable, then we can find a right integral t such that ε(t) = 1, and we easily see that mHσ (Rσ ) = 1, proving that Rσ is a solution of the S-equation. In [84] solutions of the braid equation are constructed starting from 1-cocycles on a group G. The interesting point in this construction is that, at set theory level, any “nondegenerate symmetric” solution of the braid equation arises in this way (see [84, Theorem 2.9]). Now, taking G a finite group and H = kG in Proposition 147, we find a large class of solutions to the braid equation, arising from 2-cocycles σ : G × G → k ∗ . These solutions R can be described using a family of scalars (xij uv ), as in Proposition 145, where the indices now run through G. Let n = |G|, and write Mn (k) ∼ = MG (k), with entries indexed by G × G. Corollary 42. Let G be a finite group of order n, and σ : G × G → k ∗ a normalized 2-cocycle. Then Rσ = (xij uv )i,j,u,v∈G given by −1 (iu−1 , ui−1 )σ(iu−1 , u)σ(ui−1 , v) xij uv = δj, ui−1 v σ
(8.24)
(i, j, u, v ∈ G) is a solution of the FS-equation. If n is invertible in k, then n−1 R is a solution of the S-equation.
8.2 The Frobenius-separability equation
327
Proof. The twisted group algebra kσ G is the k-module with basis G, and multiplication given by g · h = σ(g, h)gh, for any g, h ∈ G. t = g∈G g is a left integral in kG and the element Rσ defined in (8.20) takes the form Rσ = σ −1 (g −1 , g)g −1 ⊗ g g∈G
Using the multiplication rule on kσ G, we find that the map R˜σ : kσ G ⊗ kσ G → kσ G ⊗ kσ G,
R˜σ (u ⊗ v) = Rσ · (u ⊗ v)
is given by R˜σ (u ⊗ v) =
σ −1 (iu−1 , ui−1 )σ(iu−1 , u)σ(ui−1 , v) i ⊗ ui−1 v
i∈G
If we write R˜σ (u ⊗ v) =
xij uv i ⊗ j
i,j∈G
then
xij uv
is given by (8.24).
We will now present a coalgebra version of Example 18.3. First we adapt an old definition of Larson ([116]). Definition 19. Let C be a k-coalgebra. A map σ : C ⊗ C → k is called an FS-map if σ(c ⊗ d(1) )d(2) = σ(c(2) ⊗ d)c(1) . (8.25) If, in addition, σ satisfies the normalizing condition σ(c(1) ⊗ c(2) ) = ε(c)
(8.26)
then σ is called a coseparability idempotent. If there exists an f ∈ C such that the FS-map σ satisfies the normalizing condition σ(f ⊗ c) = σ(c ⊗ f ) = ε(c) (8.27) for all c ∈ C, then we call (σ, f ) an F-map. The following Corollary is a special case of Theorem 35. Corollary 43. Let C be a coalgebra. 1. The forgetful functor F : MC → Mk is separable (or equivalently C is a coseparable coalgebra) if and only if there exists a coseparability idempotent σ. 2. Let G = − ⊗ C : Mk → MC be the right adjoint of F . Then (F, G) is a Frobenius pair if and only if there exists an F -map (σ, f ).
328
8 The Frobenius-Separability equation
Using FS-maps, we can construct solutions of the FS-equation. Examples 19. 1. The comatrix coalgebra Mn (k) is coseparable and σ : Mn (k) ⊗ Mn (k) → k,
σ(cji ⊗ clk ) = δkj δll
is a coseparability idempotent. 2. Let k be a field of characteristic zero, and consider the Hopf algebra C = k[Y ], with ∆(Y ) = Y ⊗ 1 + 1 ⊗ Y and ε(Y ) = 0. Then there is only one FS-map σ : C ⊗ C → k, namely the zero map. Indeed, σ is completely described by σ(Y i ⊗ Y j ) = aij . We will show that all aij = 0. Using the fact that ∆(Y n ) = ∆(Y )n , we easily find that (8.25) is equivalent to m m j=0
j
j
an,m−j Y =
n n i=0
i
an−i,m Y i
(8.28)
for all positive integers n and m. Taking n > m, and identifying the coefficients in Y n , we find anm = 0. If m > n, we also find anm = 0, now identifying coefficients in Y m . We can now write anm = an δnm . Take m > n. The right-hand side of (8.28) amounts to zero, while the left-hand side is m an Y n−m m−n It follows that an = 0 for all n, and σ = 0. Proposition 148. Let C be a coalgebra, σ : C ⊗ C → k an F S-map and M a right C-comodule. Then the map Rσ : M ⊗ M → M ⊗ M,
Rσ (m ⊗ n) = σ(m[1] ⊗ n[1] )m[0] ⊗ n[0]
is a solution of the F S-equation in End(M ⊗ M ⊗ M ). Proof. Write R = Rσ and take l, m, n ∈ M . R12 R23 (l ⊗ m ⊗ n) = R12 σ(m[1] ⊗ n[1] )l ⊗ m[0] ⊗ n[0] = σ(m[2] ⊗ n[1] )σ(l[1] ⊗ m[1] )l[0] ⊗ m[0] ⊗ n[0] Applying (8.25), with m = c, n = d, we obtain R12 R23 (l ⊗ m ⊗ n) = σ(m[1] ⊗ n[1] )σ(l[1] ⊗ n[2] )l[0] ⊗ m[0] ⊗ n[0] = R23 σ(l[1] ⊗ n[1] )l[0] ⊗ m ⊗ n[0] = R23 R13 (l ⊗ m ⊗ n) Applying (8.25), with m = d, l = c, we obtain
(8.29)
8.2 The Frobenius-separability equation
329
R12 R23 (l ⊗ m ⊗ n) = σ(l[1] ⊗ n[1] )σ(l[2] ⊗ m[1] )l[0] ⊗ m[0] ⊗ n[0] = R13 σ(l[1] ⊗ n[1] )l[0] ⊗ m[0] ⊗ n = R13 R12 (l ⊗ m ⊗ n) proving that R is a solution of the F S-equation. Remark 27. If C is finite dimensional and A = C ∗ is its dual algebra, then there is a one-to-one correspondence between F S-maps σ : C ⊗ C → k and A-central elements e ∈ A ⊗ A. The correspondence is given by the formula σ(c ⊗ d) = c, e1 d, e2 . In this situation, the map Re from Example 18.5 is equal to Rσ . Indeed, Rσ (m ⊗ n) = σ(m[1] ⊗ n[1] )m[0] ⊗ n[0] = m(1) , e1 n(1) , e2 = e1 · m ⊗ e2 · n = Re (m ⊗ n) We will now present two more classes of solutions of the F S-equation. Proposition 149. Take a ∈ k, X = {1, . . . , n}, and θ : X 3 → X a map satisfying θ(u, i, j) = v ⇐⇒ θ(v, j, i) = u θ(i, u, j) = v ⇐⇒ θ(j, v, i) = u
(8.30) (8.31)
θ(i, j, u) = v ⇐⇒ θ(j, i, v) = u θ(i, j, k) = θ(u, v, w) ⇐⇒ θ(j, i, u) = θ(k, w, v)
(8.32) (8.33)
1. R = (xuv ij ) ∈ Mn2 (k) given by u xuv ij = aδθ(i,v,j)
(8.34)
is a solution of the FS-equation. 2. Assume that n is invertible in k, and take a = n−1 . 2a. R is a solution of the S-equation if and only if θ(k, k, i) = i for all i, k ∈ X. 2b. Let ε be the trace map. (R, ε) is a solution of the F-equation if and only if θ(i, k, k) = i for all i, k ∈ X.
330
8 The Frobenius-Separability equation
Proof. 1. We have to verify (8.14). Using (8.32), we compute kl 2 i k xij uk xvw = a δθ(u,j,k) δθ(v,l,w) θ(j,u,i)
k k = a2 δθ(j,u,i) δθ(v,l,w) = a2 δθ(v,l,w)
(8.35)
In a similar way, we find ik 2 xjl vk xuw = a δθ(l,v,j)
θ(w,i,u)
(8.36)
θ(i,w,l) a2 δθ(u,j,v)
(8.37)
kj xil kw xuv =
Using (8.33), we find that (8.35), (8.36), and (8.37) are equal, proving (8.14). 2a. We easily compute that j −1 k δθ(i,j,k) = n−1 δθ(k,k,i) xkj ik = n
and it follows that R is a solution of the S-equation if and only if θ(k, k, i) = i for all i and k. 2b. We compute j −1 k xkj δθ(k,j,i) = n−1 δθ(i,k,k) ki = n −1 j xjk δθ(i,k,k) ik = n
and it follows that (R, ε) is a solution of the F-equation if and only if θ(i, k, k) = i for all i, k. Examples 20. 1. Let G be a finite group. Then the map θ : G × G × G → G, θ(i, j, k) = ij −1 k satisfies conditions (8.30-8.33). 2. Let G be a group of order n acting on X = {1, 2, · · · , n}, and assume that the action of G is transitive and free, which means that for every i, j ∈ X, there exists a unique g ∈ G such that g(i) = j. Then the map θ : X×X×X → X defined by θ(i, j, k) = g −1 (k) where g ∈ G is such that g(i) = j, satisfies conditions (8.30-8.33). Proposition 150. Let n be a positive integer, φ : {1, · · · , n} → {1, · · · , n} a function with φ2 = φ and {m1 , · · · , mn } a basis of M . Then Rφ : M ⊗ M → M ⊗ M, Rφ (mi ⊗ mj ) = δij ma ⊗ mb (8.38) a,b∈φ−1 (i)
is a solution of the F S-equation.
8.2 The Frobenius-separability equation
331
Proof. Write R = Rφ , and take p, q, r ∈ {1, . . . , n}. Then R12 R23 (mp ⊗ mq ⊗ mr ) = R12 δqr mp ⊗ m a ⊗ m b = δqr
a,b∈φ−1 (q)
δap
a,b∈φ−1 (q)
= δqr δφ(p)q
mc ⊗ m d ⊗ m b
c,d∈φ−1 (p)
mc ⊗ m d ⊗ m b .
b∈φ−1 (q) c,d∈φ−1 (p)
If φ−1 (p) is nonempty (take x ∈ φ−1 (p)), and φ(p) = q, then q = φ(p) = φ2 (x) = φ(x) = p, so we can write R12 R23 (mp ⊗ mq ⊗ mr ) = δpqrφ(p) ma ⊗ m b ⊗ m c . a,b,c∈φ−1 (p)
In a similar way, we can compute that R23 R13 (mp ⊗ mq ⊗ mr ) = R13 R12 (mp ⊗ mq ⊗ mr ) = δpqrφ(p) ma ⊗ m b ⊗ m c . a,b,c∈φ−1 (p)
Now we will generalize Example 18.3 to algebras without unit. Recall that a left A-module M is called unital (or regular) if the natural map A⊗A M → M is an isomorphism. Proposition 151. Let M be a unital A-module, and f : A → A ⊗ A an A-bimodule map. Then the map R : M ⊗ M → M ⊗ M mapping m ⊗ a · n to f (a)(m ⊗ n) is a solution of the F S-equation. Proof. Observe first that it suffices to define R on elements of the form m ⊗ a · n, since the map A ⊗A M → M is surjective. R is well-defined since R(m ⊗ a · (b · n)) = f (a)(m ⊗ b · n) = f (a)(IM ⊗ b)(m ⊗ n) = f (ab)(m ⊗ n) Write f (a) = a1 ⊗ a2 , for all a ∈ A. Then f (ab) = a1 ⊗ a2 b = ab1 ⊗ b2 . Now R12 (R23 (m ⊗ bn ⊗ ap)) = R12 (m ⊗ a1 bn ⊗ a2 p) = a1 b1 m ⊗ b2 n ⊗ a2 p = R13 (b1 m ⊗ b2 n ⊗ p) = R13 (R12 (m ⊗ bn ⊗ ap)) In a similar way, we prove that R12 R23 = R23 R13 .
332
8 The Frobenius-Separability equation
8.3 The structure of Frobenius algebras and separable algebras The first statement of Proposition 144 can be restated as follows: for a kalgebra A, any A-central element R ∈ A ⊗ A is a solution of the FS-equation. Over a field k, we can prove the converse: any solution of the FS-equation arises in this way. Theorem 77. Let A be a k-algebra and R = R1 ⊗ R2 ∈ A ⊗ A a solution of the FS-equation. 1. Then there exists a k-subalgebra A(R) of A such that R ∈ A(R) ⊗ A(R) and R is A(R)-central. 2. If R ∼ S are equivalent solutions of the F S-equation, then A(R) ∼ = A(S). 3. (A(R), R) satisfies the following universal property: if (B, mB , 1B ) is an algebra, and e ∈ B ⊗ B is an B-central element, then any algebra map α : B → A with (α ⊗ α)(e) = R factors through an algebra map α ˜: B→ A(R). 4. If R ∈ A ⊗ A is a solution of the S-equation (resp. the F-equation), then A(R) is a separable (resp. Frobenius) algebra. Proof. 1. Let A(R) = {a ∈ A | a·R = R·a}. Obviously A(R) is a k-subalgebra of A and 1A ∈ A(R). We also claim that R ∈ A(R) ⊗ A(R). To this end, we observe first that A(R) = Ker(ϕ), with ϕ : A → A ⊗ Aop defined by the formula ϕ(a) = (a ⊗ 1A − 1A ⊗ a)R. A is flat as a k-algebra, so A(R) ⊗ A = Ker(ϕ ⊗ IdA ). Now, (ϕ ⊗ IA )(R) = r1 R1 ⊗ R2 ⊗ r2 − R1 ⊗ R2 r1 ⊗ r2 = R13 R12 − R12 R23 = 0 and it follows that R ∈ A(R) ⊗ A. In a similar way, using that R12 R23 = R23 R13 , we get that R ∈ A ⊗ A(R), and we find that R ∈ A(R) ⊗ A(R). Indeed, k is a field, so A(R) ⊗ A(R) = A ⊗ A(R) ∩ A(R) ⊗ A Hence, R is an A(R)-central element of A(R) ⊗ A(R). 2. Let u ∈ U (A) such that S = (u ⊗ u)R(u−1 ⊗ u−1 ). Then fu : A(R) → A(S),
fu (a) = uau−1
for all a ∈ A(R) is a well-defined isomorphism of k-algebras. 3. Let b ∈ B. If we apply α ⊗ α to the equality (b ⊗ 1B )e = e(1B ⊗ b) we find
8.3 The structure of Frobenius algebras and separable algebras
333
that the image of α is contained in A(R), and the universal property follows. 4. The first statement follows from the definition of separable algebras and the second one follows from 3) of Corollary 40. Remark 28. The converse of 2. does not hold: let A be a separable k-algebra and R ∈ A ⊗ A a separability idempotent. Then R and S = 0 ⊗ 0 are solutions of the F S-equation, A(R) = A(S) = A and, of course, R and S are not equivalent. If A is finite dimensional, then we can describe the algebra A(R) using generators and relations. Let {m1 , m2 , . . . , mn } be a basis of a finite dimensional vector space M . We have seen that an endomorphism R ∈ End(M ⊗ M ) can be written in matrix form (see (8.9) and (8.10)). Suppose that R is a solution of FS-equation. Identifying End(M ) and Mn (k), we will write A(n, R) for the subalgebra of Mn (k) corresponding to A(R). An easy computation shows that α iα j A(n, R) = { aij ∈ Mn (k) | xij αv au = xuv aα , for all i, j, u, v = 1, · · · , n} (8.39) where R is a matrix satisfying (8.14). We can now prove the main result of this Chapter. Theorem 78. For an n-dimensional k-algebra A, the following statements are equivalent: 1. A is a Frobenius (resp. separable) algebra. 2. There exists an algebra isomorphism A∼ = A(n, R), ∼ where R = (xij uv ) ∈ Mn (k) ⊗ Mn (k) = End(A) ⊗ End(A) is a solution of the Frobenius (resp. separability) equation. Proof. 1. ⇒ 2. Both Frobenius and separable algebras are characterized by the existence of an A-central element with some normalizing properties. Let e = e1 ⊗ e2 ∈ A ⊗ A be such an A-central element. Then the map R = Re : A ⊗ A → A ⊗ A,
R(a ⊗ b) = e1 a ⊗ e2 b
(a, b ∈ A), is a solution to the FS-equation. Here we view Re ∈ Endk (A⊗A) ∼ = Endk (A) ⊗ Endk (A) (A is finite dimensional over k). Consequently, we can construct the algebra A(R) ⊆ Endk (A). We will prove that A and A(R) are isomorphic when A is a Frobenius algebra, or a separable algebra. First we consider the injection i : A → Endk (A), with i(a)(b) = ab, for a, b ∈ A. Then image of i is included in A(R). Indeed, A(R) = {f ∈ Endk (A) | (f ⊗ IA ) ◦ R = R ◦ (IA ⊗ f )}
8 The Frobenius-Separability equation
Using the fact that e is an A-central element, it follows easily that (i(a) ⊗ I_A) ◦ R = R ◦ (I_A ⊗ i(a)), for all a ∈ A, proving that Im(i) ⊆ A(R). If f ∈ A(R), then (f ⊗ I_A) ◦ R = R ◦ (I_A ⊗ f), and, evaluating this equality at 1_A ⊗ a, we find

f(e^1) ⊗ e^2 a = e^1 ⊗ e^2 f(a)   (8.40)

Now assume that A is a Frobenius algebra. Then there exists ε : A → k such that ε(e^1)e^2 = e^1 ε(e^2) = 1_A. Applying ε ⊗ I_A to (8.40), we obtain f(a) = (ε(f(e^1))e^2)a for all a ∈ A. Thus f = i(ε(f(e^1))e^2). This proves that Im(i) = A(R), and the corestriction of i to A(R) is an isomorphism of algebras.
If A is separable, then e^1 e^2 = 1_A. Applying m_A to (8.40), we find f(a) = (f(e^1)e^2)a for all a ∈ A. Consequently f = i(f(e^1)e^2), proving again that A and A(R) are isomorphic.
2. ⇒ 1. This is the last statement of Theorem 77.

Remark 29. Over a field k that is algebraically closed or of characteristic zero, the structure of finite dimensional separable k-algebras is given by the classical Wedderburn-Artin Theorem: a finite dimensional algebra A is separable if and only if it is semisimple, if and only if it is a direct product of matrix algebras.
As we have seen in Lemma 27, the dual A* of a separable finite dimensional algebra is a coseparable finite dimensional coalgebra. Thus we obtain a structure theorem for coseparable coalgebras, using duality arguments. More precisely, let C be a coseparable k-coalgebra of dimension n over k. Then there exists an FS-map σ : C ⊗ C → k satisfying the normalizing condition (8.26). Let A = C* and identify A ⊗ A ≅ C* ⊗ C*. Then σ is an A-central element of A ⊗ A, satisfying the normalizing condition (8.6). It follows from Theorem 78 and (8.39) that A ≅ A(n, σ), and therefore C ≅ A(n, σ)*. Now A(n, σ)* can be described as a quotient of the comatrix coalgebra: A(n, σ)* = M^n(k)/I, where I is the coideal of M^n(k) that annihilates A(n, σ); I is generated by

{ o^{ij}_{uv} = Σ_{α=1}^n (c^u_α x^{ij}_{αv} − x^{iα}_{uv} c^α_j) | i, j, u, v = 1, ..., n }
This coalgebra will be denoted by C(n, R) = M^n(k)/I. We obtain the following

Theorem 79. For an n-dimensional coalgebra C, the following statements are equivalent.
8.3 The structure of Frobenius algebras and separable algebras
1. C is a co-Frobenius (resp. coseparable) coalgebra.
2. There exists a coalgebra isomorphism C ≅ C(n, R), where R = (x^{ij}_{uv}) ∈ M_n(k) ⊗ M_n(k) ≅ End_k(C) ⊗ End_k(C) is a solution of the Frobenius (resp. separability) equation.

Examples 21. 1. Let M be an n-dimensional vector space and R = I_{M⊗M}. Then

A(R) = {f ∈ End(M) | f ⊗ I_M = I_M ⊗ f} = k

2. Now let R = τ_M be the switch map. For all f ∈ End(M) and m, n ∈ M, we have

((f ⊗ I_M) ◦ τ)(m ⊗ n) = f(n) ⊗ m = (τ ◦ (I_M ⊗ f))(m ⊗ n)

and, consequently, A(τ) ≅ M_n(k).
3. Let M be a finite dimensional vector space over a field k, and f ∈ End(M) an idempotent. Then

A(f ⊗ f) = {g ∈ End(M) | f ◦ g = g ◦ f = αf for some α ∈ k}

Indeed, g ∈ A(f ⊗ f) if and only if (g ◦ f) ⊗ f = f ⊗ (f ◦ g). Multiplying the two factors, we find that g ◦ f = f ◦ g. Now (g ◦ f) ⊗ f = f ⊗ (g ◦ f) implies that g ◦ f = αf for some α ∈ k. The converse is obvious.
In particular, assume that M has dimension 2 and let {m_1, m_2} be a basis of M. Let f be the idempotent endomorphism with matrix

( 1 − rq       q  )
( r(1 − rq)   rq  )

Assume first that rq ≠ 1 and q ≠ 0, and take g ∈ End(M) with matrix

( a   b )
( c   d )

Then g ∈ A(f ⊗ f) if and only if

α = a + br,   c + dr = r(a + br),   br(1 − rq) = qc

The last two equations can easily be solved for b and c in terms of a and d, and we see that A(f ⊗ f) has dimension two. We know from the proof of Theorem 77 that f ∈ A(f ⊗ f). Another solution of the above system is I_M, and we find that A(f ⊗ f) is the two-dimensional subalgebra of End(M) with basis {f, I_M}. Put f̄ = I_M − f. Then {f, f̄} is also a basis for A(f ⊗ f), and A(f ⊗ f) ≅ k × k. C(f ⊗ f) is the grouplike coalgebra of dimension two. We find the same result if rq = 1 or q = 0.
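The two-dimensional description of A(f ⊗ f) can also be checked numerically. The sketch below is our own illustration, not part of the original text; it picks generic values for r and q, solves the linear membership condition (g ⊗ I)(f ⊗ f) = (f ⊗ f)(I ⊗ g) on M ⊗ M with NumPy, and confirms that the solution space has dimension two and contains both f and I_M.

```python
import numpy as np

# Idempotent f from the example, for generic parameters r, q (rq != 1, q != 0).
r, q = 2.0, 3.0
f = np.array([[1 - r*q, q],
              [r*(1 - r*q), r*q]])
assert np.allclose(f @ f, f)  # f is idempotent

I2 = np.eye(2)
R = np.kron(f, f)  # the solution f (x) f of the FS-equation, as an operator on M (x) M

# Membership condition for g in A(f (x) f): (g (x) I) R = R (I (x) g).
# Flatten it into a homogeneous linear system on the 4 entries of g.
rows = []
for k in range(4):
    E = np.zeros(4); E[k] = 1.0
    g = E.reshape(2, 2)
    rows.append((np.kron(g, I2) @ R - R @ np.kron(I2, g)).flatten())
T = np.array(rows).T  # 16 x 4 coefficient matrix

dim = 4 - np.linalg.matrix_rank(T)
print("dim A(f (x) f) =", dim)  # expected: 2

# f and the identity both satisfy the membership condition
for g in (f, I2):
    assert np.allclose(np.kron(g, I2) @ R, R @ np.kron(I2, g))
```

The same nullspace computation applies verbatim to any solution R of the FS-equation on a finite dimensional M, which is how A(n, R) in (8.39) can be computed in practice.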
4. Let R ∈ M_{n²}(k) be given by equation (8.34), as in Proposition 149. Then the algebra A(n, R) is given by

A(n, R) = { (a^i_j) ∈ M_n(k) | a^u_{θ(j,v,i)} = a^j_{θ(u,i,v)} for all i, j, u, v ∈ X }   (8.41)

Proposition 149 tells us when this algebra is separable or Frobenius over k. Assume now that G is a finite group with |G| = n invertible in k, and that θ is given as in Example 20.2. Then the above algebra A(n, R) equals

A = { (a^i_j) ∈ M_n(k) | a^i_{g(i)} = a^j_{g(j)} for all i, j ∈ X and g ∈ G }

Indeed, if (a^i_j) ∈ A, then (a^i_j) ∈ A(n, R), since g(θ(j, v, i)) = u implies that g(j) = θ(u, i, v). Conversely, take (a^i_j) ∈ A(n, R), and choose i, j ∈ X and g ∈ G. Let u = g(i) and v = g(j). Then θ(j, v, u) = i, hence

a^i_{g(i)} = a^u_{θ(j,v,u)} = a^j_{θ(u,u,v)} = a^j_v = a^j_{g(j)}

showing that A = A(n, R). From the fact that G acts transitively, it follows that a matrix in A(n, R) is completely determined by its top row. For every g ∈ G, we define A_g ∈ A(n, R) by (A_g)^1_i = δ_{g(1),i}. Then {A_g | g ∈ G} is a basis of A(n, R), and we have an algebra isomorphism

f : A(n, R) → kG,   f(A_g) = g
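As a quick sanity check (our own sketch, not part of the text), one can realize the matrices A_g for the cyclic group G = C_n acting on X = {1, ..., n} by shifts, and verify the multiplication rule A_g A_h = A_{gh} underlying this isomorphism:

```python
import numpy as np

n = 5  # order of the cyclic group C_n (illustrative choice)

def A(g):
    """Matrix A_g for the shift g: i -> i + g (mod n); entry (A_g)_{i, g(i)} = 1."""
    M = np.zeros((n, n))
    for i in range(n):
        M[i, (i + g) % n] = 1.0
    return M

# A_g A_h = A_{g+h}: the span of the A_g is an algebra isomorphic to kC_n
for g in range(n):
    for h in range(n):
        assert np.allclose(A(g) @ A(h), A((g + h) % n))
```

Each A_g is a 0-1 circulant whose entries are constant along the orbit pattern a^i_{g(i)}, exactly as in the description of A(n, R) above.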
For example, take the cyclic group of order n, G = C_n. Then

A(n, R) = { ( x_1      x_2      ...  x_n
              x_n      x_1      ...  x_{n−1}
              x_{n−1}  x_n      ...  x_{n−2}
              ...      ...      ...  ...
              x_2      x_3      ...  x_1 )  |  x_1, ..., x_n ∈ k } ≅ kC_n

5. Let G be a finite group of order n and σ : G × G → k* a 2-cocycle. Let R_σ be the solution of the FS-equation given by (8.24). We then obtain directly from (8.39) that A(n, R_σ) consists of all G × G-matrices (a^i_j)_{i,j∈G} satisfying the relations

a^{jv^{-1}i}_u σ^{-1}(ui^{-1}vj^{-1}, jv^{-1}) σ(vj^{-1}, jv^{-1}i) σ(jv^{-1}, v) = a^j_{ui^{-1}v} σ^{-1}(iu^{-1}, ui^{-1}) σ(iu^{-1}, u) σ(ui^{-1}, v)

for all i, j, u, v ∈ G. This algebra is separable if n is invertible in k.
We will now present some new classes of examples, starting from the solution R_φ of the FS-equation discussed in Proposition 150. In this case, we find easily that
x^{ij}_{uv} = δ_{uv} δ_{φ(i)u} δ_{φ(j)v}

and, according to (8.39), A(R_φ) consists of the matrices (a^i_j) satisfying

Σ_{α=1}^n a^α_u x^{ij}_{αv} = Σ_{α=1}^n x^{iα}_{uv} a^j_α

or

a^v_u δ_{φ(i)v} δ_{φ(j)v} = δ_{uv} δ_{φ(i)u} Σ_{α∈φ^{-1}(v)} a^j_α   (8.42)

for all i, j, u, v = 1, ..., n. The left hand side of (8.42) is nonzero if and only if φ(i) = φ(j) = v. If φ(i) = φ(j) = v = u, then (8.42) amounts to

a^{φ(i)}_{φ(i)} = Σ_{α | φ(α)=φ(i)} a^i_α.

If φ(i) = φ(j) = v ≠ u, then (8.42) takes the form a^{φ(i)}_u = 0. Now assume that the left hand side of (8.42) is zero. If φ(i) ≠ φ(j), then the right hand side of (8.42) is also zero, except when u = v = φ(i); in that case (8.42) yields

Σ_{α | φ(α)=φ(i)} a^j_α = 0.

If φ(i) = φ(j) ≠ v, then (8.42) reduces to 0 = 0. We summarize our results as follows.

Proposition 152. Consider an idempotent map φ : {1, ..., n} → {1, ..., n}, and the corresponding solution R_φ of the FS-equation. Then A(R_φ) is the subalgebra of M_n(k) consisting of all matrices (a^i_j) satisfying

a^{φ(i)}_{φ(i)} = Σ_{α | φ(α)=φ(i)} a^i_α   (i = 1, ..., n)   (8.43)

a^{φ(i)}_j = 0   (φ(i) ≠ j)   (8.44)

0 = Σ_{α | φ(α)=φ(i)} a^j_α   (φ(i) ≠ φ(j))   (8.45)

A(R_φ) is a separable k-algebra if and only if (R_φ, ε = trace) is a solution of the F-equation, if and only if φ is the identity map. In this case, A(R_φ) is the direct sum of n copies of k.

Proof. The first part was done above. A(R) is separable if and only if (8.15) holds. This comes down to
δ_{ju} = Σ_{v=1}^n δ_{uv} δ_{φ(u)u} δ_{φ(j)v} = δ_{φ(u)u} δ_{φ(j)u}
and this implies that φ(u) = u for all u. In this case (8.43)-(8.45) reduce to a^i_j = 0 for i ≠ j, and A(R_I) consists of all diagonal matrices. (R_φ, ε = trace) is a solution of the F-equation if and only if (8.16) holds, and a similar computation shows that this also implies that φ is the identity.

Examples 22. 1. Take n = 4, and φ given by

φ(1) = φ(2) = 2,   φ(3) = φ(4) = 4

Then (8.43)-(8.45) take the following form:

a^2_2 = a^1_1 + a^1_2,   a^4_4 = a^3_3 + a^3_4,
a^2_1 = a^2_3 = a^2_4 = 0,   a^4_1 = a^4_2 = a^4_3 = 0,
a^1_3 = −a^1_4,   a^3_1 = −a^3_2,

and
A(4, R_φ) = { ( x   y−x   u   −u
                0   y     0    0
                v   −v    z    t−z
                0   0     0    t )  |  x, y, z, t, u, v ∈ k }
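This description can be verified numerically. The following sketch is our own check, not part of the text: it builds R_φ from x^{ij}_{uv} = δ_{uv} δ_{φ(i)u} δ_{φ(j)v}, confirms that a random member of the six-parameter family satisfies the membership condition underlying (8.39), and confirms that the solution space has dimension exactly six.

```python
import numpy as np

np.random.seed(0)
n = 4
phi = {1: 2, 2: 2, 3: 4, 4: 4}  # the idempotent map of Example 1

# R_phi on M (x) M:  R(e_u (x) e_v) = sum_{i,j} x^{ij}_{uv} e_i (x) e_j,
# with x^{ij}_{uv} = delta_{uv} delta_{phi(i),u} delta_{phi(j),v}
R = np.zeros((n*n, n*n))
for i in range(1, n+1):
    for j in range(1, n+1):
        for u in range(1, n+1):
            for v in range(1, n+1):
                if u == v and phi[i] == u and phi[j] == v:
                    R[(i-1)*n + (j-1), (u-1)*n + (v-1)] = 1.0

def member(M):
    """Membership condition for A(R_phi): (M (x) I) R = R (I (x) M)."""
    I = np.eye(n)
    return np.allclose(np.kron(M, I) @ R, R @ np.kron(I, M))

# a random matrix from the six-parameter family above
x, y, z, t, u, v = np.random.rand(6)
M = np.array([[x, y - x, u,  -u   ],
              [0, y,     0,   0   ],
              [v, -v,    z,   t - z],
              [0, 0,     0,   t   ]])
assert member(M)

# the solution space of the linear condition has dimension exactly 6
rows = []
for k in range(n*n):
    E = np.zeros(n*n); E[k] = 1.0
    g = E.reshape(n, n)
    rows.append((np.kron(g, np.eye(n)) @ R - R @ np.kron(np.eye(n), g)).flatten())
dim = n*n - np.linalg.matrix_rank(np.array(rows).T)
assert dim == 6
```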
The dual coalgebra can also be described easily. Write x_i = c^i_i (i = 1, ..., 4), x_5 = c^1_3 and x_6 = c^3_1. Then C(R_φ) is the six dimensional coalgebra with basis {x_1, ..., x_6} and

∆(x_1) = x_1 ⊗ x_1 + x_5 ⊗ x_6,   ∆(x_2) = x_2 ⊗ x_2,
∆(x_3) = x_3 ⊗ x_3 + x_6 ⊗ x_5,   ∆(x_4) = x_4 ⊗ x_4,
∆(x_5) = x_1 ⊗ x_5 + x_5 ⊗ x_3,   ∆(x_6) = x_6 ⊗ x_1 + x_3 ⊗ x_6,
ε(x_i) = 1 (i = 1, 2, 3, 4),   ε(x_i) = 0 (i = 5, 6).

2. Again, take n = 4, but let φ be given by the formula

φ(1) = 1,   φ(2) = φ(3) = φ(4) = 2

Then (8.43)-(8.45) reduce to

a^2_2 = a^3_2 + a^3_3 + a^3_4 = a^4_2 + a^4_3 + a^4_4,
a^1_2 = a^1_3 = a^1_4 = a^2_1 = a^2_3 = a^2_4 = a^3_1 = a^4_1 = 0,

hence
A(4, R_φ) = { ( x   0   0       0
                0   y   0       0
                0   u   z       y−z−u
                0   v   y−t−v   t )  |  x, y, z, t, u, v ∈ k }
Putting x_i = c^i_i (i = 1, ..., 4), x_5 = c^3_2 and x_6 = c^4_2, we find that C(R_φ) is the six dimensional coalgebra with structure maps

∆(x_1) = x_1 ⊗ x_1,   ∆(x_2) = x_2 ⊗ x_2,
∆(x_3) = x_3 ⊗ x_3 + (x_2 − x_3 − x_5) ⊗ (x_2 − x_4 − x_6),
∆(x_4) = x_4 ⊗ x_4 + (x_2 − x_4 − x_6) ⊗ (x_2 − x_3 − x_5),
∆(x_5) = x_5 ⊗ x_2 + x_3 ⊗ x_5 + (x_2 − x_3 − x_5) ⊗ x_6,
∆(x_6) = x_6 ⊗ x_2 + x_4 ⊗ x_6 + (x_2 − x_4 − x_6) ⊗ x_5,
ε(x_i) = 1 (i = 1, 2, 3, 4),   ε(x_i) = 0 (i = 5, 6).
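Coassociativity of these structure maps can be machine-checked. The sketch below is our own verification, not in the text: it encodes ∆ as a matrix on the basis {x_1, ..., x_6} and confirms (∆ ⊗ I)∆ = (I ⊗ ∆)∆ together with the counit axioms.

```python
import numpy as np

d = 6  # dimension of the coalgebra C(R_phi) of Example 2
Delta = np.zeros((d*d, d))  # column i-1 holds Delta(x_i) in the basis x_p (x) x_q

def x(i):
    return [(1.0, i)]

def tensor(u, v):
    # expand (sum c_u x_p) (x) (sum c_v x_q) by bilinearity
    return [(cu * cv, p, q) for cu, p in u for cv, q in v]

def set_delta(i, terms):
    for c, p, q in terms:
        Delta[(p - 1) * d + (q - 1), i - 1] += c

m235 = [(1.0, 2), (-1.0, 3), (-1.0, 5)]  # x2 - x3 - x5
m246 = [(1.0, 2), (-1.0, 4), (-1.0, 6)]  # x2 - x4 - x6

set_delta(1, tensor(x(1), x(1)))
set_delta(2, tensor(x(2), x(2)))
set_delta(3, tensor(x(3), x(3)) + tensor(m235, m246))
set_delta(4, tensor(x(4), x(4)) + tensor(m246, m235))
set_delta(5, tensor(x(5), x(2)) + tensor(x(3), x(5)) + tensor(m235, x(6)))
set_delta(6, tensor(x(6), x(2)) + tensor(x(4), x(6)) + tensor(m246, x(5)))

eps = np.array([1.0, 1, 1, 1, 0, 0])  # counit
I = np.eye(d)

# (Delta (x) I) Delta = (I (x) Delta) Delta  as maps C -> C (x) C (x) C
coassociative = np.allclose(np.kron(Delta, I) @ Delta, np.kron(I, Delta) @ Delta)
# counit axioms: (eps (x) I) Delta = id = (I (x) eps) Delta
counit_left = np.allclose(np.kron(eps, I) @ Delta, I)
counit_right = np.allclose(np.kron(I, eps) @ Delta, I)
assert coassociative and counit_left and counit_right
```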
8.4 The category of FS-objects

We have seen in Corollary 41 that the equation R12 R23 = R13 R12 is equivalent to the fact that a certain multiplication on M ⊗ M is associative. We will now prove that the other equation, namely R12 R23 = R23 R13, is equivalent to the fact that a certain comultiplication is coassociative.

Proposition 153. Let (A, m_A, 1_A) be an algebra, R = R^1 ⊗ R^2 ∈ A ⊗ A and

δ : A → A ⊗ A,   δ(a) = R^1 ⊗ R^2 a

for all a ∈ A. The following statements are equivalent:
1. (A, δ) is a coassociative coalgebra (not necessarily with counit);
2. R12 R23 = R23 R13 in A ⊗ A ⊗ A.
In this case any left A-module (M, ·) has a structure of left comodule over the coalgebra (A, δ) via

ρ : M → A ⊗ M,   ρ(m) = R^1 ⊗ R^2 · m

for all m ∈ M.

Proof. The equivalence of 1. and 2. follows from the formulas

(δ ⊗ I)δ(a) = R12 R23 · (1_A ⊗ 1_A ⊗ a)
(I ⊗ δ)δ(a) = R23 R13 · (1_A ⊗ 1_A ⊗ a)

for all a ∈ A. The final statement follows from
(δ ⊗ I)ρ(m) = R12 R23 · (1_A ⊗ 1_A ⊗ m)
(I ⊗ ρ)ρ(m) = R23 R13 · (1_A ⊗ 1_A ⊗ m)

for all m ∈ M.

Suppose now that (A, m_A, 1_A) is an algebra over k and let R ∈ A ⊗ A be an A-central element. Then R12 R23 = R23 R13 in A ⊗ A ⊗ A. It follows that (A, ∆_R = δ^cop) is also a coassociative coalgebra, where ∆_R(a) = δ^cop(a) = R^2 a ⊗ R^1, for all a ∈ A. We remark that ∆_R is not an algebra map, i.e. (A, m_A, ∆_R) is not a bialgebra. Any left A-module (M, ·) has a structure of right comodule over the coalgebra (A, ∆_R) via

ρ_R : M → M ⊗ A,   ρ_R(m) = R^2 · m ⊗ R^1

for all m ∈ M. Moreover, for any a ∈ A and m ∈ M we have

ρ_R(a · m) = a_(1) · m ⊗ a_(2) = m_[0] ⊗ am_[1].

Indeed, from the definition of the coaction on M and the comultiplication on A, we immediately have

ρ_R(a · m) = R^2 a · m ⊗ R^1 = a_(1) · m ⊗ a_(2)

On the other hand,

m_[0] ⊗ am_[1] = R^2 · m ⊗ aR^1 = R^2 a · m ⊗ R^1

where in the last equality we used that R is an A-central element. These considerations lead us to the following definition.

Definition 20. Let (A, m_A, ∆_A) be at once an algebra and a coalgebra (but not necessarily a bialgebra). An FS-object over A is a k-module M that is at once a left A-module and a right A-comodule such that

ρ(a · m) = a_(1) · m ⊗ a_(2) = m_[0] ⊗ am_[1]
(8.46)
for all a ∈ A and m ∈ M. The category of FS-objects and A-linear, A-colinear maps will be denoted by _AFS^A. This category measures how far A is from being a bialgebra. If A does not have a unit (resp. a counit), then the objects in _AFS^A will be assumed to be unital (resp. counital).

Proposition 154. If (A, m_A, 1_A, ∆_A, ε_A) is a bialgebra with unit and counit, then the forgetful functor F : _AFS^A → _kM is an isomorphism of categories.
Proof. Define G : _kM → _AFS^A as follows: G(M) = M as a k-module, with trivial A-action and A-coaction: ρ(m) = m ⊗ 1_A and a · m = ε_A(a)m, for all a ∈ A and m ∈ M. It is clear that G(M) ∈ _AFS^A. Now assume that M is an FS-object over A. Applying I ⊗ ε_A to (8.46), we find that

a · m = m_[0] ε_A(a)ε_A(m_[1]) = ε_A(a)m

Taking a = 1_A in (8.46), we find ρ(m) = 1_A · m ⊗ 1_A = m ⊗ 1_A. Hence G is an inverse for the forgetful functor F : _AFS^A → _kM.

Definition 21. A triple (A, m_A, ∆_A) is called a weak Frobenius algebra (WF-algebra, for short) if (A, m_A) is an algebra (not necessarily with unit), (A, ∆_A) is a coalgebra (not necessarily with counit) and (A, m_A, ∆_A) ∈ _AFS^A, that is,

∆(ab) = a_(1) b ⊗ a_(2) = b_(1) ⊗ ab_(2)   (8.47)

for all a, b ∈ A.

Remarks 25. 1. Assume that A is a WF-algebra with unit, and write ∆(1_A) = e^2 ⊗ e^1. From (8.47), it follows that

∆(a) = e^2 a ⊗ e^1 = e^2 ⊗ ae^1   (8.48)

and this implies that ∆^cop(1_A) = e^1 ⊗ e^2 is an A-central element. Conversely, if A is an algebra with unit, and e = e^1 ⊗ e^2 is an A-central element, then e12 e23 = e23 e13 (see (8.3)), and it is easy to prove that this last statement is equivalent to the coassociativity of ∆ : A → A ⊗ A given by ∆(a) = e^2 a ⊗ e^1. Thus A is a WF-algebra. We have proved that WF-algebras with unit correspond to algebras with unit together with an A-central element.
2. From (8.47), it follows immediately that f := ∆^cop : A → A ⊗ A is an A-bimodule map. Conversely, if f is an A-bimodule map, then it is easy to prove that ∆ = τ ◦ f defines a coassociative comultiplication on A, making A into a WF-algebra. Now, using Corollary 40, we obtain that a finitely generated projective and unitary k-algebra (A, m_A, 1_A) is Frobenius if and only if A is a unitary and counitary WF-algebra. Thus we can view WF-algebras as a generalization of Frobenius algebras.

Proposition 155. Let (A, m_A, 1_A, ∆_A) be a WF-algebra with unit. Then the forgetful functor F : _AFS^A → _AM is an isomorphism of categories.
Proof. We define a functor G : _AM → _AFS^A as follows: G(M) = M as an A-module, with right A-coaction given by the formula

ρ(m) = e^2 · m ⊗ e^1

where ∆(1_A) = e^2 ⊗ e^1. ρ is a coaction because e12 e23 = e23 e13, and, using (8.48), we see that

ρ(a · m) = e^2 a · m ⊗ e^1 = a_(1) · m ⊗ a_(2) = e^2 · m ⊗ ae^1 = m_[0] ⊗ am_[1]

as needed. Now G and F are each other's inverses.

We will now give the coalgebra version of Proposition 155. Consider a WF-algebra (A, m_A, ∆_A, ε_A) with a counit ε_A, and consider σ = ε ◦ m_A ◦ τ : A ⊗ A → k, that is, σ(c ⊗ d) = ε(dc) for all c, d ∈ A. Now

∆(cd) = c_(1) d ⊗ c_(2) = d_(1) ⊗ cd_(2)

so

σ(d ⊗ c_(1))c_(2) = (ε ⊗ I_A)(∆(cd)) = (I_A ⊗ ε)(∆(cd)) = σ(d_(2) ⊗ c)d_(1)

and σ is an FS-map. Conversely, let (A, ∆_A) be a coalgebra with counit, and assume that σ : A ⊗ A → k is an FS-map. A straightforward computation shows that the formula

c · d = σ(d_(2) ⊗ c)d_(1)

defines an associative multiplication on A and that (A, ·, ∆_A) is a WF-algebra. Thus WF-algebras with counit correspond to coalgebras with counit together with an FS-map.

Proposition 156. Let (A, m_A, ∆_A, ε_A) be a WF-algebra with counit. Then the forgetful functor F : _AFS^A → M^A is an isomorphism of categories.

Proof. The inverse of F is the functor G : M^A → _AFS^A defined as follows: G(M) = M as an A-comodule, with A-action given by

a · m = σ(m_[1] ⊗ a)m_[0]

for all a ∈ A, m ∈ M. Further details are left to the reader.
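The correspondence of Remarks 25.1 between WF-algebras with unit and A-central elements can be illustrated concretely. The sketch below is our own example, not from the text: for A = M_2(k) with the A-central element e = Σ_i E_{i1} ⊗ E_{1i}, it checks A-centrality, the separability normalization e^1 e^2 = 1_A, and the identity e12 e23 = e23 e13 inside A ⊗ A ⊗ A, represented via Kronecker products.

```python
import numpy as np

def E(i, j, n=2):
    """Matrix unit E_{ij} (0-based indices) in M_n(k)."""
    M = np.zeros((n, n)); M[i, j] = 1.0
    return M

# e = sum_i E_{i1} (x) E_{1i}, an A-central element of A (x) A for A = M_2(k)
pairs = [(E(0, 0), E(0, 0)), (E(1, 0), E(0, 1))]

# separability normalization: e^1 e^2 = 1_A
assert np.allclose(sum(a @ b for a, b in pairs), np.eye(2))

# A-centrality: a e^1 (x) e^2 = e^1 (x) e^2 a, checked on a basis of A
for i in range(2):
    for j in range(2):
        a = E(i, j)
        lhs = sum(np.kron(a @ u, v) for u, v in pairs)
        rhs = sum(np.kron(u, v @ a) for u, v in pairs)
        assert np.allclose(lhs, rhs)

# e12 e23 = e23 e13 in A (x) A (x) A, represented in M_8 = M_2 (x) M_2 (x) M_2
I = np.eye(2)
e12 = sum(np.kron(np.kron(u, v), I) for u, v in pairs)
e23 = sum(np.kron(I, np.kron(u, v)) for u, v in pairs)
e13 = sum(np.kron(np.kron(u, I), v) for u, v in pairs)
assert np.allclose(e12 @ e23, e23 @ e13)
```

By Proposition 153, the last identity is exactly what makes ∆(a) = e^2 a ⊗ e^1 coassociative, so this e endows M_2(k) with a WF-algebra structure.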
As an immediate consequence of Proposition 155 and Proposition 156, we obtain the following generalization of Abrams' result [4, Theorem 3.3].

Corollary 44. Let (A, m_A, 1_A, ∆_A, ε_A) be a WF-algebra with unit and counit. Then we have an equivalence of categories

_AFS^A ≅ _AM ≅ M^A

Let us finally show that Proposition 155 also holds over WF-algebras that are unital as modules over themselves.

Proposition 157. Let A be a WF-algebra that is unital as a module over itself. We have an equivalence between the categories _AM and _AFS^A.

Proof. For a unital A-module M, we define F(M) as the A-module M with A-coaction given by

ρ(a · m) = a_(1) · m ⊗ a_(2)

It is clear that ρ defines an A-coaction. One equality in (8.46) is obvious, and the other one follows from (8.47): for all a, b ∈ A and m ∈ M, we have

(b · m)_[0] ⊗ a(b · m)_[1] = b_(1) · m ⊗ ab_(2) = (ab)_(1) · m ⊗ (ab)_(2) = ρ(a · (b · m))

It follows that F(M) is an FS-object, and F defines the desired category equivalence.
Index

A-central 317
adjoint pair 25
algebroid 137
alternative Doi-Koppinen structure 42
antipode 6
augmentation map 4
bialgebra 6
bijective 1-cocycle 241
braid equation 219
braiding operator 240
Brauer-Long group 193
canonical coring 204
canonical element 278
Casimir element 30
characteristic element 105, 319
classical Yang-Baxter operator 221
coalgebra 4
coalgebra Galois extension 211
coalgebra over a comonad 212
coassociative 4
cocommutative coalgebra 4
coinvariants 12
comodule algebra 21
comodule coalgebra 21
comonad 211
comparison functor 204
compatible entwining structures 198
compatible pair of actions 239
comultiplication 4
comultiplicative map 5
conjugate braiding 241
convolution 5
coquasitriangular bialgebra 229
coring 71
coseparable coalgebra 136, 318
counit 4
crossed G-module 232
crossed module 182
crossed product 325
derivation 28
descent datum 205
diagonal map 4
distinguished element 116
DK structure 41
Doi-Hopf modules 49
Doi-Koppinen Hopf modules 49
Doi-Koppinen structure 41
Drinfeld double 186
effective descent morphism 204
entwined module 48
entwining structure 39
factorization structure 50
Frobenius algebra 32, 318
Frobenius bimodule 117
Frobenius functor 89
Frobenius pair 34, 89, 101
Frobenius pair of the second kind 91
Frobenius-separability equation 320
FS-map 327
FS-object 340
fusion equation 245
Galois type coring 209
Godement product 22
Harrison cocycle 293
Heisenberg double 165, 279
Hochschild cohomology 29
Hopf algebra 6
Hopf element 266
Hopf equation 245
Hopf function 260
Hopf-Galois extension 169
integral 109, 179
Knizhnik-Zamolodchikov equation 314
Koppinen smash 57
Long coalgebra 311
Long dimodule 48, 193, 304
Long equation 301
Long map 311
Maschke functor 97
module algebra 8
module coalgebra 8
monoidal entwining structure 81
naturally faithful functor 92
paratrophic matrix 33
pentagon equation 245
pentagon objects 278
pure morphism 208
quantum determinant 238
quantum Yang-Baxter equation 219
quasitriangular bialgebra 225
rack 239
relative Hopf module 49
relative injective object 96
relative projective object 96
right twisted smash coproduct 79
separability idempotent 30, 96, 317
separable algebra 29, 317
separable bimodule 117
separable functor 91
smash coproduct structure 59
smash product 50
smash product structure 50
symmetric algebra 318
T-generator 146
3-cocycle equation 246
total integral 159, 179
transfer map 90
triangular Hopf algebra 226
twist 224
twist equation 225, 293
two-sided entwined module 68
unified Hopf modules 49
unimodular Hopf algebra 116
weak Frobenius algebra 341
Yetter-Drinfeld module 48, 181
Yetter-Drinfeld structure 181