Making European Merger Policy More Predictable
by
Stefan Voigt University of Kassel, Germany and
André Schmidt University of Göttingen, Germany
A C.I.P. Catalogue record for this book is available from the Library of Congress.
ISBN 1-4020-3089-4 (HB) ISBN 1-4020-3090-8 (e-book)
Published by Springer, P.O. Box 17, 3300 AA Dordrecht, The Netherlands. Sold and distributed in North, Central and South America by Springer, 101 Philip Drive, Norwell, MA 02061, U.S.A. In all other countries, sold and distributed by Springer, P.O. Box 322, 3300 AH Dordrecht, The Netherlands.
Printed on acid-free paper
All Rights Reserved © 2005 Springer No part of this work may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, microfilming, recording or otherwise, without written permission from the Publisher, with the exception of any material supplied specifically for the purpose of being entered and executed on a computer system, for exclusive use by the purchaser of the work. Printed in the Netherlands.
TABLE OF CONTENTS

Preface ... xi

CHAPTER I: PREDICTABILITY AS A CRUCIAL CONDITION FOR ECONOMIC GROWTH AND DEVELOPMENT ... 1
1. Introductory Remarks ... 1
2. Some Theoretical Considerations Concerning Predictability ... 1
3. Some Empirical Results Concerning Predictability ... 3
4. The Predictability of European Merger Policy ... 5
   4.1. A Survey Amongst Large European Firms ... 8
      4.1.1. Before Notification ... 9
      4.1.2. After Notification ... 10
5. Proposals for Improving Predictability ... 11

CHAPTER II: DEVELOPMENTS IN COMPETITION THEORY ... 13
1. Introductory Remarks ... 13
2. The Harvard Approach ... 13
   2.1. Main Points ... 14
   2.2. Policy Implications ... 16
   2.3. Critique ... 16
3. The Chicago Approach ... 18
   3.1. Main Points ... 18
   3.2. Policy Implications ... 20
   3.3. Critique ... 21
4. Contestability Theory ... 22
   4.1. Main Points ... 22
   4.2. Policy Implications ... 24
   4.3. Critique ... 24
5. The Contribution of Game Theory: The New Industrial Organisation ... 24
   5.1. Game Components ... 25
   5.2. Advantages of Using Game Theory in Competition Theory ... 25
   5.3. Critique Concerning the Use of Game Theory in Competition Theory ... 27
6. The Contribution of the New Institutional Economics: Transaction Cost Economics ... 28
   6.1. Transactions and Transaction Costs ... 29
   6.2. Assumptions of Transaction Cost Economics ... 30
   6.3. Policy Implications ... 32
7. In Lieu of a Summary: Consensus and Dissensus Between the Various Approaches ... 34

CHAPTER III: TRENDS IN THE BUSINESS ENVIRONMENT ... 39
1. Liberalisation as a Driving Force of Globalisation ... 40
   1.1. General Trends ... 40
      1.1.1. Liberalisation within nation-states ... 41
      1.1.2. Liberalisation by regional integration ... 42
      1.1.3. Liberalisation on a worldwide scale ... 42
   1.2. Sector-Specific Liberalisation ... 43
      1.2.1. Liberalisation of goods markets ... 43
      1.2.2. Liberalisation of capital markets ... 44
      1.2.3. Facilitation of Foreign Direct Investment ... 44
      1.2.4. Liberalisation of service markets ... 45
2. Economic and Technological Factors ... 45
   2.1. Rapid Technological Change ... 45
   2.2. Increasing Mobility of Supply ... 46
   2.3. Developments in Transport Costs ... 46
   2.4. The Internet ... 47
   2.5. Homogenisation of Preferences ... 49
   2.6. Rapid Change of Consumption Patterns ... 50
3. Conclusions ... 50

CHAPTER IV: POSSIBLE CONSEQUENCES OF TRENDS IN THEORY (B) AND DEVELOPMENTS IN BUSINESS (C) FOR COMPETITION POLICY ... 53
1. Introduction ... 53
2. From Market Definition to Assessing Dominance ... 54
   2.1. The Standard Approach ... 54
      2.1.1. The relevant product market ... 55
      2.1.2. The relevant geographic market ... 56
      2.1.3. Defining relevant markets in practice: the hypothetical monopolist test ... 57
      2.1.4. Predicting the post-merger structure ... 57
      2.1.5. Assessing Single Dominance ... 58
   2.2. Consequences of Recent Theoretical Developments ... 60
   2.3. Consequences of Recent Trends in Business Environment ... 61
   2.4. Current EU Practice ... 63
      2.4.1. The Relevant Product Market ... 63
      2.4.2. The Relevant Geographic Market ... 66
      2.4.3. Assessing Dominance ... 68
   2.5. Proposals Towards Enhancing Predictability ... 71
      2.5.1. Simple Tools ... 72
         Delineate Relevant Market Taking Both Demand and Supply Side Into Account ... 72
         Reliance on Quantitative Methods to Delineate the Relevant Market ... 73
         Reliance on Quantitative Methods to Assess Dominance ... 75
         Some Critical Remarks Concerning Quantitative Techniques ... 76
         Assessing the Importance of Customer Loyalty for Delineating the Relevant Market More Systematically ... 77
      2.5.2. Improvements Due to Theoretical Developments ... 79
         Take Efficiencies Explicitly into Consideration ... 79
         Assess Importance of Asset Specificity ... 84
         Assess Importance of Uncertainty ... 85
         Assess Importance of Frequency ... 85
      2.5.3. Improvements Due to Trends in the Business Environment ... 86
         Taking the time dimension adequately into account ... 86
         Taking the geographic dimension adequately into account ... 87
3. A Closer Look at Barriers to Entry and Contestability ... 87
   3.1. The Standard Approach ... 87
      State-Mandated Barriers to Entry ... 89
      Structural Barriers to Entry ... 89
      Strategic Barriers to Entry ... 90
   3.2. Consequences of Recent Theoretical Developments ... 91
   3.3. Consequences of Recent Trends in the Business Environment ... 93
   3.4. Current EU Practice ... 94
   3.5. Proposals Towards Enhancing Predictability ... 97
4. Assessing Collective Dominance ... 100
   4.1. Standard Approach ... 102
   4.2. Recent Theoretical Developments ... 104
   4.3. Recent Trends in the Business Environment ... 108
   4.4. Current EU Practice ... 110
   4.5. Reform Proposals ... 113

CHAPTER V: CASE STUDIES ... 119
1. Assessment of Barriers to Entry in European Merger Control: The Cases of Volvo/Scania, Mercedes-Benz/Kässbohrer, and MAN/Auwärter ... 120
   1.1. Barriers to Entry in the Case of MERCEDES-BENZ/KÄSSBOHRER ... 120
   1.2. Barriers to Entry in the Case of VOLVO/SCANIA ... 122
      1.2.1. The Market for Heavy Trucks ... 122
      1.2.2. The Market for Buses ... 123
   1.3. MAN/AUWÄRTER ... 124
   1.4. Economic Assessment ... 125
2. Assessment of Barriers to Entry in European Merger Control: The Cases of SCA/Metsä Tissue and SCA Hygiene Products/Cartoinvest ... 129
   2.1. Barriers to Entry in SCA/METSÄ TISSUE ... 129
   2.2. Barriers to Entry in SCA HYGIENE PRODUCTS/CARTOINVEST ... 131
   2.3. Comparison of the Decisions ... 132
   2.4. Economic Assessment ... 132
3. Assessment of Barriers to Entry in European Merger Control: The Case of BASF/Bayer/Hoechst/Dystar ... 136
   3.1. Barriers to Entry in the Case of BASF/BAYER/HOECHST/DYSTAR ... 137
   3.2. Economic Assessment ... 138
4. The Assessment of Barriers to Entry in European Merger Control: The Markets for Telecommunications in the Case of Telia/Telenor ... 139
   4.1. The Special Conditions for Entry in the Markets for Telecommunications ... 140
   4.2. The Case of TELIA/TELENOR ... 142
   4.3. Assessment of the Decision ... 144
5. Collective Dominance under the European Merger Regulation ... 146
   5.1. NESTLÉ/PERRIER (1992) ... 146
   5.2. KALI&SALZ/MDK/TREUHAND (1993) ... 149
   5.3. GENCOR/LONRHO (1996) ... 150
   5.4. EXXON/MOBIL (1999) ... 152
   5.5. AIRTOURS/FIRST CHOICE (1999) ... 154
   5.6. UPM-KYMMENE/HAINDL and NORSKE SKOG/PARENCO/WALSUM (2001) ... 157

CHAPTER VI: PRACTICAL PROPOSALS ... 161
1. Introductory Remarks ... 161
2. Overview of Substantive Proposals as Developed in Chapter IV ... 161
3. Procedural Proposals ... 166
4. Conclusions and Outlook ... 175

Appendix ... 177
Endnotes ... 181
References ... 185
Index ... 191
PREFACE
More than a decade after the European Union first introduced a common merger policy, this study evaluates the quality of that policy from a specific point of view, namely its predictability. It is argued that predictability is crucial for a welfare-enhancing merger policy and that European Merger Policy could be substantially improved on that score. European Merger Policy has already been subject to a number of reforms. A revised Merger Regulation came into effect in May 2004, which included a change in the substantive criterion to be used in order to assess whether a notified merger is compatible with the common market. Guidelines on horizontal mergers have also been introduced for the first time in European merger control. Additionally, after the Court of First Instance decided against the Commission in a number of widely publicised cases, a number of procedural changes have taken place within the European Commission’s Directorate General for Competition, such as the appointment of a chief economist and the requirement for proposed merger decisions to be subjected to peer review. Although it is too early to evaluate the quality of these recent reforms based on experience, all of them are critically evaluated on theoretical grounds in this study. But we do not stop there. We believe that the reforms did not go far enough and thus develop a number of substantive as well as procedural proposals for how European merger policy could be further improved. In a sense, we try to open the debate for yet another round of reforms. This study was sponsored by the European Round Table of Industrialists (ERT), a forum of some 45 leaders of major industrial companies whose headquarters are in Europe.
ERT Members were concerned that the goals agreed upon by the European Council in Lisbon in March 2000, which aim at making the European Union “the most competitive and dynamic knowledge-based economy in the world” by 2010, called for EU competition policy to better reflect economic and market realities and to become more predictable and accountable. This led ERT Members to commission an academic study on the importance of predictability for successful competition policy. This study would not have been possible without the active support of the members of the ERT Competition Policy Working Group, chaired by Alain Joly. As economists from academia, we greatly benefited from the opportunity to discuss our hypotheses and preliminary results with the members of the ERT Working Group, all business-based competition specialists who have been involved in merger cases with the Commission. Our various discussions with the Head and members of the Commission’s now defunct Merger Task Force proved equally useful. Special thanks to Claes Bengtsson, who read the entire manuscript and made a number of critical remarks. Others who made very valuable comments include Ulrich Immenga, Manfred E. Streit, and Ingo Schmidt. Last but not least, two of our publisher’s anonymous referees helped with additional suggestions. Stefan Voigt and André Schmidt
Kassel and Göttingen, July 2004
CHAPTER I

PREDICTABILITY AS A CRUCIAL CONDITION FOR ECONOMIC GROWTH AND DEVELOPMENT
1. INTRODUCTORY REMARKS

The title of this study puts the predictability of European Merger Policy at centre stage. This chapter serves to motivate and explain the outstanding importance attached to predictability. In the next section, the crucial importance of predictability for economic growth and development will be spelt out on an abstract and general level, as well as with regard to competition policy and, more focused still, with regard to merger policy. Section three contains a brief summary of some of the empirical evidence dealing with the consequences of predictability – or its absence. In section four, the current degree of predictability in European Merger Policy is operationalised. Predictability in European Merger Policy was also the issue of a survey conducted among large European firms; their perception of predictability is reported there as well. Section five contains a number of institutional proposals whose implementation could help increase the predictability of European Merger Policy – and hence its overall quality.

2. SOME THEORETICAL CONSIDERATIONS CONCERNING PREDICTABILITY

Predictability can be defined as the capacity to make predictions concerning the actions of others that have a high chance of turning out to be correct. In economics, the term uncertainty is used much more frequently than the term predictability. The two terms are closely intertwined: the absence of predictability can also be described as the presence of uncertainty. As markets become increasingly turbulent, the predictability of the framework established and implemented by politics has become even more crucial. Merger policy can, however, also increase uncertainty if competition authorities are granted too much discretionary power and if they draw on various economic theories eclectically. It has been shown empirically that the stability of the framework within which companies act is decisive for economic growth and development.
This includes, of course, the predictability of competition policy. A market economy is best described as a system of decentralised exchange, often by millions of actors who are free to set their own goals. Ideally, exchange based on voluntarily entered-into contracts indicates that the parties involved expect to be better off as a result of the contract; otherwise they would not have consented to it in
the first instance. But some contracts that make the participating parties better off might have welfare-reducing effects on other, third parties. From an economic point of view, the function of legislation in a market economy consists in (a) making welfare-enhancing exchange as easy as possible, and (b) making welfare-reducing exchange as difficult as possible. Antitrust or competition rules – two terms that will be used interchangeably – play an important role with regard to these two functions. Since market economies are often made up of millions of autonomous decision-makers, uncertainty is a constitutional condition of such systems. We cannot predict with any certainty the actions of millions of other actors. Uncertainty can, however, have detrimental effects on economic growth and development: it is connected with a short time-horizon, which reduces the propensity to invest as well as the readiness to specialise and to contribute to a welfare-enhancing division of labour. One function of legislation thus consists in reducing the degree of uncertainty. It has been argued (Kant 1797/1995; Hayek 1960, 1973) that the best way in which the state can contribute to the reduction of uncertainty consists in passing universalisable rules. These are rules that are (1) general, i.e., they apply to a multitude of cases in which a multitude of actors will be engaged; (2) abstract, i.e., they are formulated negatively and thus do not prescribe a certain behaviour but simply prohibit a finite number of actions; and (3) certain, i.e., interested individuals can know whether a certain action is within the legal domain or not. Universalisable rules do not eliminate uncertainty completely because they do not prescribe what actors must do. The dynamics of market economies depend on the possibility for individuals to act innovatively, and innovations will by definition be unexpected by many.
Universalisable legislation is a necessary but not a sufficient condition for a high degree of predictability. A high degree of predictability will only result if newly passed legislation is implemented by an administration in a manner that allows interested individuals to anticipate the administration’s decisions; if courts are involved, then the same applies, of course, to the courts’ adjudication. This was already recognised by James Madison (1788/1961), who wrote in the 37th of the Federalist Papers: “All new laws, though penned with the greatest technical skill and passed on the fullest and most mature deliberation, are considered as more or less obscure and equivocal, until their meaning be liquidated and ascertained by a series of particular discussions and adjudications.” Formulated differently: a system of rules whose interpretation is perceived as unsystematic or erratic is almost as bad as not having a rule system in the first place; hence the particular importance of case law. Lack of predictability in competition policy can also be detrimental to the realisation of welfare gains. If market actors are uncertain as to what competition rules mean or how they will be interpreted by the competition authorities and/or the courts, a number of negative welfare effects can result: if market actors believe that a merger will not be cleared, mergers whose consummation would allow the realisation of efficiency gains will not take place. This means that productive efficiency is lower than it could be were the merger carried out. On the other hand, if market actors believe that a proposed merger will be cleared, they might already invest in the entity to be created as the result of the merger. Prohibition of a merger can, then, be
interpreted as a case of disappointed expectations – or, put differently, of unpredictability of competition policy as currently implemented. Decisions on single cases have effects that often go far beyond the single case: they are observed by a multitude of other actors and thus become precedent. Precedent that signals a tough stance on proposed mergers can also be a barrier to the realisation of welfare gains: potentially welfare-enhancing mergers might be discouraged by precedent right from the outset. As just pointed out, this will prevent the realisation of higher levels of productive efficiency. Predictability allows market participants to make predictions concerning the decisions of competition authorities that have a good chance of turning out to be correct. The existence of precedent as such is not sufficient to secure a high level of predictability. Rather, the criteria used by the competition authorities should be as clear-cut as possible. Additionally, the analytical tools used to translate criteria into operational data should be as transparent as possible. Legal certainty also has a time dimension. For entrepreneurial action in general and for mergers in particular, timing is often crucial. It is therefore not only important to get the right decision but also to get it in time. In case the merging firms believe that a competition agency has misinterpreted the relevant legislation, the availability of legal remedies at short notice can thus be very important. Before inquiring to what degree these essentials of predictability are realised in European merger policy, let us have a quick look at some empirical evidence concerning the role of predictability.

3. SOME EMPIRICAL RESULTS CONCERNING PREDICTABILITY

Let us now quickly turn to some empirical studies that can be interpreted as evidence for the crucial importance of predictability.
As an example of the importance that should be attached to the issue of predictability for the general functioning of market economies, we cite a cross-country study by Brunetti et al. (1997). Empirical research concerning the effects of merger policy is very rare. Most of the existing studies deal with the effects of US antitrust policy. We will refer to two of these studies, Shughart and Tollison (1991) and Bittlingmayer (2001). Empirical evidence concerning the economic consequences of European merger policy is virtually non-existent and remains a desideratum. Over the last few years, an entire cottage industry dealing with the economic effects of formal institutions such as constitutions and legislation has developed. Rather than providing an overview of that literature, we confine ourselves to the presentation of one such study here (a short survey can be found in chapter five of Voigt 2002). It has been selected because it explicitly excludes the material content of rules and concentrates exclusively on the perceived credibility of rules. In a study commissioned by the World Bank, Brunetti, Kisunko, and Weder (1997) use what could be called “subjective uncertainty indicators”. They asked local entrepreneurs in some five dozen countries whether they believed the announcements of their governments to be credible. If the locals do not believe their government, then a government announcing secure private property rights, low tax
rates, etc., will not induce investment. There is thus a very close relationship between credibility and predictability. Notice that neither concept focuses on the content of the rules but just on the formal issue of whether a policy announcement (in the form of legislation or the like) is believed to be implemented. Brunetti et al. (1997) surveyed more than 2,500 entrepreneurs in less developed countries and approximately 200 entrepreneurs in OECD member states. The partial indicators “security of persons and property rights” and “predictability of legislation” are most closely correlated with economic growth in this sample of 58 countries. The partial indicators “corruption”, “perceived political instability” and “predictability of judicial decisions” closely correlate with investment rates. Let us now turn to the empirical studies interested in the economic effects of antitrust policy. Shughart and Tollison (1991) are interested in ascertaining the employment effects of the Sherman and Clayton Acts, i.e., those two laws that are commonly believed to be the core of US antitrust policy. They conjecture that an unanticipated strengthening of antitrust enforcement, i.e., low predictability of the actions of antitrust agencies, can force firms to adopt inefficient production technologies. As firms are forced to lower their current and planned production quantities, they might have to lay off some of their employees as a consequence. Suppose enforcement of antitrust legislation is unexpectedly strengthened in a particular industry. Mergers might be prevented and divestitures ordered. Suppose this is accompanied by unchanged production technology, which would mean that from a production-oriented point of view, the optimal size of firms in the industry has remained unchanged. Yet the optimal size of firms in that specific industry is reduced due to the increased enforcement effort.
Shughart and Tollison argue that this will lead firms to lower their current and planned production as well as to lay off some of their employees. If other industries with similar structures observe this, unemployment may also increase there. For the period between 1947 and 1981 (the starting point was chosen because 1947 is the first year for which extensive data concerning the U.S. labour market are available), they find a positive and significant relationship between unanticipated increases in antitrust enforcement activities and the unemployment rate. This study can thus be read as empirical evidence supporting the claim that the absence of predictability – here called unanticipated increases in antitrust enforcement activities – can have real economic costs, namely an unemployment rate that is higher than if merger policy were predictable. Bittlingmayer (2001) is interested in the determinants of investment levels. In empirical studies seeking to identify the factors determining investment, only economic variables such as interest rates or the prices of capital goods have traditionally been taken into consideration. Bittlingmayer argues that these regressions always leave a fair amount of variation in investment unexplained and suspects that political uncertainty could be one such factor. He therefore needs a proxy for political uncertainty and chooses antitrust enforcement as that proxy. Based on data for the period between 1947 and 1991, he comes to the conclusion (ibid., 321) that “statistically, each extra case [of antitrust enforcement] is thus associated with a total decline in investment of about $1.7 billion.” It is worth noting that the major adverse effects on investment do not arise within the industry of the firms against which a
suit has been brought. Using an industry classification made up of 21 industries, an extra case filed is associated with a decline in investment in each of these 21 industries, where the decreases are between $34 million and $110 million. It can thus be concluded that unpredictable merger policy can have real costs in terms of increased unemployment and decreased investment levels. We now turn to the question whether European Merger Policy is predictable.

4. THE PREDICTABILITY OF EUROPEAN MERGER POLICY

Predictability has been defined as the capacity to make predictions concerning the actions of others that have a high chance of turning out to be correct. With regard to European merger policy, predictability can thus be interpreted as the capacity of a firm willing to merge with another firm to predict the likely reaction of the Commission of the European Community to the notification of a proposed merger. It seems reasonable to assume that no firm handing in a notification enjoys being prohibited from executing a proposed merger, since prohibitions entail considerable costs that would otherwise not accrue. If the predictability of European merger policy were perfect, this should thus lead to the absence of any prohibitions, because prohibitions are costly. Taking Madison’s argument quoted above explicitly into account, one should expect predictability to increase over time as uncertainty regarding the interpretation of legislation by the administration and the courts decreases case by case. Ideally, the relationship between the time passed since new legislation was enacted and the number of prohibitions should thus look like the following graph:

Figure 1: Idealised Development of Prohibitions over Time (prohibitions on the vertical axis, time on the horizontal axis; the curve declines toward zero)
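The idealised shape can be sketched with a simple decay model. The exponential functional form and its parameters (an initial rate of 2% and a decay constant of 0.35) are purely illustrative assumptions of ours; the argument only requires that the curve fall toward zero as precedent accumulates.

```python
import math

def idealised_prohibition_rate(years_since_enactment: int,
                               initial_rate: float = 2.0,
                               decay: float = 0.35) -> float:
    """Hypothetical prohibition rate in percent, t years after new
    legislation, under the illustrative assumption of exponential decay."""
    return initial_rate * math.exp(-decay * years_since_enactment)

# Rates for the first eight years, rounded to two decimals
curve = [round(idealised_prohibition_rate(t), 2) for t in range(8)]
print(curve)
```

Any parameter choice with a positive starting level and a strictly declining path would serve equally well; the figure's point is the shape, not the numbers.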
This is, of course, an idealised slope. Changes, even marginal ones, in legislation can cause local bumps, just as the inauguration of new personnel that wants to document its tough stance on the issues can. But a curve derived from empirical cases should be at least somewhat reminiscent of the idealised curve. Judging by all the cases dealt with by the European Commission during the first twelve years of European merger policy, this is not what actually happened.
Figure 2: The Development of Prohibitions over Time (prohibitions as a percentage of notified cases, by year, 1991–2001)
The graph shows the prohibitions as they occurred over time. True, the number of prohibitions is rather low: only 18 out of 1908 notified cases were prohibited, i.e., a little less than one percent. But there has certainly not been a downward trend in prohibitions, which means that predictability has not increased over the years. A second aspect regards the time dimension. It was said that the time needed in order to get legal remedies could be crucial for the legal certainty of merger policy. European merger policy used to be hailed for the tough time limits imposed by Regulation 4064/89, which forces the Commission to distinguish between two phases of investigation and requires it to finish the first phase after one month and the second phase after another four months.1 Practitioners have often noted that de facto the process is not quite as fast, because assembling all the information that the Commission demands before it is willing to open phase I can require a number of weeks. But by and large, the decision-making process of DG Competition is a rather fast one. But what about the case in which the Commission decides to prohibit a merger? In that case, the merging firms can decide to appeal the decision by taking the case to the Court of First Instance (CFI). A decision by the CFI declaring the proposed merger compatible with EU law will, at least in most cases, only “save” the proposed merger if it does not take years before the CFI decides. Table 1 below contains a list of cases in which the merging firms appealed to the CFI and lists the number of months that passed before the CFI published its decision. The number of companies that have taken their case to the CFI is very low; the “average” is therefore a rather crude indicator of the time a firm should expect were it to take its case to Luxembourg. But the expected length of proceedings might be the single most important reason for the low number of cases actually ending up at the CFI: if some three years pass before a firm can expect to get the “right” decision, the proposed merger might not make sense any more. A long time-span until the decision can therefore mean the de facto absence of legal protection. Although this picture is already pretty gloomy, it is not even the entire picture yet: even after three years have passed and the CFI has published its decision, legal certainty is not yet achieved, because the Commission then has the possibility of appealing the decision by bringing the case to the European Court of Justice. There, the case would supposedly be pending for at least another two years.2

Table 1: Time passed between the decision of the Commission and the decision of the Court of First Instance (Source: http://europa.eu.int)

Case                               Decision by Commission                             Decision by CFI or ECJ                Time lag (months)
Mediobanca/Generali                19.12.1991                                         28.10.1993 (CFI); 11.01.1996 (ECJ)    49
Nestlé/Perrier                     22.07.1992                                         27.04.1995 (CFI)                      33
Kali&Salz/MdK/Treuhand             14.12.1993                                         31.03.1998 (ECJ)                      52
Shell/Montecatini                  08.06.1994 (first); 24.04.1996 (revised)           12.02.1994 (CFI)                      23
Procter&Gamble/VP Schickedanz      21.06.1994                                         27.11.1997 (CFI)                      41
RTL/Veronica/Endemol               20.09.1995 (first); 17.07.1996 (revised)           28.04.1999 (CFI)                      43
Gencor/Lonrho                      24.04.1996                                         25.03.1999 (CFI)                      35
Kesko/Tuko                         20.11.1996                                         15.12.1999 (CFI)                      37
Coca Cola/Amalgamated Beverages    22.01.1997                                         22.03.2000 (CFI)                      38
Airtours/First Choice              22.09.1999                                         06.06.2002 (CFI)                      33
Schneider/Legrand                  10.10.2001                                         22.10.2002 (CFI)                      12
Tetra Laval/Sidel                  30.10.2001                                         25.10.2002 (CFI)                      12
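The time lags can be reproduced, to within rounding, from the two dates in each row. The helper below and its rounding convention (an average month of 30.44 days) are our own assumptions; the book does not state how the lags were computed, so individual rows may differ by a month.

```python
from datetime import date

def months_between(start: date, end: date) -> int:
    """Elapsed time in months, rounded to the nearest whole month,
    using the average month length of 30.44 days."""
    return round((end - start).days / 30.44)

# Nestlé/Perrier: Commission decision 22.07.1992, CFI judgment 27.04.1995
nestle_lag = months_between(date(1992, 7, 22), date(1995, 4, 27))   # 33 months

# Tetra Laval/Sidel: Commission decision 30.10.2001, CFI judgment 25.10.2002
tetra_lag = months_between(date(2001, 10, 30), date(2002, 10, 25))  # 12 months
```

Both computed values match the corresponding table entries.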
In this section, two rather abstract and academic criteria have been used in order to ascertain the predictability of European merger policy: the development of prohibitions over time, which was compared with a stylised curve that would materialise were European merger policy highly predictable, and the time needed before a final decision in a merger case can be obtained. But these are criteria used by outside observers who try to make sense of European merger policy. A radically different approach towards learning something about the quality of European merger policy in general, and its predictability more specifically, is to ask those who have first-hand experience with DG Comp. This is exactly what we did.

4.1. A Survey Amongst Large European Firms

The motivation for the survey was to receive reports from companies that have been involved in mergers that had to be cleared by the European Commission. A second reason for conducting a survey was the hope of eliciting proposals for reform that had grown out of direct experience with DG Comp. A straightforward method was used in order to collect data: a questionnaire was devised which contained quantifiable as well as open questions. The quantifiable questions can, again, be broken down into two parts, namely one part dealing with the experience of firms with European merger policy in general and the other part dealing with one specific case; here, the respondents were asked to choose the most complicated case their company had been involved in since 1997. The open questions were added in order to elicit reform proposals. The survey was conducted in November 2002. It thus antedates the recent reforms of EU merger policy. A total of 42 questionnaires were sent via the European Round Table (ERT) office in Brussels to all member companies of ERT. Companies were promised that information would only be used in anonymised form.
Shipment of the questionnaires was handled by the ERT in Brussels in order to make that promise credible. 25 completed questionnaires were returned – a very good response rate, especially if one keeps in mind that not all of the 42 members might have had first-hand experience with mergers at the European level. Since 1997, these 25 companies have been involved in 153 merger cases. The total number of merger cases that the European Commission dealt with in the same period was 1,562. Our survey thus covers some 9.8% of all cases. On average, these companies notified 6.1 mergers over the course of the last five years. They are thus quite experienced in merger activities, which means that their evaluation of the predictability of European merger policy should be given some weight – and their suggestions for improvement should be taken very seriously.

Table 2: Descriptive Statistics of the Survey Results

Mergers cleared without remedies in phase I: 78.0%
Mergers cleared with remedies in phase I: 10.4%
Mergers that went to phase II: 11.8%
Mergers that were withdrawn: 3.3%
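The headline figures just reported can be checked with a few lines of arithmetic. The short Python sketch below merely replays the numbers stated in the text (153 cases, 1,562 total, 25 respondents); it is an illustrative plausibility check, not part of the original study:

```python
# Plausibility check of the survey's headline figures (numbers taken from the text).
cases_covered = 153   # merger cases involving the 25 responding firms since 1997
cases_total = 1562    # all merger cases handled by the Commission in the same period
respondents = 25

coverage = cases_covered / cases_total           # share of all EU cases covered by the survey
avg_notifications = cases_covered / respondents  # average notifications per responding firm

print(f"coverage: {coverage:.1%}")                        # 9.8%
print(f"average notifications: {avg_notifications:.1f}")  # 6.1
```

Both figures round to the values reported above, which confirms the internal consistency of the survey statistics.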
PREDICTABILITY AS A CRUCIAL CONDITION
We propose to present the results of the study by emulating the chronology of a typical merger case, namely (1) before notification, and (2) after notification.4

4.1.1. Before Notification

In the theoretical section on predictability, it was pointed out that decisions on single cases can have effects that reach far beyond the specific case: prohibitions signal a tough stance, and potentially welfare-enhancing mergers might not even be pursued for fear of being prohibited by the antitrust authorities. This effect does not show up in any statistics on merger policy, although it is potentially very important. We therefore decided to include a question that would give us at least an impression of the relevance of this effect. The firms were asked whether they had pursued any mergers between 1997 and October 2002 that were eventually not even notified to the Commission for fear of being prohibited. More than a quarter of all respondents (28%) had experienced such cases. This shows that the impact of European merger policy clearly reaches beyond the number of cases counted in Commission statistics. Art. 4 of Regulation 139/2004 provides that mergers must be notified “prior to their implementation and following the conclusion of the agreement, the announcement of the public bid or the acquisition of a controlling interest”. Yet, many companies decide to contact DG Comp before a merger is formally notified. There are a large number of possible reasons for this: they might, e.g., be interested in getting a first evaluation of the merger. In the questionnaire, the reasons for contacting DG Comp before agreeing on the broad outlines of a deal were not looked into. It was simply asked whether enterprises contact DG Comp before the deal is concluded. Almost 36% of all companies do. Some ten years ago, Neven et al. (1993) published a study on the first experiences with European merger control.
It also contained a survey among the firms that had already experienced the then new policy. We decided to ask a number of questions that had been asked some ten years ago in exactly the same way in order to produce comparable data.5 The question whether firms had contacted DG Comp before agreeing on the broad outlines of a deal is one of those. Neven et al. (1993, 140) had elicited a positive response rate of only 20% to this question. Early contacts with the Commission have thus almost doubled. This could mean that predictability has not improved. If it had, fewer pre-notification talks should be the result. Another question that was also asked ten years ago would, however, not exactly support this hypothesis: we asked whether the pre-notification talks with DG Comp had led to “significant modifications” of the proposed merger. 12% of the companies answered that this was the case. Ten years ago, 12.8% of the respondents to Neven et al. had answered that “major modifications” had taken place. The relative number of modifications thus seems to have been rather stable. Before a merger can be officially notified to the Commission, a questionnaire – the so-called Form CO – needs to be filled in. Complaints concerning the time required to fill this form in can often be heard. On average, companies need 64.4 person-days to prepare the form. Ten years ago, the average duration elicited by
Neven et al. (1993, 140) was 45 days. Considering the importance of all the mergers notified at the European level, the time required to fill in the form does not appear excessive. But because one can hear so many complaints concerning the complexity of Form CO, we decided to add an open question encouraging respondents to make proposals as to how Form CO could be improved. It is, first of all, noteworthy that the necessity of extended information was almost universally acknowledged.6 The Commission was, however, requested to display more flexibility depending on the merits of the individual case. It was further proposed that phase I proceedings should not come to a stop if only details in Form CO are missing. On a more specific level, it was proposed to shorten sections 8 (“General Conditions in Affected Markets”) and 9 (“General Market Information”). Another proposal was that, in order to reduce redundancy, section 6 (“Market Definition”) should be combined with section 8 (“General Conditions in Affected Markets”). Another way to ascertain the necessary lead time before a merger can be notified is to ask for the number of weeks that elapse between the first contact with DG Comp and notification. The average time elapsed was 6.3 weeks (6.5 weeks ten years ago). The time necessary has thus been very stable. It has, however, remained constant at a high level; on average, pre-notification talks last longer than phase I proceedings, which are constrained to a single month.

4.1.2. After notification

After a merger has been notified, an average of 8.4 meetings are held between company representatives and DG Comp officials. Compared to 1993, the number of meetings has increased considerably; Neven et al. then found an average of only 2.5 meetings. At least two interpretations offer themselves: DG Comp staff has become more accessible, and this is readily used by merging firms.
But it could also be that firms have the impression that it has become more important to plead one's case personally in order to improve the chances of a favourable decision. European merger policy is often criticised as unpredictable. This critique can also be made with regard to the way the Commission delineates the relevant markets, with regard both to product and to geographical aspects. In 1997, the Commission published a notice in which it explains how it delineates the relevant market. We wanted to know whether the publication of that notice had been helpful in the sense that it had increased firms' capacity to predict the Commission's decision with regard to the delineation of the relevant market. 56.5% of respondents answered that the notice had been helpful. This is a majority, albeit not an overwhelming one. We believe that the following conclusions can be drawn from this result: generally, predictability can be increased by the publication of “notices” in which the Commission explains its own approaches, methods, and procedures. The notices themselves should be as clear and precise as possible in order to elicit even higher approval rates. Until May 2004, the European Merger Control Regulation did not provide for efficiencies as a possibly offsetting factor. Yet, it could be that in practice they played a certain role even before the reform of European merger policy. And indeed,
62.5% of the respondents indicated that “expected synergies” play a role in their argumentation vis-à-vis the Commission. Quite a few mergers can only be achieved after the merging parties have promised a number of “remedies” to the Commission. The predictability criterion can also be applied to remedies: if the merging parties correctly anticipate the remedies to be demanded by the Commission, they can decide ex ante whether the expected gains of the merger still outweigh the costs – which here include the costs of the remedies. It would thus be helpful if predictability were also present with regard to this specific aspect of merger policy. Four out of five of the firms actively participating in the survey had been involved in mergers comprising remedies. Of these firms, only about half (55.6%) claimed that remedies were predictable. We interpret these numbers as indicating that the policy regarding remedies should be made more transparent and predictable. The process used to identify remedies should be made as general as possible. It should also be published. This should increase predictability. At the end of the questionnaire, there was an open question soliciting further suggestions relevant to European merger policy. The following proposals were made: (1) the separation of powers in European merger policy was believed to be insufficient; many respondents therefore proposed that the separation of powers be implemented with regard to European merger policy. (2) More specifically, it was proposed to introduce the institution of an “independent hearing officer”. (3) The introduction of a stop-the-clock mechanism was suggested. (4) A further speeding up of the decision-making procedure was proposed; this was particularly suggested with regard to the time span that elapses before the Court of First Instance publishes its decisions.
(5) Another proposal focusing on the same problem was the introduction of a special chamber for competition issues at the Court of First Instance. In this section, we have seen that on various accounts the predictability of European merger policy is problematic and in need of improvement. In the next section, we present some possible improvements.

5. PROPOSALS FOR IMPROVING PREDICTABILITY

This section contains a number of proposals that are developed as a consequence of the preceding analysis of the predictability of European merger policy. These proposals are very general ones, and they could entail sweeping changes. They are thus not developed with an eye to the probability of being implemented soon or in a piecemeal manner. Concrete, hands-on proposals will be developed in chapters D and F of this study, explicitly taking into account the arguments developed throughout the course of this study. The goals to be achieved by a certain policy will influence predictability in a number of ways: first, predictability is a function of the precision with which policy goals are described in the relevant legislation. It is, secondly, a function of the number and consistency of the goals described there. To give an example: if competition policy is not only to establish and (or) maintain a high level of competition, but
is also to protect competitors, small and medium-sized companies, employment rates, a balanced geographical industry structure, etc., lots of trade-offs will have to be made. A multiplicity of partially inconsistent and difficult-to-operationalise goals is a sure means towards uncertainty and unpredictability. Therefore, competition policy should be oriented towards one single goal, namely the maintenance of a high level of competition. This goal should be spelt out as clearly and precisely as possible. Given a precise and clear goal of competition policy, a competition agency needs instruments to realise this goal. The instruments must, of course, be rule-based themselves. As discussed above, they should be as universalisable as possible. Universalisable rules guarantee that like cases are treated alike. They further guarantee that anyone interested in knowing whether a certain action is legal is able to ascertain the legality of various actions. A precise goal and universalisable rules notwithstanding, a competition agency will still have some discretion in making decisions. If this discretion is used in ways that are not understood by observers of the agency, this will, of course, not lead to predictability. In order to achieve predictability, the procedures used by the competition agency should thus be as transparent as possible. The separation of powers has been a hallmark of liberal societies ever since Montesquieu. There are good reasons for a functional separation of powers, and there is convincing empirical evidence that it has beneficial effects for the affected societies. The missing separation of powers within the European Union is said to be one reason for its current legitimacy problems.7 What is true for the European Union in general is also true for its competition policy: the separation of powers needs improvement. There are a number of reform options: the creation of an independent competition agency is probably the most radical one.
Others include the more systematic division of legislative, executive and judicial functions. A number of more specific proposals have been made and this is not the right place to discuss them in any detail.8 Lastly, legal protection that is not granted within a reasonable period of time amounts to the non-existence of legal protection. It is thus important to speed up decision-making within the Court of First Instance. Whether the creation of a specialised competition chamber within the Court of First Instance would be an adequate remedy need not concern us here.
CHAPTER II

DEVELOPMENTS IN COMPETITION THEORY
1. INTRODUCTORY REMARKS

Over the past decades, there have been fads and fashions in competition theory, but there has also been substantial development. Three developments are particularly noteworthy: (1) the challenge posed by contestability theory to the structure-conduct-performance approach, (2) the application of game theory to issues of competition, and (3) the rise of the New Institutional Economics with its emphasis on the relevance of organisational structures and their consequences for competitive behaviour. In this chapter, an attempt will be made at spelling out possible implications of these three developments for competition policy. This theoretical framework will later be used to evaluate EU merger policy as it is currently practised. Before describing these three more recent developments in sections 3, 4, and 5 of this chapter, the so-called “structure-conduct-performance” paradigm will shortly be described and critically evaluated. This has been done many a time and we do not claim to add any new or original insights here. Yet, it is important to clarify some of the basic assumptions of this approach because it has been of overwhelming importance. Although it has been heavily criticised on various grounds, its core ideas still loom large in many competition authorities. After shortly presenting some of the insights of the so-called “Chicago school” in section two, we turn to the three developments just alluded to.

2. THE HARVARD APPROACH

The Harvard approach, although often vigorously attacked, is still the most influential approach guiding competition policy all over the world. Notwithstanding its sometimes-outdated appearance, some observers (e.g., Lopez 2001) claim that its adherents are not only alive but also well. The approach can be traced back to John M. Clark's (1940) programmatic paper in the American Economic Review.
It belongs to the old tradition of industrial organisation (which has since been superseded by the so-called New Industrial Organisation) in which general hypotheses were developed on the basis of single observations that were then generalised and empirically tested. This approach is also called “workable competition”, which indicates that some deviation from the model of perfect or pure competition is tolerated. Until today, the model of perfect competition has remained the textbook model that occupies the first chapters of many textbooks (see, e.g., the introductory chapter to Tirole 1988). The outcome of that model
is a state of the world that cannot be improved upon without making at least one actor worse off. In that sense, it establishes a frame of reference with which realised states of the world can be compared. The assumptions of the model are, however, wildly unrealistic and could never even be approximated in reality. As a matter of fact, it has been shown that a marginal improvement towards the ideal can make things even worse (the so-called Second-Best Theorem; Lipsey/Lancaster 1956). The Harvard approach is an attempt to come to grips with the critique of the model of perfect competition without throwing out the baby with the bath water, namely without giving up some standard of reference that could be used to evaluate realised states of the world.

2.1. Main Points

Representatives of the Harvard approach claim that there exists a causal chain from the structure of a market through the conduct of the participants in the market to the performance of the market. With regard to market structure, two factors are taken to be of overwhelming importance: (1) the number of suppliers, and (2) the concentration of their market shares. Other factors that have at times been recognised abound: the degree of product differentiation, the degree of market transparency, the kind of production technology used (are economies of scale relevant?), the relevance of barriers to entry, the market phase (introduction, expansion, saturation, decline), the degree of vertical integration, elasticities of price or income, etc. Notwithstanding the large number of factors that could influence market structure, the two factors playing the key role have always remained the number of suppliers and their market shares.
Structure → Conduct → Performance

Figure 3: The Structure-Conduct-Performance Paradigm
Market conduct is operationalised by the way market participants make use of possible strategies. Here the price-setting behaviour turned out to be of central importance. Other factors sometimes subsumed under the conduct of firms include the propensity to act competitively but also the propensity to enter into anti-competitive agreements with competitors. Market performance is measured by looking at prices, qualities, quantities, but also technological progress, and, quite importantly, profit rates. Drawing on the theoretical model of perfect competition, the “optimal” sequence of structure-conduct-performance would translate into polypolistic markets (structure) in which firms have small market shares which leads to their charging prices that are equivalent to marginal cost (conduct), which results in a profit rate that just
covers the costs of factor inputs (performance). But the representatives of the Harvard approach were unsatisfied with the concept of perfect competition. This was exactly the reason why Clark (1940) proclaimed the alternative of “workable” competition. According to representatives of workable competition, perfect competition models are fine for bringing about static efficiency, i.e., the situation in which welfare cannot be improved by any reallocation of factors. Static efficiency is threatened by market structures with a low number of competing firms that have, in turn, high market shares. But the concept of perfect competition is a static concept. There is no space for innovation and technological progress, in short: for dynamic efficiency. Innovation presupposes the capacity to invest in research and development. According to the representatives of workable competition, a certain degree of concentration is needed in order for firms to be able to finance research and development. Concerning “optimal” market structures, representatives of the Harvard approach thus find themselves in a dilemma between static and dynamic efficiency. They usually opt for a “middle of the road” approach: allow a market structure with a moderate degree of concentration but ensure by way of an active merger policy that it remains moderate, because high levels of concentration would enable firms to reap monopoly profits by setting the price above marginal cost, thus violating static efficiency. To ensure moderate levels of concentration, mergers were often prohibited. If there is indeed a clear and unequivocal causal link between structure, conduct, and performance, then, in order to ascertain the workability of a market, it is, at least in principle, sufficient to test for market structure or market conduct or market performance.
This means that should market performance be unsatisfactory, this would be sufficient for proclaiming market structure to be unsatisfactory – and possibly for demanding political intervention in order to make it “workable”. For a long time, the primary occupation of representatives of this approach consisted in assembling industry data and estimating regressions of the type Πi = α + βSi, in which Πi stands for the profitability of an industry i, which is supposed to be determined by the structure S of that industry. For a long time, empirical evidence was supposed to be in favour of this simple three-step process. Shepherd (1972), e.g., presented evidence showing that between 1960 and 1969, a significant positive correlation between the market share of a firm and its profitability existed. Many of the representatives of this approach took evidence of this kind as sufficient for a competition policy of not allowing mergers to be carried out, but at times also of busting existing companies (as in the famous AT&T case). The Harvard approach thus seemed to be the perfect justification for interventionist competition policies.
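A regression of this type is easy to replicate. The Python sketch below fits the profitability-on-structure regression by ordinary least squares on synthetic data; the sample size, the true coefficients and all simulated values are invented for illustration and have nothing to do with Shepherd's actual data:

```python
# Illustrative OLS fit of the structure-performance regression
# profit_i = alpha + beta * structure_i on synthetic (invented) data.
import random

random.seed(0)
n = 50
structure = [random.uniform(0.05, 0.60) for _ in range(n)]             # e.g. concentration measures
profit = [0.04 + 0.10 * s + random.gauss(0, 0.01) for s in structure]  # true alpha=0.04, beta=0.10

# closed-form OLS estimates for a single regressor
mean_s = sum(structure) / n
mean_p = sum(profit) / n
beta_hat = sum((s - mean_s) * (p - mean_p) for s, p in zip(structure, profit)) \
           / sum((s - mean_s) ** 2 for s in structure)
alpha_hat = mean_p - beta_hat * mean_s

print(f"alpha ~ {alpha_hat:.3f}, beta ~ {beta_hat:.3f}")  # estimates close to 0.04 and 0.10
```

A significantly positive estimate of beta is, of course, exactly the kind of correlation that, as the critique in section 2.3 stresses, need not reflect a causal effect of structure on performance.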
2.2. Policy Implications
Believing in a unidirectional causal link from structure to conduct to performance, and having a clear-cut reference of what the performance of a market should look like, is the basis for far-reaching interventions into market structures as well as firm conduct. The Harvard approach would call for interventions in case the structure of a market is not likely to deliver the desired mix of static and dynamic efficiency goals. Intervention into market structures can thus cut both ways: if the number of firms is deemed too small to reach the proclaimed performance indicators, divestiture might be the policy called for. The number of firms can, however, also appear to be too large, e.g., if heavy investments in research and development seem to be necessary in order to speed up technological progress. In that case, representatives of the Harvard approach would call for merger-enhancing policies. Such policies have even played a certain role at the level of the European Union. An alternative way of intervening in the market process is to monitor firm conduct. One could, e.g., tolerate a rather high market share but closely control conduct, e.g., by publishing maximum prices, etc. A variant of this approach is implemented in Art. 82 of the EEC Treaty, which prohibits any abuse of a dominant market position. Approaches to monitor and sanction firm conduct have often been evaluated as unsuccessful and have been used less and less over an extended time period.

2.3. Critique

Both the approach and its policy implications have been heavily criticised. Some of the more important points are simply highlighted here.

– Assuming that there is a causal chain, it would supposedly not be unidirectional: performance would have repercussions on the number of market participants, hence on market structure.
Structure ⇄ Conduct ⇄ Performance

Figure 4: Structure-Conduct-Performance Paradigm under repercussions
– Correlations are not necessarily causal chains. High profitability could, e.g., also be the consequence of lower production costs. If low production costs enable a firm to capture a large market share, one would indeed expect a high correlation between market share and profitability.

– There is only a very small number of actions that can be said to always be conducive or always be detrimental to competition: price cuts have often been said to be part of a predatory strategy, yet they can be a sign of fierce competition that forces firms to lower their prices to reflect marginal cost. Observing conduct will, in other words, seldom be sufficient to establish that competition is not workable in a particular market.

– The number of criteria for the market performance test is very high. This means that criteria need to be prioritised, attached certain weights, etc.

– Representatives of a procedural approach toward competition would claim that “competition is a discovery procedure” (Hayek 1978) and that its results are therefore systematically unpredictable. Neither the optimum number of firms in a market nor their optimal conduct can be known ex ante; both have to be discovered by the competitive process. The interventionist stance promoted by this approach would therefore be inimical to the very basis of a market economy.

– The possibility of widespread interventionism is detrimental to predictability, the criterion identified as crucial for a good competition policy in chapter I. This becomes very apparent in the following quote (Kantzenbach/Kottman/Krüger 1995, 25): “A structure oriented competition policy often reduces the level of planning security.” Kantzenbach is the most influential adherent of the Harvard approach in Germany.
Especially in its heyday, the 1960s, the paradigm was highly influential. When its followers observed some action that was not immediately explainable, they supposed the action was undertaken in pursuit of monopolisation. That firms were trying to economise – and thus to increase welfare – did often not even come to the minds of those who believed in the paradigm. In the early 1970s, the later Nobel laureate Ronald Coase (1972, 67) wrote: “If an economist finds something – a business practice of one sort or another – that he does not understand, he looks for a monopoly explanation. And as in this field we are very ignorant, the number of ununderstandable practices tends to be very large, and the reliance on a monopoly explanation frequent.” Barriers to entry were another obsession of the representatives of this approach. We will deal with them in some detail in the third section of this chapter. Before that, we turn to the so-called Chicago approach, which can only be understood as an answer to Harvard: whereas followers of the Harvard approach suspected monopolising practices just about everywhere, followers of the Chicago approach turned the whole story squarely onto its head: they explained just about every behaviour with underlying efficiencies.
3. THE CHICAGO APPROACH
What was to become the Chicago School of Antitrust started out as a critique of the concept of workable competition that was summarised in the last section.

3.1. Main Points

Instead of aiming for a variety of – partially incompatible – goals as “Harvard” had done, “Chicago” radically reduced complexity by focusing on one single goal: efficiency. Market structure did not play any role – as long as outcomes were efficient. Monopoly was, in fact, radically re-interpreted: if a firm has been able to establish a monopoly position, this was taken as an indicator that it must be the most efficient firm in the market (the so-called “survival of the fittest” or “survivor test”). Reasons for firms being able to establish monopoly positions could, e.g., lie in their achieving economies of scale or cost savings as an effect of learning by doing. According to Chicago, it would be foolish to prohibit firms from achieving efficiencies because these mean cost savings and, in the end, higher consumer surplus. Chicago economists distinguish between three kinds of competitive constraints, namely (1) natural, (2) artificial and (3) state-created ones. Natural constraints to competition are not created by men; they just exist as such (e.g., if there is just one river that can be used for shipping or just one deposit of bauxite). Even if they have an influence on competitive results, trying to fight them would be pointless because they are not wilful creations of men. The artificial creation of constraints to competition by competitors is deemed to be “foolish and self-defeating behaviour”. Since erecting such constraints is not in the rational self-interest of firms, their appearance is supposed to be highly unlikely. Hence, competition authorities should not be preoccupied with them. State-created competitive constraints are, however, a different story. These include a host of regulations.
A good example is tariff and non-tariff barriers to trade that protect domestic firms from having to compete on an equal footing with foreign firms. Such trade barriers result in a loss of consumer surplus, and thus efficiency. Some of these constraints are very dangerous: take import caps as an example. For a long time, Italian regulations prohibited the import of more than 3,500 Japanese-made cars into Italy on an annual basis. No matter how good, cheap or efficient Japanese cars were, Japanese carmakers were completely barred from access to the Italian market beyond the threshold of 3,500 cars. Such policies can obviously entail heavy costs for consumers. Ultimately, they will also hurt the producers, as these will be (partially) exempt from competition. In the long run, their competitiveness will decrease and their business prospects worsen. In 1968, Oliver E. Williamson published the so-called trade-off model in which the potential costs of a horizontal merger are weighed against its potential benefits. At the time, Williamson was often considered to belong to the Chicago School. In the meantime, he has, of course, been central in the development of Transaction Cost Economics, to which we turn later in this chapter. The trade-off model is, however, still an important and central model and thus deserves to be shortly presented here.
This model entails a worst-case scenario: suppose that a merger changes outcomes from the perfectly competitive case (hence price equals marginal cost) to the monopoly case (in which marginal revenue equals marginal cost). In that case, the welfare losses of the merger can be depicted as the triangle ADH in Figure 5. These are the costs, and the possibility of this triangle is the main reason why many mergers have not been passed. But Williamson does not stop there. He stresses that there are potential benefits that should be taken into consideration, namely benefits based on lower cost curves. There are, of course, a variety of reasons why costs could be lower subsequent to a merger, lower input prices being the most obvious one. The pre-merger cost curve is depicted by MC1, the post-merger cost curve by MC2.
[Figure: demand curve over price p and quantity q, with pre-merger price p1 and quantity q1, post-merger price p2 and quantity q2; constant marginal cost curves MC1 and MC2; a triangle marked “deadweight loss” and a rectangle marked “cost savings”; labelled points A, B, C, D, E, F, G.]

Figure 5: The Williamson Trade-off
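Under the simplest textbook assumptions (a linear demand curve and constant marginal costs), the two areas in the trade-off diagram can be computed directly. The Python sketch below uses invented parameter values purely for illustration; it is not drawn from Williamson's paper:

```python
# Williamson trade-off with linear demand P(q) = a - b*q and constant
# marginal costs; all parameter values below are invented for illustration.
a, b = 100.0, 1.0      # demand intercept and slope
mc1, mc2 = 60.0, 50.0  # pre-merger and post-merger marginal cost

q1 = (a - mc1) / b         # pre-merger: perfect competition, price p1 = MC1, so q1 = 40
q2 = (a - mc2) / (2 * b)   # post-merger: monopoly sets marginal revenue = MC2, so q2 = 25
p2 = a - b * q2            # monopoly price p2 = 75

deadweight_loss = 0.5 * (p2 - mc1) * (q1 - q2)  # triangle between old price line and demand
cost_savings = (mc1 - mc2) * q2                 # rectangle: lower cost on units still produced

print(f"deadweight loss: {deadweight_loss}")  # 112.5
print(f"cost savings:    {cost_savings}")     # 250.0
print("passes" if cost_savings > deadweight_loss else "fails", "the overall-welfare test")
```

With these invented numbers the cost-savings rectangle outweighs the deadweight-loss triangle, so the merger would pass on the overall-welfare criterion discussed in the text; with a smaller cost reduction the verdict flips.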
The gains of the merger in terms of saved resources are the difference in cost curves (i.e., MC1 – MC2) times the quantity produced; they are depicted as the rectangle labelled “cost savings”. In this case, the cost savings are expected to outweigh the deadweight loss. From an efficiency point of view, the merger should thus pass, although the merged entity might decide to increase prices. This is, however, only correct if the decisive criterion used to decide merger cases is overall welfare, which can be decomposed into consumer rent and producer rent. The pre-merger consumer rent is depicted by the triangle ABC, the producer rent is zero. Post-merger, things have changed: the consumer rent has been reduced to
CDE, while there now exists a positive producer rent depicted by the rectangle DEFG. The triangle ADH is nobody's gain; that is why it is called the deadweight loss. We thus observe a redistribution of rents from the consumers to the producers. Most members of the Chicago School now argue that producer rent is also part of overall welfare and does not constitute a problem as such. Others argue that the decisive criterion should not be overall welfare, but only consumer rent. They are thus ready to forego efficiencies in production. It is one of the important achievements of the Chicago approach to have pointed to the detrimental effects of state-mandated constraints on the functioning of competition. An important task of competition policy would thus be the undoing of state-created competitive constraints that inhibit the realisation of efficiencies.

3.2. Policy Implications

The implications of the Chicago approach for competition policy are straightforward: market structure should not be an intermediary goal of competition policy. High degrees of concentration are the result of attempting to achieve efficiency and should therefore not be suspect as such. Merger control should be handled restrictively and should set in only beyond very high market shares. Divestiture should not be pursued since it leads to a reduction of consumer surplus by making the realisation of cost savings impossible. The detrimental role that representatives of the Chicago approach attributed to state-mandated constraints on competition was just spelled out. The ensuing policy implication is easy to name: the state was called upon to reduce the amount of regulations that effectively worked as constraints on competition. These could be tariff and non-tariff barriers with regard to border-crossing trade.
But they could also be health or safety standards that often result in effectively protecting a limited number of domestic producers from more vigorous competition, thereby keeping consumer surplus at unnecessarily low levels. Representatives of Chicago did not, however, opt for a complete absence of competition policy, as is sometimes argued. They argued, e.g., that horizontal agreements such as the collective fixing of market shares or price-fixing agreements should be prohibited. Predictability, a concept of crucial importance here, is highly valued by the lawyers and economists who belong to the Chicago school of antitrust. In order to make competition policy as predictable as possible, they opted for predominantly using per se rules as opposed to the rule of reason. These two concepts have played a major role in antitrust policy. Due to their direct implications for predictability, they will be dealt with here in some detail. It should be stressed that their use is not confined to adherents of the Chicago approach. On per se rules, US Supreme Court Justice Thurgood Marshall had the following to say: “Per se rules always contain a degree of arbitrariness. They are justified on the assumption that the gains from imposition of the rule will far outweigh the losses and that significant administrative advantages will result. In other words, the potential competitive harm plus the administrative costs of determining in what particular situations the practice may be harmful must far outweigh the benefits that
may result. If the potential benefits in the aggregate are outweighed to this degree, then they are simply not worth identifying in individual cases.” (cited after Bork 1978, 18)

Per se Rule vs. Rule of Reason

According to per se rules, certain actions are prohibited as such, i.e., regardless of the consequences that they would supposedly bring about in a specific case. According to the rule of reason, a competition authority or a judge has to decide whether the behaviour in a specific case has detrimental effects on a given goal such as welfare or efficiency. The decision between these two types of rules is not an either-or decision, because per se elements can be combined with rule of reason elements: in many jurisdictions, cartels are prohibited per se, but can be allowed given that certain offsetting effects are expected to materialise. With regard to mergers, one can often observe the opposite approach: below certain specified criteria, mergers are allowed per se; beyond those criteria, the rule of reason sets in and the competition authorities have to evaluate the predicted welfare effects of a specific merger. Per se rules are certainly conducive to predictability. Competition authorities and judges will not have to make complicated welfare evaluations, which saves on decision-making costs. On the other hand, there might be cases in which the competition authorities and the judges could be almost certain that the negative effect that is generally connected with a certain behaviour, and that has led to the passing of a per se rule in the first place, will not materialise in the specific case under consideration. In some cases, per se rules thus force competition authorities and judges to ignore knowledge that they really have. Hayek (1964) has shown that this can be rational if the aggregate sum of the advantages of per se rules outweighs the aggregate costs of having to ignore better knowledge that one might dispose of in a minority of cases.
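Hayek's aggregate comparison can be illustrated with a deliberately crude back-of-the-envelope calculation; every number below is an assumption chosen only to make the structure of the argument visible, not an empirical estimate:

```python
# Compare the expected cost per regime of deciding 1000 cases under a
# per se rule versus a rule of reason. All figures are assumed.
n_cases = 1000
share_harmless = 0.05        # minority of cases where the conduct is benign
harm_blocked_benign = 50.0   # welfare forgone when a benign case is banned
admin_per_se = 1.0           # cheap: no welfare evaluation needed
admin_rule_of_reason = 30.0  # costly case-by-case assessment
error_rate_ror = 0.02        # the rule of reason also errs sometimes

# Per se: low administrative cost, but better knowledge in the benign
# minority of cases must be ignored (the Hayek/Marshall trade-off).
cost_per_se = n_cases * (admin_per_se + share_harmless * harm_blocked_benign)
cost_ror = n_cases * (admin_rule_of_reason + error_rate_ror * harm_blocked_benign)

print(cost_per_se, cost_ror)  # 3500.0 vs 31000.0
```

With these assumed numbers the per se regime dominates; raising the share of benign cases or lowering the cost of case-by-case assessment reverses the comparison, which is exactly the margin Hayek's argument turns on.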
The rule of reason, on the other hand, is based on a cost-benefit assessment of individual cases. Its application thus presupposes widely available knowledge concerning the use of quantitative techniques. Especially where competition authorities lack economics-trained staff and where judges do not receive economic training on a regular basis, use of the rule of reason seems problematic. Yet, if efficiency arguments as introduced in the last section are to play a role in merger policy, use of the rule of reason is indispensable.

3.3. Critique

The Chicago approach has been heavily criticised on various grounds. Some of the points are just mentioned here:
– The models are still built on the concept of perfect competition. More recent developments are not sufficiently taken into account;
– Some of the assumptions are not likely to lead to correct predictions (rationality, absence of barriers to entry);
– The models were wrong in some important details; Chicago, e.g., argued that monopoly positions could not be expanded upstream or downstream by way of vertical integration. In the meantime, it has been shown that this can be done (e.g., Riordan 1998).
The Chicago approach can only be understood as an answer to Harvard. What is fascinating about the two approaches is that both of them were inspired by traditional price theory and welfare economics. Comparing the two approaches shows that a very similar theoretical body can be used to develop completely diverging models and to draw radically different policy conclusions. Whereas Harvard always looked for a monopoly motivation behind some action, Chicago tried to justify almost every action by first asking whether it could not enhance efficiencies. What was obviously lacking were more fine-grained approaches with a more elaborate theoretical basis and a lesser degree of ideological prejudice.

4. CONTESTABILITY THEORY

We have just seen that – under certain conditions – the representatives of the Chicago approach would not be troubled by high market shares of a limited number of competitors. Quite to the contrary, they would argue that because these competitors are more efficient, they have been able to grow at the expense of their less efficient competitors. We now turn to the theory of contestable markets, which was developed in the early 1980s and whose representatives also claim that – under certain conditions – market structure is not a good predictor of the performance to be expected in a particular market. Formulated differently: although a market might be characterised by a narrow oligopoly or even a monopoly, this does not necessarily have to stand in the way of allocative efficiency. Contestability theory can thus also be read as a critique of the Harvard approach.

4.1. Main Points

William Baumol and his various co-authors (e.g., 1982) reach their central insight by integrating potential competition into their analysis. If some potential entrant can credibly threaten to enter a market in a “hit-and-run” manner, this will lead to the erosion of all monopoly profits of the incumbent.
“Hit-and-run” means that a new competitor can enter a market but will be able to leave it before the incumbent has a chance to react, e.g., by retaliation. Since this is a very important development in competition policy, we want to have a closer look at the underlying model. Baumol et al. (1982) illustrate the relevance of potential competition by drawing on the case of natural monopoly. Economists talk of natural monopolies if fixed costs are so important that the entire relevant demand for a certain good can be supplied cost-efficiently by a single supplier. Markets in which fixed costs play an important role are markets in which a network, e.g. a telephone or railway network, is necessary
before a single unit of the good can be supplied. Once a supplier has a network at its disposal, the average cost of every additional unit decreases over the entire relevant demand range. Transporting only one container on a newly erected railway network will be very expensive indeed; every additional container transported will reduce average costs because the fixed costs are spread over a larger number of units. Natural monopolies pose a problem for policymakers: on the one hand, monopolies are generally considered to be undesirable; on the other, a higher number of networks would lead to an increase in average costs since the fixed costs would have to be borne more than once.9 The answer to this predicament has traditionally been to introduce regulation and have some agency control the behaviour of the natural monopolist. In natural monopolies, application of the price-setting rule “price should equal marginal cost” would lead to losses for the supplier since he would not be able to recoup his fixed costs. Therefore, a host of other price-setting rules have been discussed, “price should equal average cost” being one of them. What is more interesting is that numerous agencies are run on the basis of this theory (in the U.S., e.g., the Federal Aviation Administration and the Federal Communications Commission). The achievement of Baumol et al. (1982) lies in challenging the conventional wisdom concerning the necessity to regulate natural monopolies. Suppose an incumbent has hitherto set price equal to average cost and now tries to increase his price above that level. According to Baumol et al. (1982), this would induce somebody else to enter the market at a lower price and still make profits. Just for completeness: the incumbent cannot make himself better off by setting a price below average cost because that would not allow him to recoup his fixed costs. We can thus conclude that
– there will only be one firm in that market;
– the firm does not make any profits;
– price equals average cost (see also Tirole 1988/99, 678-80).
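The zero-profit, average-cost-pricing outcome can be verified with a small numerical sketch; all parameter values are assumed for illustration:

```python
import math

# A natural monopoly with fixed network cost F, constant marginal cost c
# and linear demand Q = a - p. Average cost c + F/q declines over the
# whole relevant demand range.
F, c, a = 900.0, 10.0, 100.0

# The contestable outcome is the lowest price at which the single firm
# breaks even, i.e. p = AC(Q(p)), which gives (a - p)(p - c) = F.
# Rearranged: p^2 - (a + c)p + (a*c + F) = 0; take the lower root,
# since the entry threat pushes the price down to it.
disc = (a + c) ** 2 - 4 * (a * c + F)
p = ((a + c) - math.sqrt(disc)) / 2
q = a - p
profit = p * q - (c * q + F)

print(round(p, 2), round(q, 2))
```

With these numbers the contestable price is about 21.46, above marginal cost (10) but leaving the incumbent zero profit: the threat of entry, not a regulator, disciplines the price.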
Notice that this result is achieved without widespread regulation. It is secured simply by the threat of a potential entrant entering the market. The approach has often been criticised by pointing at the entry conditions that have to be present for contestability to achieve its beneficial results. Baumol and Willig (1986) have stressed the following: “Contestability does not imply the ability to start and stop production without cost – only the ability to sell without vulnerability to incumbent’s responses for a time long enough to render all costs economically reversible.” What matters is thus not the absence of sunk costs as such, but whether the incumbent is able to react to entry before the potential entrant has recouped the costs that had to be sunk. An incumbent might be able to change prices rather quickly. In certain instances this is, however, not sufficient to make customers lost to the entrant come back immediately, e.g., if the entrant has concluded contracts long enough to enable him to cover all the sunk costs incurred in order to enter the market. A case that is often named in order to prove the real-world relevance of contestability theory is the airline industry. If entrants can lease aircraft on rather short terms, their sunk costs might be sufficiently low to make entry worthwhile.
The conditions under which hit-and-run entry can pay off are the following: (1) there is a pool of potential entrants; (2) entrants do not have a cost disadvantage; (3) sunk costs are low; and (4) contracts are long or incumbents are slow to react.

4.2. Policy Implications

The insights gained by adherents of contestability theory indicated that there was great potential to deregulate telecommunications and the airline industry, but also railways and public utilities. With regard to competition policy, the insights mean that even monopolies with high fixed costs could be accepted because, from a welfare-economic point of view, the equilibrium described above is the best achievable one given production technology (price equal to marginal cost could only be achieved if the state were to pay subsidies to the natural monopolies). A more general conclusion from contestability theory is that market structure can be entirely irrelevant for the outcomes to be expected in a particular market. Antitrust agencies should thus inquire into the possibilities of potential entrants to constrain the price-setting behaviour of incumbents. Narrow oligopolies would not be suspect as such, but only if high barriers to entry prevent potential competitors from disciplining the behaviour of the incumbents.

4.3. Critique

This theory has been met with scepticism on various grounds. Since we have already dealt with some of it in the discussion above, the major points are just sketched here without lengthy explanations.
– There are few markets in which price inflexibility would be high enough to make hit-and-run entries worthwhile.
– There are few markets in which sunk costs, i.e., irreversible investments, are sufficiently low to make hit-and-run entry worthwhile.
The models of competition theory never even faintly resemble messy economic reality. But we believe that an important message concerning competition policy follows from the models of contestability theory, namely that market structure can be negligible as long as entry barriers into a market do not constitute a serious threshold. This will be taken up again in greater detail in chapter IV.

5. THE CONTRIBUTION OF GAME THEORY: THE NEW INDUSTRIAL ORGANISATION

The “old” approach of Industrial Organisation, of which the Harvard approach described above is one articulation, was primarily interested in empirical research. Its representatives were interested in describing firm behaviour by estimating equations that were based on micro data as described above. More theoretically inclined
economists tended to describe the “old” industrial organisation as basically atheoretical in nature: science was here misunderstood as consisting of measurement; sound theoretical foundations explaining why certain equations were estimated in the first place were often lacking. Much of this has changed with the advent of game theory in industrial organisation. This is true to such a degree that one can talk of a “new industrial organisation” that is much more theoretically inclined than its predecessor.

5.1. Game Components

Before discussing some advantages – and correspondingly some disadvantages – that the widespread use of game theory in industrial organisation entails, a very short description of the basic components of games might be in order. Game theory helps to analyse situations in which strategic uncertainty is present. Strategic uncertainty is present whenever the outcome of an action depends not only on my own action but also on the action of at least one other actor. Strategic uncertainty is distinguished from parametric uncertainty, in which the outcome depends on some move of nature, e.g., whether it rains or snows. A game is regularly made up of six components: (1) The players. A distinction is often made between two-actor and multi-actor games. (2) The rules. They describe the options of the various players. An important distinction with regard to competition issues is whether players are assumed to move simultaneously or sequentially. It depends on the structure of the game whether it is an advantage or a disadvantage to be the first mover. (3) The strategies. A strategy is a complete description of all possible options that could be open to the player during the entire course of a game. (4) The information set. Assuming complete information means that the players fully know the rules of the game, the strategies available to all actors, and also the payoffs that result from various strategy combinations.
Perfect information, in turn, is present if an actor knows all the previous moves of the players he interacts with. (5) The payoff function. It contains the utility values that all players attach to all possible outcomes of a game. (6) The outcome. Here, the concept of (Nash) equilibrium is of special importance. A Nash equilibrium is a situation in which, given that all other players have chosen their moves and will stick to them, no player has an incentive to deviate unilaterally, because there is no possibility of making himself better off by such a move.

5.2. Advantages of Using Game Theory in Competition Theory

Game theory assumes players to be individual utility maximisers who act rationally in their pursuit of maximising individual utility. The Prisoners' Dilemma famously shows that individual rationality does not automatically translate into collective rationality. What is best for oneself is not necessarily best for the group. Formulated differently: there are situations in which Adam Smith's invisible hand simply does not work. Cartel agreements are one example: although all participants in a cartel could make themselves better off by fulfilling the terms of the agreement, individual rationality will often lead some cartel members to renege on the agreement and thus cause the entire cartel to collapse. Empirically, the overwhelming majority of all markets have oligopolistic structures. It is well known, and economists have long explicitly recognised, that in oligopolies strategic interactions among the members play an important role (“oligopolistic interdependence”). Quantities sold, prices and profits realised depend not only on my actions but also on what my competitors do. Strategic uncertainty is thus present, and game theory is an excellent tool for analysing interaction situations involving strategic uncertainty. Additionally, game theory carries with it the potential to bring to an end the perennial conflict between outcome-oriented and process-oriented approaches to competition. The Harvard approach would be the paradigmatic example of an outcome-oriented competition approach: some performance characteristics are declared normatively desirable; if these characteristics are not fulfilled empirically, some interventionist act is called for. Representatives of process-oriented approaches, in turn, believe that the outcomes of competitive processes are systematically unpredictable. They therefore refrain from stating criteria that competition should bring about and rather focus on the rules according to which the competitive process should be organised. Being able to make normative statements about how the process should be organised (what antitrust rules would make the process welfare-enhancing) presupposes knowledge concerning the working of the process. Game theory has the potential to help us understand some processes better.
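The cartel instability just described can be made concrete with a small brute-force search for pure-strategy Nash equilibria in a two-player game; the payoff numbers are invented for illustration:

```python
# Each firm either "abides" by a cartel agreement or "cheats" on it.
# Payoffs are assumed, Prisoners'-Dilemma-flavoured numbers:
# (row action, column action) -> (row payoff, column payoff).
payoffs = {
    ("abide", "abide"): (10, 10),
    ("abide", "cheat"): (2, 12),
    ("cheat", "abide"): (12, 2),
    ("cheat", "cheat"): (4, 4),
}
actions = ["abide", "cheat"]

def nash_equilibria(payoffs, actions):
    """Return all action profiles from which no unilateral deviation pays."""
    eq = []
    for r in actions:
        for c in actions:
            row_best = all(payoffs[(r, c)][0] >= payoffs[(r2, c)][0] for r2 in actions)
            col_best = all(payoffs[(r, c)][1] >= payoffs[(r, c2)][1] for c2 in actions)
            if row_best and col_best:
                eq.append((r, c))
    return eq

print(nash_equilibria(payoffs, actions))  # [('cheat', 'cheat')]
```

Although joint abiding would leave both firms better off (10, 10), the only Nash equilibrium is mutual cheating: individual rationality undermines the collectively rational cartel outcome.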
At the same time, it also carries the potential to better understand the interrelationships between process and outcome. If these processes are better understood, this might eventually enhance our capacity to pass more adequate competition rules. Game theory might also help to question the outcome-oriented view of competition policy. An eminent scholar of the new industrial organisation, Louis Phlips (1995, 12), observes: “Pervasive to the entire argument is the idea that antitrust authorities are not social planners. A social planner wants price equal to marginal cost, plus optimal taxes and subsidies. Antitrust authorities want the best possible market structure given technology and tastes, and, given this market structure, as much competition as is compatible with it and with entrepreneurial freedom. But that is precisely, it seems to me, what is described by a perfect competitive Nash equilibrium.” Phlips here seems to argue that a decision needs to be made between the concept of antitrust authorities as social planners and a concept that sees their function in strengthening and maintaining as much competition as possible under the concrete circumstances. He seems to argue against the social-planning concept, which is built on the model of perfect competition and which plays such a dominant role in the structure-conduct-performance paradigm. Instead, he is an advocate of the Nash equilibrium, which he interprets as a description of how much competition is possible given the relevant circumstances.
This is an interesting position because it implies that an either-or decision needs to be made. Many adherents of the new industrial organisation, however, presumably do not share this position. Instead, the outcomes postulated by welfare economics would still be hailed as the theoretical ideal. Game theory can be interpreted as a theory informing actors what would be in their (utility-maximising) interest given that they are rational. Assuming that they are rational, it can be used to predict what actors will do under various circumstances. It can thus also be interpreted as a positive theory. The either-or view advocated by Phlips is therefore not convincing: one can still believe in the fundamental theorems of welfare economics and simultaneously analyse what the results of certain interactions are given specific circumstances. If the predictions deviate too much from the ideal striven for, then many policy-oriented game theorists would be ready to propose changing the circumstances. This could, e.g., mean changing competition rules, increasing sanctions, etc. All these changes would be aimed at bringing reality closer to a theoretical ideal. Game theory does thus not fundamentally alter the policy stance of industrial organisation. Proponents of a game-theory-based industrial organisation can still be advocates of far-reaching interventions. In the literature, a number of additional advantages of the use of game theory in industrial organisation are named:
– the introduction of sequential decision-making processes (Güth 1992);
– the explicit recognition of incomplete information (Güth 1992).
These advantages should, however, not lead one to conclude that the heavy use of game theory in industrial organisation is warmly welcomed everywhere.

5.3. Critique Concerning the Use of Game Theory in Competition Theory

There has been a good deal of criticism of this tool. The old industrial organisation was a discipline with a primary interest in empirical results that lacked a sound theoretical basis. The new industrial organisation is a discipline with elaborate and elegant models whose application to real-world problems is, however, often very problematic or even outright impossible. Louis Phlips (1995, 11), an advocate of the use of game theory in competition policy, is very frank in admitting it: “… I know that much work remains to be done on practical questions, such as how a given industry can be identified as being in a Nash equilibrium, how it gets into such an equilibrium, how it gets out of it, and how it moves from one such equilibrium to another one.” Game-theoretic models will only help us come up with good predictions if their central assumptions are not too far off the mark. The rationality assumptions regularly used in game theory have been met with scepticism. Werner Güth (1992, 272), e.g., believes that the rationality hypothesis is the central weakness of the use of game theory in industrial organisation. If rational behaviour as assumed by the theory cannot generally be taken for granted, then game-theoretic predictions will be incorrect even if the model itself is adequately specified (1992, 272).
The summer of 2000 offers a good real-life test of the applicability of game-theoretic models: in Germany, third-generation mobile phone (UMTS) licences were auctioned off in a highly regulated process. Some game theorists analysed the rules, made far-reaching predictions concerning the outcome of the auction, and were proven wrong (see, e.g., Moldovanu and Jehiel 2001). One could reply that the participants in the auction were no experts in game theory and the result therefore diverged from the one expected. Yet all companies participating in the auction relied heavily on experts – even including Nobel laureates. This shows that the predictions derived from game-theoretic models do not seem to be very reliable. As soon as games are played repeatedly, a very large number of outcomes becomes possible (this insight is called the folk theorem by game theorists because it was common knowledge long before it was formally proven). For the predictive power of game theory, this is a serious problem: one central criterion for judging the quality of a theory is the number of outcomes that it predicts will not occur. If a large number of equilibria are possible in repeated games, this is thus a serious problem for the predictive quality of game theory. Attempts to deal with this problem, such as the equilibrium selection theories advocated by Selten (1975), have been only moderately successful.10 In any theory, the outcomes are driven by the assumptions fed into it. This is, of course, also true for game theory. But with regard to game-theoretic models, the sensitivity of the outcomes to minor modifications in the assumptions seems to be very far-reaching. Formulated the other way around: game-theoretic models are not robust.
Some, but not all, game-theoretic models seem to assume a curious asymmetry concerning the information at the disposal of the various actors: actors might have incomplete, imperfect or asymmetric information, but it is sometimes assumed that the (scientific) observer is not constrained by such problems. Now, if a player is able to fool those he is interacting with, it is hard to see why he should not be able to fool the scientific observers watching him. Game theory thus has advantages as well as disadvantages. One problem with game theory that has not been mentioned so far is that the firm is assumed as “given”. Nobody asks for the rationale of its existence because it simply exists. We now turn to a theory in which firms are no longer assumed to be exogenously given but are endogenous to the competitive process: the new institutional economics.

6. THE CONTRIBUTION OF THE NEW INSTITUTIONAL ECONOMICS: TRANSACTION COST ECONOMICS

The New Institutional Economics is a success story. At least five Nobel laureates can be counted as belonging to it (Kenneth Arrow, Ronald Coase, Friedrich Hayek, Douglass North, and Herbert Simon). Its competition theory branch, transaction cost economics, has made a considerable impact on U.S. antitrust policy. It has, e.g., led to a fundamental modification of the evaluation of vertical as well as geographical restraints, which were shown to be welfare enhancing under specific circumstances
(Williamson 1985, chapter 14 and passim). In Europe, however, its effects on competition policy have been rather marginal. We therefore decided to present this approach in a little more detail than the other approaches dealt with in this chapter.

6.1. Transactions and Transaction Costs

The representatives of Transaction Cost Economics believe that transactions are fundamental to the economic process. They are interested in the analysis of the conditions under which welfare-enhancing transactions take place. As already spelt out in the chapter on predictability, the costs that have to be incurred to execute a transaction are a crucial factor for the number of transactions to be expected. Transaction costs are thus a basic concept used by representatives of the New Institutional Economics. Transaction costs are
– the costs of searching for exchange partners with whom to transact and of getting information on the qualities of the goods that they offer as well as on their reliability;
– the costs of reaching agreement, i.e., bargaining and decision-making costs;
– the costs of monitoring whether the exchange partner has delivered as promised;
– the costs that have to be incurred to get the terms of the original contract implemented, e.g., fees for lawyers and court costs.
In economics, transaction costs were long neglected; it was implicitly assumed that the costs of transacting were zero. This amounts to assuming that all actors are fully rational and have at their disposal complete knowledge concerning every conceivable state of the world. This is a highly unrealistic assumption, and the representatives of transaction cost economics can claim credit for having outlined the consequences of assuming transaction costs to be positive. Another traditional assumption in economics was to model the firm as a production function, i.e., as a technological relationship between inputs and outputs. In this approach, the firm really was a black box, because the process by which inputs were transformed into outputs was completely ignored. But different organisational structures have different consequences for the incentives of those working inside the firm and also for those transacting with the firm. It is, again, the merit of the representatives of transaction cost economics to have pointed to the crucial importance of organisational structures. Conceptualising organisational structures as devices to economise on transaction costs can lead to a fresh look at some business practices that had hitherto been judged as monopolising behaviour and that can now be interpreted differently. This should, quite obviously, have far-reaching consequences for competition policy.
6.2. Assumptions of Transaction Cost Economics

Transaction cost economists start from assumptions that differ from those of some of the more established approaches. True, taken separately, most of these assumptions have been around for a long time. It is, however, the merit of transaction cost economists to have synthesised them into a coherent theory. Three assumptions are of particular importance: (1) bounded rationality; (2) opportunistic behaviour; and (3) asset specificity. We briefly deal with each assumption in turn.

(1) Bounded rationality: this assumption means that actors are no longer assumed to be completely rational, but only limitedly so. Starting from boundedly rational individuals makes the assumption of actors who try to maximise utility in every instance a shaky one. Nobel laureate Herbert Simon (1955) therefore proposed to assume that actors behave in a “satisficing” manner. Actors form expectations concerning the level of utility they hope to secure. As long as they actually secure that level, they do not have any incentive to change their behaviour. Only if the utility level aspired to is no longer reached do they start to search for modified behaviour with the aim of reaching their old level of utility again. One consequence of bounded rationality is that contracts will not cover every possible contingency that could arise. They will, in other words, remain incomplete. This means that situations can arise that are not fully anticipated in contracts, and the contracts thus do not specify the consequences of these situations completely, either. In such situations, general structures that can be used to settle conflicts are needed.

(2) Opportunistic behaviour: this assumption means that actors who can make themselves better off to the detriment of others should generally be expected to do so.
If no institutional safeguards are available that make opportunistic behaviour unattractive, many potentially welfare-enhancing transactions will not take place. (3) Asset specificity: this assumption means that some assets can only be used for very specific purposes. Opportunistic behaviour in combination with asset specificity can, e.g., become relevant if a good with specific characteristics needs to be produced before it can be sold. Ex ante, the buyer has every incentive to point to his ability and willingness to pay. Once the good is produced – and the second-best way to use this specific product is worth a lot less than the first-best – the buyer can be expected to ask for a reduction in price. Economic institutions now serve the purpose of reducing transaction costs. Depending on the relevance of the assumptions just spelt out and on a number of factors to be spelt out in a minute, different "governance structures" (Williamson) are optimal for coping with the specific circumstances of the situation. Anything from a classic spot market transaction to a hierarchical firm can be a governance structure. It is important to note that governance structures can be conceptualised as a continuum with the two forms just mentioned as its most extreme points. In between, a
large number of hybrid forms such as long-term contracts, franchising agreements, etc., are conceivable. Before the introduction of transaction costs into economics, the existence of firms could not be convincingly explained. If one assumes market transactions to be executable without any costs, it is unclear why all transactions should not be executed via the market. Hierarchies – firms – however, are a way to coordinate transactions not through voluntary consent but through command. Setting up and running organisational structures is certainly connected with positive costs, and in the absence of transaction costs it is unclear why these costs should be incurred. As soon as transaction costs are introduced, this whole picture changes dramatically. A shorthand for defining transaction costs is to call them the "costs of using markets" (Coase 1937). As soon as transaction costs and organisation costs are taken into consideration, predictions concerning the (optimal) size of firms can be generated: a firm will grow until the marginal revenue of integrating yet another activity is equal to the marginal costs that have to be incurred in order to integrate that activity. Formulated differently: the expansion of a firm will stop as soon as the transaction cost savings from integration are less than the additional organisation costs to be incurred. The central hypothesis of transaction cost economics is that the specific characteristics of the relevant transactions determine the optimal governance structure. Williamson analyses the effects of three characteristics, namely (1) asset specificity, (2) the degree of uncertainty, and (3) the frequency with which the relevant transactions are expected to take place. These three can be considered as independent variables; the optimal governance structure is the dependent variable to be explained by the independent ones.
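The marginal condition on optimal firm size can be illustrated with a deliberately stylised sketch (all numbers hypothetical, the function name our own): a firm keeps integrating activities as long as the transaction cost savings from integrating the next activity exceed the additional organisation costs of doing so.

```python
# Stylised illustration of the optimal-firm-size condition (hypothetical numbers):
# integrate the next activity as long as the marginal transaction cost savings
# exceed the marginal organisation costs of integrating it.

def optimal_integration(tc_savings, org_costs):
    """Return the number of activities to integrate.

    tc_savings[i]: transaction cost savings from integrating activity i
    org_costs[i]:  additional organisation costs of integrating activity i
    Activities are assumed ordered by declining net benefit.
    """
    n = 0
    for save, cost in zip(tc_savings, org_costs):
        if save <= cost:  # marginal savings no longer cover marginal costs
            break
        n += 1
    return n

# Diminishing savings, rising bureaucracy costs (illustrative only):
savings = [10, 8, 6, 4, 2]
costs = [3, 4, 5, 6, 7]
print(optimal_integration(savings, costs))  # -> 3: expansion stops at activity 4
```

Falling organisation costs would, in this toy setting, let the loop run further (larger firms), while falling transaction costs would stop it earlier, mirroring the comparative statics discussed below.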
Simply put: we would expect governance structures to be more integrated, the more specific the assets used in a business relationship, the more important the role of uncertainty, and the more frequently transactions are expected to occur. This is a static description of the optimal size of the firm. Changes in transaction as well as in organisation costs can be one factor leading to changes in optimal firm size. Reductions in organisation costs would, ceteris paribus, increase optimal firm size; reductions in transaction costs would, again ceteris paribus, decrease it. In the early years of transaction cost economics, there was indeed a simple dichotomy between "Markets and Hierarchies" (Williamson 1975). In the meantime, representatives of this approach tend to think of these two forms of organisation as the extreme points of a continuum, which allows for a multitude of so-called "hybrid" contractual agreements. These help explain the rationale of franchising, joint ventures, long-term contracts, etc., which were traditionally met with much scepticism by competition authorities. With regard to the new industrial organisation, we observed that its representatives have produced exciting theories but that the empirical tests were somewhat lagging behind. This judgment cannot be made with regard to transaction cost economics. Although measuring concepts such as asset specificity or uncertainty with any degree of reliability is no mean feat, it has been done successfully. Most empirical studies measuring asset specificity have relied upon Williamson's (1985, 95f.) proposal to distinguish four kinds of it, namely (1) site specificity (costs of
geographical relocation are great), (2) physical asset specificity (relationship-specific equipment), (3) human asset specificity (learning-by-doing, especially in teams comprising various stages of the production process), and (4) dedicated assets (investments that are incurred due to one specific transaction with one specific customer). When estimating the effects of asset specificity on governance structures, one thus needs ways to measure one (or more) of these four kinds. Physical proximity of contracting firms has been used as a proxy for site specificity (e.g., by Joskow 1985, 1987, 1990 and Spiller 1985) and R&D expenditure as a proxy for physical asset specificity. With regard to both human asset specificity and dedicated assets, survey data have been used. Instead of describing the empirical evidence in any detail here, we refer the reader to the survey of empirical studies by Shelanski and Klein (1999) and quote an early paper by Joskow (1991, 81), who observes that the empirical literature testing hypotheses based on transaction cost economics "is in much better shape than much of the empirical work in industrial organisation generally." Nevertheless, one empirical study dealing with alternative explanations for vertical mergers is too much to the point not to be cited. Spiller (1985) compared the predictive qualities of transaction cost economics with those of the market power paradigm of the Harvard approach. The latter predicts that the benefits of a merger increase with the degree of (supplier) market concentration, while transaction cost economics predicts that they increase with the degree of asset specificity. Gains from mergers are operationalised as the unexpected gains in the firms' stock market prices at the announcement of the merger. Spiller finds that gains from mergers are smaller the greater the distance between the merging firms, i.e., the lower the site specificity, whereas industry concentration has no significant effect.
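The design of such a horse race can be sketched with synthetic data (everything below is illustrative and assumed, not Spiller's actual data or code): regress announcement gains on the distance between the merging firms (an inverse proxy for site specificity) and on supplier market concentration, and inspect the two coefficients.

```python
# Toy version of a Spiller-style test (synthetic data, illustrative only):
# announcement gains regressed on distance and supplier concentration.
import numpy as np

rng = np.random.default_rng(0)
n = 200
distance = rng.uniform(0, 1000, n)        # km between merging firms
concentration = rng.uniform(0.1, 0.9, n)  # supplier market concentration
# Data generated so that only distance matters, mimicking Spiller's finding:
gains = 5.0 - 0.004 * distance + rng.normal(0, 0.5, n)

X = np.column_stack([np.ones(n), distance, concentration])
beta, *_ = np.linalg.lstsq(X, gains, rcond=None)
print(beta)  # distance coefficient negative, concentration coefficient near zero
```

Under the transaction cost hypothesis the distance coefficient should be negative and significant; under the market power hypothesis the concentration coefficient should be positive and significant.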
This can be interpreted as evidence that the power of transaction cost economics in predicting mergers is higher than that of the more traditional structure-conduct-performance paradigm.
6.3. Policy Implications
In academia, the New Institutional Economics is a highly successful research programme. This can also be seen by looking at citation records. But in European competition circles, transaction cost economics, an important part of the New Institutional Economics, has probably not received the attention it deserves. Policy implications resulting from the programme have not entered the pages of many textbooks. We therefore propose to describe here the methods used by representatives of the programme to arrive at policy implications. Many approaches in competition theory have traditionally drawn on some ideal state of the world, perfect competition being the most famous such state. Real-world results were then compared with theoretically derived states of the world. Needless to say, reality often appeared utterly bad compared to the theoretical ideal. For many economists, the next step was then a small one: demand that the state intervene to make the actors behave in a way that would at least approximate the theoretically derived ideals. Basically, this notion should have been discredited ever
since the concept of the Second Best was published (by Lipsey and Lancaster in 1956). It shows that attempts to emulate the prerequisites of the ideal world can lead to outcomes that are worse still. To take an example from competition theory: an oligopoly is supposed to be a non-transparent market structure, whereas the theoretical ideal assumes a transparent one. At times, it has therefore been demanded that official price offices be founded that collect and publish the prices of all oligopolists in the market. This could, however, lead to worse outcomes, since parallel behaviour by oligopolists would be facilitated. Transaction cost economists only compare realised states of the world with other realised states of the world. At best, they also take realisable states of the world into consideration. This means that no abstract ideal is painted any more; instead, one asks for marginal improvements that take the current situation one finds oneself in as the starting point. This approach is today called "comparative institutional analysis", an idea first coined by Ronald Coase (1964). When representatives of the welfare-economic approach demand state interventions to correct market failure, they often commit a logical mistake: after having identified some "market failure" (as compared to a theoretically derived ideal), they demand state interventions and assume that the state functions perfectly. This is, of course, a dishonest procedure: if market failure is taken into account, then government or constitutional failure should also be taken into account. The state and its representatives do not function without cost. Williamson (1996, 195) tries to take this into account and proposes the concept of "remediableness". If one proposes a new policy, one had better take the costs of getting from the current status quo to the proposed policy explicitly into account.
Getting there might be costly (necessary investments but also political opposition are just two possible cost components). The proposed policy only constitutes an improvement if the returns from the new policy remain higher after the costs of getting there have been deducted from the expected benefits. This leads Williamson to redefine the notion of efficiency (1996, 195): "An outcome for which no feasible superior alternative can be described and implemented with net gains is presumed to be efficient." Taking both the relevance of transaction costs and the modified definition of efficiency into account, a number of policy implications can be derived: There are conditions under which vertical integration can enhance efficiency. This will be the case when transaction cost savings outweigh additional organisation costs. Under such circumstances, the prohibition of mergers would be detrimental to overall efficiency; they should thus be allowed. There are conditions under which other forms of governance such as long-term contracts or exclusive dealing contracts can enhance efficiency. If a certain service quality can only be upheld given some exclusive contracts, this might be a case in point. A similar point can be made with regard to geographical restraints. The policy implications are obvious: investigate whether the conditions are fulfilled. If so, do not prohibit the specific restraints, because that would decrease overall efficiency. Furthermore, conglomerate concentration may be inexplicable from the point of view of technology, but may very well be explicable by looking at the firm as a governance structure. If conglomerate concentration can be reconceptualised as a result
of economising on transaction costs, it should no longer be punished, because that would decrease efficiency. These policy implications were derived supposing that governance structures are the result of firms' attempts to economise on transaction costs. Above, some emphasis was put on the method used by representatives of transaction cost economics, namely comparative institutional analysis. Taking this method seriously can have far-reaching policy implications too. According to it, the existence of barriers to entry as such is not sufficient for demanding intervention by competition authorities as long as it cannot be proven that there is a better structure that can be implemented at reasonable cost. Williamson (1996, 282) writes: "... while the mere existence of entry barriers was previously thought both objectionable and unlawful, this noncomparative approach has been supplanted by one in which (as an enforcement matter) the relevant test is not whether entry impediments exist but whether a remedy can be effected with net social gains. As a result, arguments regarding the mere existence of entry barriers no longer carry the day."
7. IN LIEU OF A SUMMARY: CONSENSUS AND DISSENSUS BETWEEN THE VARIOUS APPROACHES
Looking at the relationship between the various theoretical developments, there are complementarities as well as incompatibilities. Although modern Industrial Economics has come a long way, it still seems firmly rooted in the structure-conduct-performance paradigm. Not only have the main questions remained the same; the basic conjectures have also largely remained unchanged. What has changed is the toolkit: whereas the Harvard paradigm used to be primarily inductive, modern Industrial Economics has turned deductive, being based on game theory. Moreover, modern Industrial Economics no longer focuses solely on structural factors, but tries to incorporate the behavioural incentives of the relevant actors.
Transaction Cost Economics also takes the behavioural incentives of the actors explicitly into account. Many of its representatives also draw heavily on game theory, and there is thus substantial overlap with modern Industrial Economics. This is, e.g., documented in the textbook by Tirole (1988), who is one of the leading representatives of the New Industrial Organisation; for a number of chapters, he draws heavily on Joskow's lectures delivered at MIT. Joskow is, of course, one of the leading representatives of Transaction Cost Economics. Yet there are a number of incompatibilities between NIO and TCE. The most important one seems to be the underlying standard of reference: the NIO remains within traditional neoclassical thinking: define an abstract welfare standard, compare reality with it, and if reality diverges substantially, devise some policy in order to bring reality closer to the theoretical standard. Transaction Cost Economics believes that such an approach is of little help. Based on the notion of Comparative Institutional Analysis, it redefines efficiency in a way that takes the specific constraints explicitly into account. Traditional welfare economics has identified a host of so-called "market failures". TCE explicitly recognises that it is not only the market that can fail but also government and bureaucracies. Taking these failures into account, one can often not improve the current situation. If that is the case, it is called "efficient", even though it does not fulfil the tight criteria named by more traditional approaches. Though representatives of both approaches talk of "efficiency", they mean completely different things. Representatives of TCE start from the assumption that the borders of a firm are the result of transaction cost minimising strategies. Contracts and agreements that have generally been subsumed under behaviour "in restraint of competition" need to be re-evaluated: these can be horizontal, vertical, and conglomerate ones. It was representatives of TCE who were able to show that such agreements are often entered into with the goal of saving on transaction costs and that they are thus not necessarily restraining competition. Market structure in the traditional sense thus no longer plays a decisive role in TCE. In recent decades, new forms of cooperation between firms have emerged. Structural approaches towards competition policy still seem to be the dominant ones. One possible reason is that it is still very difficult to measure transaction costs as well as other central notions of TCE such as asset specificity, uncertainty, and frequency. In order to gain further ground, representatives of TCE should thus think hard about hands-on approaches for dealing with these issues. After having delineated the relevant market, structure-oriented economists need only do some simple number games to come up with concentration ratios and the like. So far, only three approaches – namely the traditional structure-conduct-performance paradigm, the New Industrial Organisation, and Transaction Cost Economics – have been mentioned. As fully-fledged approaches, they do indeed seem to dominate discussions on competition policy. But what about the other two approaches and their relationship to these three more important ones? "Chicago" developed not only as a critique of Harvard but also of US-style antitrust policy.
Its representatives had the impression that there were many inconsistencies in antitrust policy as practiced in the US during the 1960s and 70s. This is the reason why Bork (1978) gave his book the subtitle "A policy at war with itself". Many of the shortcomings pointed at by representatives of "Chicago" have been corrected in the meantime: state-mandated barriers were not only recognised as a serious impediment to competition but were also dismantled to a large degree during the privatisation and deregulation policies observed in many countries in the 1980s and 90s (see also Chapter III for more on this). The representatives of contestability theory did not carry out an attack against Harvard as sweeping as Chicago's. Yet, on theoretical grounds, they were often taken more seriously than the Chicago boys, as they argued from within the same paradigm and came to the conclusion that under certain, carefully specified conditions, structure did not matter at all for the results ("performance") to be expected in a market. Although contestability theory has been criticised because these conditions seem very seldom – if ever – to apply in reality, it has also had an important effect on competition theory: according to it, the effectiveness of potential competition crucially depends on the significance of barriers to entry. Part of the message is almost identical to that of Chicago, although the representatives of contestability come from a different theoretical background: (competition) policy should focus on reducing
state-mandated barriers to entry, as this will increase the likelihood of beneficial results of the competitive process. In order to make the various points of consensus and dissensus among the various approaches even more concrete, we will discuss the ways in which they deal with one issue that all approaches pretending to be applicable to policy issues need to deal with somehow: the recognition that mistakes can occur and how one deals with that possibility. Two types of mistakes can be distinguished:
– Type I errors: efficiency-enhancing and thus welfare-increasing mergers are wrongly prohibited.
– Type II errors: mergers that are not efficiency-enhancing and thus not welfare-increasing are wrongly allowed.
This classification of possible errors is based on welfare economic grounds. From a welfare economic point of view, the gains of any merger can be expressed in terms of increased productive efficiency. But mergers can also cause allocative inefficiencies if they enable a firm to gain market power.11 The competition authority thus needs to make a trade-off between gains in productive efficiency and losses in allocative efficiency. It can commit two kinds of mistakes: it can either exaggerate the expected allocative inefficiencies (and turn notified mergers down although they should be passed) or it can overestimate the gains in productive efficiency (and clear a merger although it should be prohibited). These two types of errors thus reflect the welfare economic approach towards mergers. For competition policy this does not mean, however, that only those mergers should be allowed that explicitly generate efficiencies. In competition policy, all mergers should be passed as long as they do not overly restrain competition. Problems only arise if a merger threatens to overly restrain competition. Only in that case should efficiency considerations play an explicit role. From a welfare economic point of view, one would then ask whether the allocative inefficiency can be expected to be overcompensated by gains in productive efficiency. Any competition authority faces the dilemma of having to trade off the two types of errors against each other. If the authority decides to take a tougher stance on mergers, thus letting fewer mergers pass, it reduces the probability of committing type II errors but simultaneously increases the probability of committing type I errors. The inverse relationship also holds: if a competition authority decides to take a more relaxed stance on mergers, thus letting more mergers pass, it reduces the likelihood of committing type I errors but simultaneously increases the probability of committing type II errors. The choice is thus a genuine dilemma.
Table 3: The Trade-off Between Type I and Type II Errors
Decision     Merger Efficiency-Enhancing    Merger Efficiency-Reducing
Prevented    Type I error                   Correct decision
Allowed      Correct decision               Type II error
It is, of course, tempting to think of the "optimal" decision concerning the trade-off as the one that minimises the overall expected costs. The costs caused by type I errors consist of the unrealised efficiency gains that would have resulted had the mergers that were in fact forbidden been implemented. But these are not the entire costs: every decision by a competition authority contains signals concerning likely future decisions: if it takes a tough stance on a particular merger, it can be expected to take a similarly tough stance on similar mergers. This will most likely lead to some potentially welfare-increasing mergers never being seriously pursued, because every prohibited merger is connected with huge costs for the notifying parties. These could be called the dynamic effects of type I errors. The costs of type II errors are primarily caused by allowing welfare-reducing mergers. Allocative inefficiencies will be reflected in higher prices and lower quantities. But there is also a dynamic aspect to type II errors: if companies expect a liberal decision practice in merger control, this will affect the number and quality of the mergers notified. It is possible that mergers will be attempted for reasons other than improvements in efficiency, such as market power. Here, the competition authorities do not send signals that would reduce adverse selection in mergers (Besanko/Spulber 1993, 11). For identifying an optimum, the costs of both error types need to be compared. It is in the evaluation of the costs expected from committing the two error types that the approaches presented in this chapter differ: representatives of the Chicago approach would rather commit an error of type II than of type I, because they believe that errors of type I are – at least on average – more costly.
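The cost comparison just described can be made concrete in a deliberately simple sketch (all probabilities and cost figures hypothetical): a stricter stance lowers the probability of type II errors but raises that of type I errors, and the authority picks the stringency level with the lowest expected error cost.

```python
# Stylised trade-off between type I and type II errors (hypothetical numbers):
# stricter merger policy lowers P(type II) but raises P(type I); compare the
# expected error costs of several stringency levels and pick the cheapest.

def expected_error_cost(p_type1, p_type2, cost_type1, cost_type2):
    return p_type1 * cost_type1 + p_type2 * cost_type2

# (stringency label, P(type I), P(type II)) - stricter => more type I, fewer type II
policies = [
    ("lenient", 0.05, 0.30),
    ("medium", 0.15, 0.15),
    ("strict", 0.30, 0.05),
]
C1, C2 = 100, 60  # assumed costs per type I / type II error
best = min(policies, key=lambda p: expected_error_cost(p[1], p[2], C1, C2))
print(best[0])  # -> lenient
```

With type I errors assumed costlier (C1 > C2), the lenient stance minimises expected costs, mirroring the Chicago position discussed next; reversing the cost assumption reverses the ranking, mirroring Harvard.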
Type II errors can be corrected ex post, but there is no clearly identifiable ex post correction mechanism for type I errors. Mergers that are not efficiency-enhancing but that are passed nevertheless are still subject to the market test: if other producers are more productive or meet consumer preferences better than the merged company, the new company will lose market share – and profits. If it is too large, capital markets are expected to correct for this (Manne 1965). In many jurisdictions, competition authorities can let mergers pass but can check the behaviour of firms that are supposed to dispose of a market-dominant position. This is thus an additional channel for keeping the costs of type II errors low. But if efficiency-enhancing mergers are wrongly prohibited, there is no ex post market test: the efficiencies simply cannot be realised. Representatives of the Chicago approach thus argue that the costs of type I errors regularly outweigh the costs of type II errors. Judge Easterbrook, e.g., argues (1984, 15): "... the economic system corrects monopoly more readily than it corrects judicial errors ... in many cases the costs of monopoly wrongly permitted are small, while the costs of competition wrongly condemned are large."
Representatives of the Harvard approach seem more likely to argue in favour of committing type I rather than type II errors. Traditionally, representatives of the Harvard approach have been much more critical of the market than have representatives of Chicago. This is obviously reflected in their evaluation of the costs due to type I errors in comparison to type II errors. Representatives of Transaction Cost Economics have also explicitly dealt with the issue of wrong decisions in merger policy. In a recent paper, Joskow (2002, 6) writes: "The test of a good legal rule is not primarily whether it leads to the correct decision in a particular case, but rather whether it does a good job deterring anticompetitive behaviour throughout the economy given all of the relevant costs, benefits, and uncertainties associated with diagnosis and remedies." The dynamic effects of errors are clearly recognised here. Moreover, Joskow clearly recognises that our knowledge concerning cause-effect relationships is very limited and that enforcement agencies have only very limited knowledge at their disposal. Rather than taking a clear stance on which type of error should rather be committed, the policy advice seems to point to broad and general rules. This can, however, not easily be reconciled with the specific type of efficiency defence that TCE stands for, namely efficiencies based on asset specificity, uncertainty, and frequency. In conclusion, it can be said that with regard to competition theory, a lot has been learned over the last couple of decades. In theory, a competition policy based on sound economic reasoning should thus be possible. The problem to be solved is to devise rules that allow the intricacies of a specific case to be taken explicitly into account but are yet general and robust enough to allow for a high degree of predictability. Succumbing to economic trends is not good advice here, as they have often turned out to be short-lived fads.
Before we develop, in chapter IV, some proposals for how this could possibly be achieved, we turn to the description of some business trends that an up-to-date merger policy should probably take explicitly into account.
CHAPTER III TRENDS IN THE BUSINESS ENVIRONMENT
Optimal firm behaviour depends on the specific business environment within which firms act. If the business environment is subject to fundamental and rapid change, this will most likely trigger modified strategic behaviour by firms. This could also have consequences for merger policy: behaviour evaluated as restricting competition within the business environment at one point in time might not be restricting competition anymore after circumstances have fundamentally changed. Possible implications with regard to competition policy will then be drawn in chapter IV. Many of the important and fundamental changes in the business environment have been subsumed under the heading of "globalisation". The concept has generated angry opposition from many, even from some who directly benefit from it. It has also been met with scepticism by many economic historians, who point out that today's integration levels are not dramatically higher than those achieved at the end of the 19th century and are, in fact, often lower than those of more than a hundred years ago. Still, it is a widely used concept, and many of the changes in the business environment can be subsumed under this heading. We define globalisation as the integration of production and distribution processes concerning goods and services beyond the borders of nation-states. Two indicators are often used to demonstrate the underlying developments: (1) The increase in international trade; this has been a long-term trend since the end of World War II that has picked up considerable speed since the 1980s. Since then, growth in world trade has by far outpaced growth in world income. It is particularly noteworthy that trade in services has been consistently growing faster than trade in goods. (2) The increase in foreign direct investment; the growth in foreign direct investment has been consistently higher than the growth of world income.
In this chapter, some of the causes that have brought about these changes will be dealt with. Due to them, the costs of acting beyond the borders of the nation-state have decreased. These costs will henceforth be called international interaction costs (for the term and many similar arguments, see also Erb et al. 2000, chapter A). They consist of two categories, namely the costs of overcoming space and (nation-state) borders on the one hand and the costs of foreign direct investment on the other. The first category can be divided into three sub-categories, namely (1) the costs of transporting goods between two countries, (2) the transaction costs that have to be incurred when contracting with foreigners,12 and (3) the costs of complying with state-mandated barriers to entry such as tariff and non-tariff barriers to trade. Similarly, the costs of foreign direct investment can be broken down into three sub-categories, namely (1) the costs of financing a foreign direct investment, (2) the transaction costs that have to be incurred, and (3) the costs of overcoming state-mandated barriers to entry. Some of these costs must have fallen to make globalisation happen.
International Interaction Costs
- Costs of Overcoming Space and Borders (completely globalised)
  - Transport Costs
  - Transaction Costs
  - Costs of Overcoming State-Mandated Barriers
- Costs of Foreign Direct Investment (partially globalised)
  - Costs of Finance
  - Transaction Costs
  - Costs of Overcoming State-Mandated Barriers

Figure 6: Overview of International Interaction Costs
We propose to distinguish between decreases in cost that are due to policy changes (liberalisation, deregulation) and those that are due to economic or technological changes. Section one will deal with policy factors and section two with economic and technological factors. Section three contains some conclusions.
1. LIBERALISATION AS A DRIVING FORCE OF GLOBALISATION
1.1. General Trends
Since the 1980s, the world has seen an unprecedented degree of liberalisation. It has taken place within nation-states as well as in regional agreements and on the world
level. Liberalisation was not confined to specific sectors but could be observed across the board, i.e., with regard to goods, services, and capital, but also the right to work and invest in countries other than the home country.
1.1.1. Liberalisation within nation-states
Substantial deregulation was initiated by the Reagan administration in 1981. It led, inter alia, to drastic deregulation in the airline and telecommunications industries as well as to substantial cuts in welfare programs. Margaret Thatcher initiated a similar policy in the UK. In the UK and continental Europe, it not only led to the first steps in deregulation but also to far-reaching privatisation programmes comprising all sorts of utilities such as post and telecommunications, the transport sector, and energy. Many of these had traditionally been state-run in Europe. Privatisation was often accompanied by deregulation, leading to the emergence of new suppliers and to dramatic changes in prices as well as quality. In order to make their locations as attractive as possible, many industrialised countries attempted to reduce corporate taxes. The countries of Central and Eastern Europe have been subject to fundamental transition processes. The creation of an institutional framework for a market economy was often accompanied by mass privatisation programmes as well as by the opening of markets to international trade and to foreign direct investment. Both of these factors have led to the rapid integration of the countries of Central and Eastern Europe into world markets. This development will be sped up even further by their membership in the EU. Many developing countries have fundamentally changed their development strategies since the early 1980s.
Whereas the countries of East Asia had decided relatively early to implement an export diversification strategy, thus trying to integrate into the world economy by focusing on their respective comparative advantages, the countries of Latin America had stuck to the strategy of import substitution, i.e., trying to prevent integration into the world economy by producing as many goods as possible at home. The result of this natural experiment could not be any clearer: whereas the countries of East Asia have substantially increased their average standard of living and are often called the Asian Tigers, the standard of living has at times even deteriorated in Latin America. This has led to a reorientation towards liberalisation and deregulation in many less developed countries. This reorientation is in no way confined to Latin America but can, e.g., also be observed with regard to India and other important states among the less developed countries. Some figures on GATT/WTO membership can highlight the recognised importance of integrating one's economy into the world economy: in 1980, the GATT had 84 members; in 1990 (i.e., before the creation of many new states as a consequence of the demise of the Soviet Union), it already had 95 member states. This number had further increased to 135 by the year 2000 and reached 146 in 2004. Among the new members is China, which joined the WTO in 2001. This is a significant development because China is not only the most populous country, but has also been the economy with the largest growth rates for quite a number of years.
42
CHAPTER III
Among developing countries, it is by far the most important recipient country of foreign direct investment.

1.1.2. Liberalisation by regional integration

The number of regional trade zones and unions notified with the GATT substantially increased during the 1980s and 90s. From the point of view of economic theory, their effect on world trade is ambivalent: the net effect depends on the size of the trade-creating effects within the regional agreement relative to the size of the trade-diverting effects with regard to trading partners who are not members of the zone (Viner 1950). Every regional trade association needs to be notified with the GATT because – due to its regional focus – it is not in compliance with the GATT principles of the most-favoured-nation clause and non-discrimination. Regional trade associations are built on the principle of discriminating between members and non-members on the basis of geography. When the GATT was founded, it was hoped that regional trade associations would be an intermediate step towards worldwide integration. This is why an exception was built into the GATT rules right from the outset.

In practice, regional trade associations have led to substantial liberalisation between member states in the past two decades. The European Union is clearly the most remarkable example, which has had enormous effects with regard to liberalisation and deregulation: the completion of the Common Market with the four basic freedoms (with regard to goods, services, capital, and settlement) was a very important step. By further reducing state-mandated barriers to trade, it has led to a significant decrease in international interaction costs. The introduction of a single European currency has further reduced relevant transaction costs. The trade-creating effects are further amplified by the recent round of accession. In the medium term, they might increase further with the possible membership of Bulgaria, Romania, Croatia, the former Yugoslavia, Albania and possibly even Turkey.
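Viner's distinction can be condensed into a minimal numerical sketch. All figures and names below are invented for illustration and are not taken from the text: the net welfare effect of a customs union is positive only if the gains from newly created trade outweigh the losses from trade diverted away from lower-cost outside suppliers.

```python
def customs_union_net_effect(trade_created, trade_diverted):
    """Net welfare effect in the spirit of Viner (1950):
    gains from trade creation minus losses from trade diversion.
    Both arguments are lists of welfare changes in hypothetical units."""
    return sum(trade_created) - sum(trade_diverted)

# Invented example: creation gains dominate, so the union raises welfare.
net = customs_union_net_effect([3.0, 2.5], [1.0, 0.5])  # 4.0
```

Whether the sign is positive or negative is an empirical question for each regional agreement, which is why theory alone calls the effect on world trade "ambivalent".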
Additionally, many of the integration effects also apply to the EEA (i.e., to Norway, Iceland and Liechtenstein). In North America, the North American Free Trade Agreement (NAFTA) has led to intensified economic integration between Canada, the U.S. and Mexico. In South America, Mercosur (currently comprising Argentina, Brazil, Paraguay and Uruguay) has not been quite as successful. There are various plans for regional integration associations combining the states of the Americas in one organisation. In other parts of the world, activities to reach higher levels of regional integration have also intensified over the last decade.

1.1.3. Liberalisation on a worldwide scale

On a worldwide scale, the most important development of the last decade surely was the establishment of the World Trade Organisation (WTO), which extends the General Agreement on Tariffs and Trade (GATT). Besides further reducing tariff and non-tariff barriers with regard to goods (the traditional focus of the GATT), a number of additional agreements were integrated into the WTO. These are the General Agreement on Trade in Services (GATS), an agreement on trade-related investment measures (TRIMS), and an agreement on trade-related aspects of intellectual property rights (TRIPS). Additionally, a number of narrower changes in world trade were agreed upon as a consequence of the so-called Uruguay Round. These include, e.g., substantial improvements with regard to trade in textiles and fibres (phasing out the so-called Multifibre Arrangement). All these steps have helped to decrease various components of international interaction costs. They have been decreased further by improvements in some of the internal decision-making procedures used in that international organisation. Before the Uruguay Round, disputes could, e.g., only be settled unanimously, i.e., the state suspected of not having complied with GATT rules had to agree to the sanctions carried out against it. Needless to say, the chances of ever being convicted were thus extremely low. This has improved considerably since the procedures were changed accordingly. Over the last ten years, the GATT/WTO has become a truly global organisation, with membership now at 146, up from 95 in 1990.

After having looked at liberalisation and deregulation as factors influencing international interaction costs from a geographic point of view, we will now look at recent developments from a sector-specific view.

1.2. Sector-Specific Liberalisation

1.2.1. Liberalisation of goods markets

When the GATT was founded in 1947, tariffs on industrial goods averaged around 40%. Today, average import tariffs are lower than 5%, with further reductions to be implemented. About half of all industrial products traded across borders are not taxed at all anymore. Nevertheless, there was a countervailing trend: more and more non-tariff trade barriers were substituted for tariffs. This problem has long been recognised, however.
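The tariff figures just quoted translate directly into landed-cost terms. A back-of-the-envelope sketch (prices invented, purely illustrative):

```python
def landed_cost(import_price, ad_valorem_tariff):
    """Price of an imported good inclusive of an ad valorem tariff."""
    return import_price * (1.0 + ad_valorem_tariff)

# Average industrial tariffs: ~40% at the GATT's founding vs. ~5% today.
cost_1947 = round(landed_cost(100.0, 0.40), 2)   # 140.0
cost_today = round(landed_cost(100.0, 0.05), 2)  # 105.0
```

On a good with a world price of 100, the tariff wedge has thus shrunk from 40 to 5 units, which is the sense in which state-mandated trade barriers have become a minor cost component.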
One result of the Uruguay Round was that all non-tariff trade barriers should be transformed into tariff equivalents and that these should subsequently be reduced. These trends have led to a reduction in the costs that need to be incurred in order to comply with state-mandated barriers to trade. As just pointed out, the protection of intellectual property rights was improved by establishing TRIPS as a part of the WTO. The more effective protection of intellectual property rights means that the costs of foreign direct investment have been reduced, because companies investing – and producing – abroad need to spend fewer resources on protecting their rights.

The substantial reduction in international interaction costs is the result of the described liberalisation. For many firms, the decomposition of the value chain into many parts that are produced in various countries becomes the cost-minimising strategy. To the degree that the liberalisation of the goods markets has led to a decrease in transaction costs, the answer to the make-or-buy question will be modified. Looked at from the other side, this trend means that for a number of companies, entry into markets that had until now been protected by barriers becomes attractive. This is, of course, also true on the procurement side, with global sourcing being the pertinent catchword.

The WTO also includes an agreement on the harmonisation of standards (the Technical Barriers to Trade Agreement). Its consequences are primarily relevant for industrial goods. Standards have been a traditional non-tariff trade barrier, their effect often being the protection of domestic producers. The harmonisation of standards will often lead to increased competitive pressure precisely because the protection awarded by way of the standard has ceased to exist. But the harmonisation of standards can have yet another effect: it can also accelerate the decomposition of the value chain. If suppliers of various inputs can guarantee that their products comply with certain international standards, this can substantially decrease transaction costs (in particular monitoring costs), and this can make supply from independent suppliers attractive. In addition to lowering monitoring costs, another component of transaction costs is also affected, namely the search costs that have to be incurred in order to find a contracting partner.

1.2.2. Liberalisation of capital markets

International financial markets have been growing at an unprecedented pace since the beginning of the 1970s. Various measures have been used as a proxy for this growth: the outstanding volume of treasury bills as well as that of bonds issued by private corporations are two possible indicators; the volume of currency trade is another. These changes have been made possible by structural changes in the financial markets, such as the development of new financial instruments, the emergence of new intermediaries, and fierce and often global competition among various suppliers of financial services.
But these changes would have been impossible had there not been a liberalisation of capital market and currency constraints. In 1970, only 35 countries had accepted the IMF standards of capital convertibility; by 1994 the number had risen to 90, and today 106 countries have accepted these standards (IMF 2004). For the individual company, these developments mean substantially lower costs for financing equity, but also for raising venture capital. It has often been argued that large firms have advantages in financing new activities. Increased competition in capital markets means that this advantage has decreased, if not vanished entirely. It has even been argued that convincing product and export strategies are highly welcome and can be a source of financing advantages quite independent of the size of the respective company.

1.2.3. Facilitation of Foreign Direct Investment

Foreign direct investment can be impeded by a host of non-tariff barriers: lengthy permission procedures are only the most obvious example. Needless to say, these can make foreign direct investments unattractive, and they thus function as a protection of domestic producers. Over the last number of years, a host of bilateral and regional agreements putting foreign direct investment on firm ground have been
concluded. It has been estimated (Koch 1997, 219) that currently some 16 regional and some 1,100 bilateral investment agreements exist. The focus of some of the WTO agreements is broader. In TRIMS, e.g., the contracting parties commit themselves not to establish regulations that are incompatible with the principle of national treatment and the prohibition of quantitative restrictions. GATS – which will be mentioned in the next sub-section – also contains some rules pertaining to the establishment of subsidiaries and the free movement of personnel.

The mindset of many governments of less developed countries has fundamentally changed with regard to foreign direct investment over the last decade: it was often interpreted as an attempt of big capital to become even richer to the detriment of the less developed countries. In the meantime, many governments seem to have realised that foreign direct investment is crucial for their domestic development and that it can make all parties involved better off. These developments have led to a reduction in international interaction costs. It can thus be conjectured that they are one of the reasons for the considerable increase in foreign direct investment that has been taking place since the mid-1980s.

1.2.4. Liberalisation of service markets

The importance of the service sector has steadily grown over the last couple of decades. In the most advanced economies, it accounts for 67.2% of gross national product (average for the OECD members as of 2001; OECD 2004). Many service activities have been liberalised at the level of the nation-state since the 1980s. These efforts have been complemented by efforts to liberalise international trade in services. The General Agreement on Trade in Services (GATS), which is part of the WTO, is probably the most important of these agreements.

2.
ECONOMIC AND TECHNOLOGICAL FACTORS

After having looked at policy changes that led to a reduction in international interaction costs and thus made globalisation possible, we now turn to economic and technological factors that have had similar effects. It is, of course, possible that the two groups of factors reinforce each other. This will briefly be dealt with in the conclusion of this chapter. In this section, six economic and technological factors will be presented. Factors that can be grouped as supply-side factors are (1) rapid technological change, (2) increasing mobility of supply, (3) reduction in transport costs, and (4) the Internet as a multi-purpose tool for business. After having presented them, we will turn to two demand-side factors, namely (5) the homogenisation of preferences, and (6) the rapid change in consumption patterns.

2.1. Rapid Technological Change

Compared to the 1970s, the length of product and innovation cycles has been cut in half (Leker 2001). This has increased competitive pressure immensely: in order to
make investments in research and development profitable, large quantities of new products need to be sold fast on a global scale. Management consultants have termed this the “multi-domestic” strategy (Leontiades 1985). Another attempt to economise on R&D costs has been to share them with competitors (“strategic alliances”). These have been evaluated critically by a number of competition authorities. Drawing on the insights of transaction cost economics, many of them can be explained as an attempt to economise on an important cost component, in this case R&D (Voigt 1993 for an evaluation of strategic alliances from a competition policy point of view; Lopez 2001 for recent attempts of competition authorities to evaluate them as an attempt to monopolise).

The halving of product and innovation cycles has far-reaching consequences for the competitive process: innovative leads cannot be conserved anymore. It is often highly unlikely that today’s dominant position that is due to superior technology will persist for long. The time dimension has thus increased in importance. To quote an often-cited example: innovation cycles in the chip industry are so short that once a firm has a certain lead, its competitors will not even try to imitate it within that cycle but will immediately invest in the next generation in order to gain a lead there. It has been observed that this is closer to the notion of competition for the market than competition within the market. If relevant markets were narrowly defined, one would then regularly find market shares of 100%. Shorter product cycles also mean that newcomers or incumbents from related markets have better chances to enter these markets. The importance of these trends is amplified by the very rapid development of the communication and data processing industries.

In the past, product cycles were often characterised by an extended degeneration phase.
At times, competitors knew each other and there was not much technological change. These are often good prerequisites for successful collusion. This has changed for two reasons: the degeneration phase tends to become ever shorter, and if there is a degeneration phase of considerable length, competitors from low-cost countries will be attracted into the market. The chances for successful collusion have thus often become rather slim.

2.2. Increasing Mobility of Supply

It has been observed that expertise in a number of basic production techniques can at times enable firms to enter familiar markets without having to invest much in terms of learning costs. If this is true, it would mean that the relevance of supply-side substitutability has to be taken into account more explicitly: should the price-cost ratio increase in a certain market, firms that have the capability of entering this market without having to sink substantial amounts of costs would be credible potential entrants into that market.

2.3. Developments in Transport Costs

Transport costs are one component of international interaction costs. As a consequence of the liberalisation and deregulation described in the previous section, they
have decreased substantially over the last couple of decades. Another factor contributing to their decrease was the trend towards uniform standards, a third factor being innovative logistics concepts. Baldwin/Martin (1999) have estimated the development of various categories of transport costs for the period between 1940 and 1990. According to their calculations, the costs of sea freight have dropped some 50%, airfreight has dropped some 80%, information transmission via satellite has dropped 90% (here, the relevant period is the one between 1980 and 1990), and the transatlantic transmission of data has decreased some 98%, all in real terms. Note that their estimates end in 1990. In the meantime, a number of important developments have taken place that should have led to further substantial cost cuts: railways have been partially privatised or at least deregulated in many countries. This has not only led to decreases in transport charges but also to improvements in service quality. The liberalisation of the road freight business within the European Union only occurred in 1993. It also led to additional reductions in transport costs.

The reduction of transport costs has been so important that many firms have been able to reduce the number of plants and increase average plant size. The reduction of transport costs is thus an important determinant of a firm’s plant structure. Traditionally, transport costs could be an important cost component preventing the entry of new competitors. Due to the dramatic decreases, this effect should have largely vanished.

2.4. The Internet

It could be argued that the Internet is a medium for transporting information and that it should therefore have been discussed in the last subsection. This is, of course, true. Yet, we have decided to discuss the Internet as a separate factor because so many of today’s fundamental changes in business are closely related to the development of the Internet.
Today, all digitalised information can be transported via the Internet at almost no cost. The Internet is, of course, not a one-way medium and thus does not only allow for the transport of information, but also for communication, i.e., the mutual exchange of information. From the point of view of firms, three main channels of communication can be meaningfully distinguished: (1) intra-firm communication, (2) communication with other firms (business-to-business, B2B), and (3) communication between the firm and its customers (business-to-consumer, B2C). Although the Internet allows firms to exchange information with anybody who has access to a computer, some communication is intentionally restricted to the boundary of the firm. So-called intranets that are based on Internet technology can be an important tool for saving on organisation costs. It is important to note that the Internet does not only allow for savings on transaction costs – to be discussed in a minute – but also for savings on organisation costs. This means that, ex ante, little can be said about the direction in which the optimal size of the firm develops due to the diffusion of the Internet. Whether firms become larger or smaller depends on the relative magnitude of the savings in transaction costs compared to those in organisation costs.
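The last point can be condensed into a minimal comparative-statics sketch. This is a deliberate simplification; the function name and numbers are ours, not the authors':

```python
def firm_size_tendency(transaction_cost_saving, organisation_cost_saving):
    """Crude sketch of the make-or-buy margin: if the Internet lowers the
    cost of using the market (transaction costs) by more than the cost of
    running a hierarchy (organisation costs), 'buy' gains relative to
    'make' and firms tend to shrink; in the opposite case they tend to grow."""
    if transaction_cost_saving > organisation_cost_saving:
        return "smaller firms: more activities bought on the market"
    if transaction_cost_saving < organisation_cost_saving:
        return "larger firms: more activities organised in-house"
    return "indeterminate"

# Hypothetical numbers only:
tendency = firm_size_tendency(transaction_cost_saving=8.0,
                              organisation_cost_saving=3.0)
```

The point of the sketch is precisely that the sign of the comparison is an empirical matter, which is why nothing can be said ex ante about the direction in which optimal firm size develops.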
During the Internet boom, B2C was expected to grow rapidly. In the meantime, B2B has grown much faster. It has helped many companies to achieve substantial savings in their sourcing activities. The Internet allows both firms and private consumers to gain more transparency with regard to (1) the range of available products, (2) their quality, and (3) their price. Thanks to search engines such as Google or Altavista, many products can be searched for on a global scale. Markets that used to be very opaque have turned into transparent ones. Take the market for antiquarian books: a couple of years ago, many used book dealers did not even have an incentive to publish lists of the books they had in stock. Now, the stock of many antiquarian bookstores can be browsed through within seconds.

The transparency-enhancing effect is, however, not confined to products whose existence one is already aware of. The Internet can also be used to gain an overview of the specifications of many different products. In many cases, this will lead to increased demand substitutability. The Internet can also be used to get information concerning the quality of goods one is considering acquiring. If I look for a certain book on “amazon.com”, I will not only get a description of the book, its title page, some sample pages and so forth, but also a number of reviews from people who have (supposedly) read the book. If I am uncertain as to which travel guide to buy, this information can be decisive for my purchase. In the meantime, some portals have specialised in collecting and providing customer reports on a wide range of products (“ecomment.de”). Suppose a potential consumer has collected some information on the products available for her needs and has compared qualities using the Internet. She would now like to buy the product, of course, at the lowest possible price. Again, some portals have specialised in finding the lowest price on the net (“BestPrice.com”).
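In essence, such a price portal performs a simple minimisation over the offers it has indexed. A toy sketch (shop names and prices invented; real portals of course add crawling, matching, and ranking on top):

```python
# Toy version of what a price-comparison portal does for one product.
offers = {"shop_a": 19.99, "shop_b": 17.49, "shop_c": 18.95}  # EUR, invented

cheapest_shop = min(offers, key=offers.get)  # shop with the lowest price
cheapest_price = offers[cheapest_shop]
```

Because the consumer's search cost per additional offer is now close to zero, the number of offers effectively compared rises sharply, which is the mechanism behind the transparency effect described above.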
Increased transparency can be expected to channel demand to low-cost suppliers. It will therefore increase price competition. The paradigmatic textbook case for monopoly power is this: a supplier has a monopoly and is facing atomistic, unorganised consumers. Traditionally, consumer interests have indeed been notoriously difficult to organise. The Internet has not only decreased the costs of running an existing hierarchical structure (as discussed above) but has also, for the first time, increased the chances for people with similar interests to coordinate their behaviour. Examples are portals that offer lower prices as the number of customers increases. In the pre-Internet days, the costs of finding consumers interested in buying identical products used to be prohibitive. Due to the low cost of using the Internet, this has changed. These costs of getting organised have not only been decreasing for final consumers. Producers who co-operate in their sourcing activities are able to establish buyer power and will often be able to reduce costs substantially. The best-known example of this is Covisint (Covisint.com), a venture of a number of automobile companies. Cost savings are, however, not confined to the demand side of the market: thanks to the Internet, suppliers can often penetrate markets without having to incur heavy investments in the creation of a distribution network, a brand, etc.

Many orthodox classifications are becoming outdated due to developments induced by the Internet. Synthetic creations like “prosumer” or “coopetition” are evidence of
that development. “Prosumer” is a blend of “producer” and “consumer” and shows that this traditional division tends to become blurred. “Coopetition” is a blend of “cooperation” and “competition” with a similar bent. It has been observed that the traditional distinction between “markets” and “hierarchies” is no longer sufficient for describing the complexities of real life: markets tend to become ever more organised (“ebay.com”), while hierarchies tend to acquire an increasing number of market-like elements. This is a trend that has long been analysed by representatives of transaction cost economics under the heading of “hybrid forms” (Williamson 1985).

The Internet can be expected to lead to additional demand for policy changes: as a distribution channel, national borders are often negligible (the sale of software products that are distributed via the net is an example). At other times, they do play a certain role. One can order from the Internet pharmacy Doc Morris via the Internet, but delivery of the product still has to rely on traditional distribution channels, and national borders can still function as barriers to trade. The possibility of lower prices can be expected to lead to further demands for liberalisation and deregulation. In many cases, it can be expected to lead to lower prices and higher consumer welfare.

2.5. Homogenisation of Preferences

The hypothesis that preferences will become ever more similar on a worldwide scale has been around for quite some time (see, e.g., Levitt 1983) and has been highly disputed. It can be decomposed into two seemingly contradictory trends: on the one hand, the trend toward ever more similar preferences, and on the other, the trend toward ever more specific preferences. As long as they both occur simultaneously and on a worldwide scale, they might have far-reaching effects on firm behaviour.
There are three arguments in favour of consumer preferences becoming more alike beyond cultural or nation-state borders: firstly, due to the improvement and the worldwide availability of mass media, information is shared on a worldwide scale; knowledge concerning the consumption habits of others could, from that point of view, influence one’s own preferences. Secondly, the growth in tourism over the last decades has brought people belonging to diverse cultures into direct contact with each other. And thirdly, higher income levels have made lifestyles more similar across many countries.

For firm behaviour, the trend towards the homogenisation of preferences can have far-reaching consequences: certain marketing strategies are no longer confined to specific countries but can be used in a variety of world regions, and economies of scale in promotion and advertising can be realised. This means that international interaction costs have been decreasing. The use of such strategies is no longer confined to large companies; small and medium-sized companies can also apply them.

On the other hand, trend experts have long reported a tendency towards an individualisation of lifestyles. The number of archetypal consumers, whose preferences and desires market research companies are after, has substantially increased. The individualisation of lifestyles means a trend towards an ever-finer differentiation of
products. This means that niches for specialised producers open up and the danger of a strategically induced market foreclosure seems rather negligible.

2.6. Rapid Change of Consumption Patterns

Many industries have experienced rapid changes in consumption patterns over the last couple of years: margarine turned from a cheap substitute for butter into a fancy lifestyle product; Adidas footwear and textiles transformed from old-fashioned sportswear into a hip product to be worn in cool clubs on Saturday nights. There are many more examples. Many of these changes were hardly predictable for the companies concerned. They can radically change a firm’s business environment: all of a sudden, marketing can become crucial. But many of these changes are also unpredictable for competition authorities. These changes mean that demand substitutability becomes even more difficult to ascertain: at least some products that used to belong to some sportswear market have become part of some fashionwear market.

3. CONCLUSIONS

In this chapter, we have looked at recent changes in the business environment. Today, it is often described as being shaped by globalisation. Globalisation was defined as the integration of production and distribution processes concerning goods and services beyond the borders of nation-states. Two aspects of globalisation were reiterated: the increase in international trade and the increase in foreign direct investment. Globalisation occurred because the benefits of international interactions increased relative to those of intranational interactions. The causes for these changes were divided into two groups: namely, changes caused by policy factors and changes caused by technological and economic factors. In order to spell out the changes in international interaction costs, they were divided into three cost categories having to do with international trade and three cost categories having to do with foreign direct investment.
With regard to policy changes, liberalisation in the form of deregulation and privatisation played an important role. These changes caused state-mandated barriers with regard to both international trade and foreign direct investment to shrink considerably. The liberalisation of capital markets has helped to decrease the costs of financing foreign direct investment. International interaction costs have thus been reduced due to policy changes.

The changes caused by technological and economic factors have an effect in the same direction. The secular decrease in transport costs has made international trade much more attractive compared with intranational trade. The Internet has reduced both organisation and transaction costs on a worldwide scale; it has thus also made international trade and foreign direct investment more attractive. Some of the economic and technological changes can be expected to lead to demands for further liberalisation. One example named above is the Internet, which makes possible new kinds of cross-border trade that are currently still prohibited. Globalisation can thus be expected to continue at its current pace. Firms will have to adapt their strategies in order to remain successful in a radically altered business environment.
CHAPTER IV

POSSIBLE CONSEQUENCES OF TRENDS IN THEORY (B) AND DEVELOPMENTS IN BUSINESS (C) FOR COMPETITION POLICY
1. INTRODUCTION

In chapter II, some recent theoretical developments were described. Chapter III served to present recent developments in the business environment and their consequences for the (strategic) behaviour of firms. In this chapter, possible consequences of the theoretical as well as the factual developments for competition policy will be examined. In the last section of chapter II, it was pointed out that the recent theoretical developments have led to new commonalities between representatives of the new industrial organisation and transaction cost economics, but that consensus was far from complete. Heterogeneity in competition policy recommendations will thus remain. This means that economists still do not come up with one single policy implication; rather, depending on the specific assumptions one subscribes to, a variety of different policy implications might be derived. We are not interested here in infighting concerning basic assumptions. Rather, we try to present the various possible policy implications that would follow from the various approaches as convincingly as possible.

This chapter deals with three important topics of competition policy. Firstly, the procedure used for assessing the compatibility of a proposed merger with the Common Market. Until April 2004, a merger was prohibited if it created or strengthened a dominant position. Starting in May 2004, proposed mergers will be prohibited if their consummation would “significantly impede effective competition … in particular as a result of the creation or strengthening of a dominant position” (Art. 3 EMR). The substantive criterion used to assess the compatibility of a merger with the Common Market has thus been expanded. Yet, there are two reasons still to focus on the Commission’s assessment of dominance: even after the reform, it is supposed to remain the most important reason for prohibiting a merger.
And, in describing Commission practice, we have to rely on the old substantive criterion, as systematic evidence concerning the application of the new criterion is not yet available. The new criterion as well as the newly passed Guidelines will, however, be dealt with explicitly in various sections of this chapter. Secondly, the procedures used to evaluate the severity of barriers to entry are analysed. This is important in order to determine whether the behaviour of market incumbents can be expected to be constrained by competitors who are not actually
competing but who could be expected to enter the market should super-normal profits appear to be realised on a specific market. This is the topic of potential competition. Strictly speaking, the assessment of barriers to entry and of potential competition needs to be done before dominance can be assessed; section two is thus really a part of section one. On the other hand, these topics have received increased attention due to both theoretical and empirical developments, and it was therefore decided to deal with them in a separate section. Thirdly, the possibility that not a single firm but a (small) number of firms will jointly dominate a market has played an important role in the decisions of the Commission. Some of its decisions have been highly publicised, the last one being the AIRTOURS/FIRST CHOICE decision that was brought before the Court of First Instance by AIRTOURS. The Court of First Instance found that the Commission had committed a number of grave mistakes and reversed its decision. It is therefore timely to have a closer look at the concept of collective dominance and its potential use in competition policy. As a consequence of the recent reforms, the Commission now prefers to speak of “coordinated effects” rather than of collective dominance. We have decided to deal with this change much in the same way as we deal with the change of the substantive test of merger policy: namely, to describe and evaluate the way the Commission has dealt with collective dominance in the past, and to address possible consequences of the recent changes in the outlook that closes this chapter. The three parts are all organised in an identical fashion: we first set out to present the standard theory and the competition policy consequences that follow. We then present some modifications that might be in order due to theoretical developments (subsections 2) or to developments in the business environment (subsections 3).
The fourth step consists in having a look at the current practice of the EU, and the last in proposing reforms. Proposals for reform will always be made bearing the importance of predictability in mind.
2. FROM MARKET DEFINITION TO ASSESSING DOMINANCE
2.1. The Standard Approach
The overwhelming importance of the structure-conduct-performance paradigm for the development of competition policy was pointed out in chapter II. According to that paradigm, the structure of a market determines the outcome to be expected. In order to assess the likely consequences of a merger, one thus needs procedures to identify the current structure of a market as well as to predict the structure most likely to emerge after a merger has taken place. Simply put, the decision-making process of competition authorities consists of two steps: (1) identifying the relevant market, i.e., that market whose structure is to be analysed; (2) predicting the structure of the market after a merger has taken place and evaluating the consequences of that structure for the performance to be expected. If a merger is expected
to significantly impede effective competition and, in particular, to lead to the creation or strengthening of a dominant position, it will be prohibited. We now turn to the procedures used to carry out the first step, namely to delineate the relevant market. Before being able to calculate market shares or concentration ratios, one obviously needs an evaluation of which products compete with each other, i.e., which products belong to the same relevant market and which products can be safely ignored because they belong to a different market. The concrete delineation of the relevant market is crucial for assessing dominance: the broader the delineation, the lower the resulting market shares, and the lower the chance that a competition authority will judge a merger to create (or strengthen) a dominant position. Most textbooks mention that the relevant market has three dimensions, namely product, geography and time. Time, however, has seldom played a role in the delineation of the relevant market as carried out by competition authorities. In a world characterised by a very dynamic business environment, this could be a shortcoming. The issue will thus be taken up in section 2.3 below.
2.1.1. The relevant product market
Market power is said to be present if a producer can profitably raise prices by, e.g., five or ten per cent. Usually, increases in prices lead to a fall in demand. The price elasticity of a product is defined as the percentage change in demand in relation to a one per cent price change. If a five per cent price increase leads to a decrease in demand of ten per cent, the corresponding price elasticity would thus be –2. If the price elasticity of a product is very low in absolute terms (i.e., close to zero), this is a sign that no close substitutes for this product are available; otherwise, consumers would switch demand to a competing product.
The lower the price elasticity of a product in absolute terms, the higher the market power of its producer. Cross-price elasticities are defined as the percentage change in demand for a good y in relation to a one per cent price change of a good x. If price increases of good x do not lead customers to switch from x to y, the two will not be considered substitutes. A low cross-price elasticity is thus an indicator that the two products do not belong to the same product market, whereas a high cross-price elasticity indicates that they do. Competition policy experts have developed a variety of procedures that take these basic relationships into account in order to delineate the relevant market. The first and probably most widely used one is based on a “reasonable consumer” and the question whether she would consider two products as substitutable. If she interprets them as substitutable, they belong to the same market. This concept is often referred to as demand substitutability. There is one fundamental problem with this concept, namely its subjectivity. It is simply impossible to define a “reasonable consumer” by a number of abstract standards or criteria. If each of a dozen antitrust experts sits behind his or her desk and thinks very seriously about the products a reasonable consumer would consider close or adequate substitutes for a particular product, each of them will presumably come up with a list for which a number of arguments can be named. But the
chance that any of the twelve lists is identical to any other should be pretty slim. In other words, as long as research concerning substitutability is done only by introspection, its results will remain very subjective. Economists often pretend to know all the relevant preferences of actors without ever actually asking them. One could now be inclined to think that actually surveying consumers might lead to less subjective results. Surveys might indeed help to come up with some quantitative measure. Yet the subjectivity of the “reasonable consumer” can never be completely eliminated. Take as an example the guest invited to a dinner party who wants to give a little present to his host. For him, a bottle of wine might be just as good as flowers, a compact disc or a new book. The list is surely incomplete. But how exactly it is extended depends on individual preferences. Explicitly recognising this is difficult even when actual consumers are asked: in order to take it into account, surveys would have to be constructed as open-ended surveys. Depending on the time constraints and incentives of the interviewed, one can expect very long individual lists. Whether they substantially overlap with each other is a second question. Until now, we have implicitly assumed that other suppliers of a product will not react if one firm changes its prices. This is, however, not necessarily true. A given price increase could make the production of close substitutes, and thus market entry, profitable. This is often referred to as supply substitutability.
2.1.2. The relevant geographic market
Very similar notions are used to delineate the relevant market geographically. In competition policy, the relevant geographic market has traditionally been defined as that geographic area which is characterised by the absence of trade barriers and within which one can therefore expect prices to converge once transport costs have explicitly been taken into account (Elzinga 1994, 29).
Therefore, all those actors are supposed to belong to one geographic market whose individual behaviour determines the collective supply and demand conditions on a given market and who therefore determine the market price (Shugart 1990, 140). Actors not belonging to a geographic market are thus assumed not to influence prices (and quantities) on that market. It is worth emphasising that the criterion for delineating the relevant geographic market is not the equality of prices but rather their interdependence, which is created by arbitrage. The existence of transport costs and other related costs (such as tariff and non-tariff barriers to trade) is therefore not in itself sufficient to conclude that one is dealing with different geographic markets. As long as these costs are not prohibitive and a tendency for arbitrage thus exists, there is price interdependence and only a single geographically relevant market. The inverse is, however, not true: from the absence of trade in goods, one cannot conclude that one is dealing with two different markets.
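The arbitrage criterion suggests a rough empirical check: if two regions belong to one geographic market, their price series should move together even when their levels differ by a constant transport-cost margin. A minimal sketch with invented price data (the helper function and all figures are our illustration, not drawn from any actual case):

```python
def correlation(xs, ys):
    """Pearson correlation coefficient of two equal-length price series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical monthly prices of the same good in two regions.
region_a = [100, 102, 105, 103, 108, 110, 107, 111]
region_b = [p + 4 for p in region_a]  # tracks region A plus a transport margin

r = correlation(region_a, region_b)
# r is 1.0 here: fully interdependent prices despite different levels,
# consistent with a single geographic market.
```

Price co-movement is only indicative, since common cost shocks can produce correlation without arbitrage; such checks therefore complement rather than replace the substantive analysis.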
2.1.3. Defining relevant markets in practice: the hypothetical monopolist test
Many competition authorities all over the world use the hypothetical monopolist test in order to assess demand substitutability; it has also been coined the “SSNIP test”. SSNIP is an acronym standing for “small but significant non-transitory increase in price”. It has been used by antitrust agencies in Australia, Canada, New Zealand, Great Britain and the U.S. In its Notice on the definition of the relevant market for the purposes of Community competition law (97/C 372/03), the Commission explicitly mentions the SSNIP test as a tool for delineating relevant markets. The test is very simple: a group of products offered in a specific area can only be considered a market if a hypothetical monopolist could increase its profits by raising the price significantly, usually by 5 or 10 per cent. In other words, one tries to predict the effects of a SSNIP on the firm considering such a price increase. If many consumers switch to other suppliers, the SSNIP would not be profitable. This is taken as an indicator that (demand) substitutability is high and that the product market needs to be delineated more broadly. The product market is therefore broadened step by step by incorporating those products the consumers would switch to. The test is then repeated until a SSNIP would be profitable. The test is used to delineate markets in both their product and geographic dimensions. The economic rationale underlying the test is based on a well-known tool of microeconomics, namely the (own-)price elasticity and the cross-price elasticity. In general, an elasticity is defined as the percentage change of a dependent variable as a consequence of a change of the independent variable, also stated as a percentage change.
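The iterative logic just described can be sketched in a few lines. The margins, the fractions of demand lost at each candidate delineation, and the product names below are all invented for illustration:

```python
def ssnip_profitable(price, quantity, unit_cost, price_rise, demand_loss):
    """Would a hypothetical monopolist profit from raising price by
    `price_rise` (e.g. 0.05 for 5%), losing `demand_loss` of demand?"""
    old_profit = (price - unit_cost) * quantity
    new_profit = (price * (1 + price_rise) - unit_cost) * quantity * (1 - demand_loss)
    return new_profit > old_profit

# Broadening the candidate market step by step: the more products are
# included, the smaller the share of demand lost to outside substitutes.
candidates = [
    (["cola"], 0.40),
    (["cola", "lemonade"], 0.25),
    (["cola", "lemonade", "juice"], 0.08),
]

relevant_market = None
for products, demand_loss in candidates:
    if ssnip_profitable(price=1.0, quantity=100, unit_cost=0.5,
                        price_rise=0.05, demand_loss=demand_loss):
        relevant_market = products  # SSNIP profitable: stop broadening
        break
# relevant_market is ["cola", "lemonade", "juice"]
```

In practice, the demand responses would come from estimated elasticities or survey evidence rather than assumed switching fractions.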
The price elasticity thus tells us how the demand for a good changes when its price increases (decreases) by one per cent or, in the case of the SSNIP, by five or ten per cent. The cross-price elasticity tells us how the demand for good y changes as a consequence of an increase in the price of good x.
2.1.4. Predicting the post-merger structure
After the relevant market has been delineated, the expected post-merger structure needs to be predicted and evaluated. Besides barriers to entry, the most important factor is the concentration of the market. As already mentioned above, barriers to entry will be dealt with in the next section of this chapter. We therefore confine ourselves to a short treatment of market concentration here. It is determined by (1) the number of competitors and (2) their market shares. The general assumption with regard to concentration ratios is that the lower the number of competitors and the higher their market shares, the higher their oligopolistic interdependence. With it, the capacity to constrain competition either by explicit agreements or by tacit collusion might increase. The most important concentration measure used in antitrust policy is the Herfindahl-Hirschman Index (HHI), which is defined as the sum of the squared market shares (in per cent) of the firms in the market. It can thus take on values ranging between 0 and 10,000 (100²). In principle, drawing on clearly identified values of the HHI might help to improve predictability in merger control. But it should not be overlooked that the simplicity of the indicator also has some drawbacks. The degree of concentration in a given market, which the indicator is meant to express, is of course not directly linked to a certain behaviour of the competing firms. Depending on the propensity to act competitively – which has also been called the “spirit of competition” – a high degree of concentration can lead to highly competitive behaviour as well as to attempts to circumvent competition. The strategies chosen by competing firms are not exclusively determined by concentration ratios. If “conduct” is not exclusively determined by the concentration of a given market, then the “performance” to be expected in that market cannot be reliably predicted by drawing on concentration ratios. It thus remains unclear whether higher concentration ratios will lead to worse results – or to better ones. The following example illustrates this problem. Compare a symmetric Cournot duopoly with an asymmetric Stackelberg duopoly13 (Neumann 2000, 148): in the symmetric Cournot game, both firms have a market share of 50%, but the aggregate quantity supplied by them is only 2/3 of the quantity that would be supplied under perfect competition. In the Stackelberg game, the leader has a market share of 2/3 and the follower correspondingly of 1/3. The aggregate quantity supplied by the two firms would be 3/4 of the quantity supplied under perfect competition – and prices would be lower than in the Cournot game. From a welfare-economic point of view, one would thus prefer a Stackelberg outcome to a Cournot outcome. Higher values of the HHI are supposed to indicate trouble; in the example, the value for the Cournot outcome should thus be higher than that for the Stackelberg outcome. The exact opposite is, however, the case: the HHI value for the Cournot game is 5,000 (50² + 50²) and that for the Stackelberg game is approximately 5,555 ((200/3)² + (100/3)²). Exclusive reliance on the HHI can thus lead to wrong decisions.
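The index and the duopoly comparison can be reproduced directly (exact thirds are used for the Stackelberg shares):

```python
def hhi(shares_in_percent):
    """Herfindahl-Hirschman Index: sum of squared market shares in %."""
    return sum(s ** 2 for s in shares_in_percent)

cournot = hhi([50, 50])                # symmetric duopoly: 5,000
stackelberg = hhi([200 / 3, 100 / 3])  # leader 2/3, follower 1/3: ~5,555.6
# The welfare-superior Stackelberg outcome yields the *higher* index,
# illustrating why the HHI alone can point the wrong way.
```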
2.1.5. Assessing Single Dominance
After the relevant market has been delineated and the post-merger market structure predicted, an evaluation needs to be made: will the merged entity command a dominant position after the proposed merger has been executed? Competition theorists speak of single dominance if a firm is capable of enforcing prices that are above the competitive level. Every case in which prices are above marginal costs is taken as an indicator that the firm has some power at its disposal, or is dominant. Such a situation presupposes that a firm has some discretionary leeway that it can use to increase prices without having to face substantial reductions in demand. The less tightly a firm is constrained by its competitors, the closer it will be able to set its price to the profit-maximising monopoly price, at which marginal cost equals marginal revenue.14 Such leeway can be caused by the uniqueness of the product offered, the preferences of consumers for just this product, or absolute (cost) advantages on the input side. Figure 7 illustrates the economic effects of a dominant position drawing on an extreme case, namely monopoly. If the firm is able to set the monopoly price, the price will be pm instead of pc, which would result in the (perfectly) competitive case. Accordingly, the quantity sold is qm instead of qc. The triangle ABpc is
called the “consumer rent”. It represents the surplus of all consumers at the price pc: many consumers are willing to spend more than they actually have to, and a “rent” in the economic sense is the result. By reducing quantity and increasing price, the consumer rent shrinks. Parts of it are transformed into producer rent, i.e., the gain producers enjoy because they are able to sell above (marginal) cost. The main reason why monopoly situations are deemed welfare-reducing is the triangle CDB: this is the loss in welfare that results from switching from a competitive to a monopoly situation. It is this triangle that forms the underlying rationale for fighting dominance in competition policy.
[Figure 7: The consequences of a dominant position – monopoly price pm and quantity qm versus competitive price pc and quantity qc, with the deadweight-loss triangle CDB between the demand curve and the marginal-cost curve]
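The welfare arithmetic behind the figure can be made explicit for linear demand p = a - b*q and constant marginal cost c; the parameter values below are chosen purely for illustration:

```python
# Linear demand p = a - b*q, constant marginal cost c (illustrative values).
a, b, c = 100.0, 1.0, 20.0

p_c = c                     # competitive price: price equals marginal cost
q_c = (a - c) / b           # competitive quantity (point B in the figure)
q_m = (a - c) / (2 * b)     # monopoly: marginal revenue a - 2*b*q equals c
p_m = (a + c) / 2           # monopoly price read off the demand curve

producer_rent = (p_m - c) * q_m                    # transferred from consumers
deadweight_loss = 0.5 * (p_m - p_c) * (q_c - q_m)  # triangle CDB
# Here: p_m = 60 vs. p_c = 20, q_m = 40 vs. q_c = 80, deadweight loss = 800.
```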
The description of the standard approach used in merger policy is, of course, very coarse. There are numerous quantitative methods to delineate relevant markets used by various competition authorities, and there are also various ways to assess market structure. This section was not aimed at giving a detailed account of them but an overview. The tools used by the European Commission will be dealt with in more detail in subsection 2.4. Before that, however, we turn to some modifications that could be the consequence of recent theoretical developments as well as of changes in the business environment.
2.2. Consequences of Recent Theoretical Developments
A highly concentrated market structure was the single most important indicator of market power within the Harvard approach. The New Industrial Organisation on the one hand and Transaction Cost Economics on the other have pointed to some problems that might arise when structure is the central preoccupation of competition policy. Game theory stresses the possible divergence between individually rational behaviour and collectively beneficial results: in certain interaction situations, rational individual behaviour will lead to suboptimal results on the collective level; Adam Smith’s “invisible hand” will thus not work in some cases. The most famous of these interaction situations is, of course, the Prisoners’ Dilemma. Game theory has shown that even duopolists who could make themselves better off by explicit or tacit collusion will not necessarily be able to reach that result if the incentives not to co-operate with the other player are sufficiently large. Structure as such would thus be meaningless; other factors that increase the likelihood of successfully implementing joint maximisation strategies have to be checked for. Section 4 of this chapter contains an extended checklist that can be used to assess the likelihood of collusion, taking insights from game theory explicitly into account. Representatives of Transaction Cost Economics point to the fact that an obsession with structure might lead one to overemphasise the dangers of strategies pursued in attempts to monopolise and to underestimate the benefits of strategies pursued in attempts to economise. Williamson has often referred to the Harvard approach as the “inhospitality tradition” (e.g., 1985, 369f.). As was shown in more detail in chapter II, Williamson (1968) has made it very clear that trade-offs are inevitable.
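The trade-off can be made concrete with a stylised numerical sketch; the demand curve and all cost and price figures below are our assumptions, not taken from any actual case:

```python
# A merger raises price but lowers unit cost (linear demand p = a - b*q).
a, b = 100.0, 1.0
c_before, c_after = 50.0, 42.0   # unit cost falls through merger synergies
p_before, p_after = 50.0, 58.0   # market power lets the merged firm raise price

q_before = (a - p_before) / b    # 50 units sold pre-merger
q_after = (a - p_after) / b      # 42 units sold post-merger

deadweight_loss = 0.5 * (p_after - p_before) * (q_before - q_after)  # 32
cost_saving = (c_before - c_after) * q_after                         # 336
net_welfare_effect = cost_saving - deadweight_loss                   # +304
# Despite the price rise, welfare increases - Williamson's point exactly.
```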
Suppose two merged companies are able to pursue the strategy of a monopolist (i.e., raise price above marginal cost and reduce output accordingly). Focusing exclusively on market shares, such a merger would have to be prohibited. Williamson points out that this conclusion might be blatantly wrong from a welfare-economic point of view: before drawing conclusions based on looking only at the cost side of the merger, one should take a look at possible benefits, namely cost savings based on economies of scale or scope. If these outweigh the welfare losses, the proposed merger should be waved through. This is thus an argument in favour of an efficiency defence. In a study commissioned by DG Comp, Röller et al. (2000) have dealt with efficiencies and proposed to take rationalisation, economies of scale, technological progress, purchasing economies and the reduction of slack explicitly into account. Before an explicit case for introducing efficiency as a defence into merger policy can be made, one fundamental problem must be convincingly solved. If efficiencies are a merger defence, firms willing to merge have an incentive to grossly overstate the amount of efficiencies generated as a consequence of the merger. Similarly, directly affected third parties that expect synergies to be substantial, and who are correspondingly afraid of tougher competition, have an incentive to belittle their expected size. None of the concerned parties thus has an incentive to reveal information truthfully.15 What is thus needed is a mechanism to
reveal the information as truthfully as possible. Two general paths are possible: (1) improving the incentives of the directly involved parties to reveal their private information at least to a certain degree, and (2) having the revealed information monitored by outside experts. These two ways can be combined and are thus complementary. Williamson was not only the first to point to the trade-off between the welfare gains (efficiencies) and welfare losses (market power) of mergers, but also the one who stressed that the realisation of efficiencies is not necessarily confined to cost savings in production but may extend to savings in transaction costs. In order to take this possibility adequately into account, one should therefore analyse the three determinants that Williamson believes to be decisive for an optimal firm structure, namely the degree of asset specificity, the degree of uncertainty present in those activities that two firms aim to incorporate into one hierarchical structure, and the frequency with which these transactions will take place. Here, size might be the consequence of economising on transaction costs and not an indicator of market power or dominance. In addition to theoretical developments, progress in empirical techniques might also affect the necessity to delineate relevant markets. The disadvantages of market definition are widely acknowledged by now. If one could assess dominance without having to resort to market definition, this could be an improvement. It has been argued that the widespread use of scanner technology allows precise estimates of own-price and cross-price elasticities (the original contribution is Deaton and Muellbauer 1981).16 Based on these estimates, the extent to which a planned merger would reduce competition could be calculated.
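The idea can be sketched with a log-log regression on synthetic “scanner” data; the data-generating process, the noise level and the true elasticity of -2 are assumptions made purely for illustration:

```python
import math
import random

random.seed(0)
true_elasticity = -2.0
# Synthetic scanner data: quantities generated from q = 100 * p^e plus noise.
prices = [1.0 + 0.1 * i for i in range(30)]
quantities = [100.0 * p ** true_elasticity * math.exp(random.gauss(0, 0.01))
              for p in prices]

# OLS slope of log q on log p estimates the own-price elasticity.
xs = [math.log(p) for p in prices]
ys = [math.log(q) for q in quantities]
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
# slope recovers an estimate close to -2.
```

With real data, own- and cross-price elasticities are estimated jointly in a demand system (as in Deaton and Muellbauer’s approach), but the mechanics are the same.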
2.3. Consequences of Recent Trends in Business Environment
Chapter III was divided into two parts, namely trends in the business environment induced by policy changes and those induced by economic and technological factors. With regard to policy changes, it was observed that liberalisation in the form of deregulation and privatisation has taken place at the nation-state level almost everywhere and that regional and global agreements have often substantially increased the possibilities to do business beyond borders, either by trade or by foreign direct investment. With regard to delineating relevant markets and assessing dominance, this means that the geographic dimension of the relevant market has often considerably increased in scope. This will often translate into lower market shares of firms willing to merge and thus into a lower number of mergers to be prohibited by the competition authorities. The curious gap between textbook delineations of the relevant market and actual practice was already mentioned in chapter II: whereas textbooks often mention the necessity to consider time as a relevant dimension, this is hardly ever done in actual practice. If one looks at the underlying theory, this is in no way surprising: the Harvard approach is inherently static; changes as dynamic processes in historical time simply do not occur.
The world is, however, not a static place. Some of the recent developments in the business environment mentioned in chapter III have an important time dimension. Rapid technological change on the supply side and rapid changes of consumption patterns on the demand side mean that a position a firm holds at time t0 will not secure it a similar position at time t1. If significant product or process innovations are under way, high present market shares might therefore not be a good indicator of future market shares and therefore – according to the Harvard approach – of market power. Similar considerations follow from the rapid changes in consumption patterns. The three trends mentioned – shorter product cycles, rapid technological change, and rapid changes in consumption patterns – all mean that it has become more difficult to make reliable predictions concerning the structure of a market. Within the Harvard paradigm, predicting the structure of a market is, however, a necessity for making competition policy. We are thus dealing with a dilemma, namely the simultaneous necessity and impossibility of reliable predictions concerning the structure of a market. In section 2.5 of this chapter, a possible way out of the dilemma will be sketched. In chapter III, the notion of international interaction costs was introduced. It was shown that many of their components have substantially decreased. Transport costs, e.g., have decreased due to liberalisation and technological progress. The same can be said with regard to transaction costs. What some representatives of competition theory perceive as the most important constraint on competition, namely state-mandated barriers to trade, was also shown to have substantially decreased. In sum, international interaction costs have been on a secular decline. This long-term trend ought to be reflected in the delineation of the geographic dimension of relevant markets.
In the absence of relevant international interaction costs, the relevant geographic market should often be the global market. Business strategists (Porter 1990, 53ff.) have proposed to distinguish between (1) completely globalised markets, (2) partially globalised markets, and (3) regional markets. Completely globalised markets are characterised by only marginal international interaction costs with regard to both the trade and the investment dimension. Partially globalised markets, in turn, are those where there are (still) relevant costs of overcoming geographical space but where the costs of carrying out foreign direct investment have become marginal. Regional markets are those in which both the costs of overcoming space and of undertaking foreign direct investment are still prohibitive. Examples of globalised industries are the car industry and its supplier industries, the chemical industry, the machine tool industry, the component industry for electronic products, the consumer electronics industry, the computer software and hardware industries, and the aviation industry. The pharmaceutical industry is a good example of an industry that is currently moving from partially to completely globalised.17 The last aspect with regard to the delineation of the relevant market is its product dimension. At least two aspects encountered in the chapter on trends in the business environment deserve to be mentioned here again: the observation that command of one basic technology enables companies to supply a variety of products, and the liberalisation of capital markets. If the first observation is correct, then potential competition has considerably increased because barriers to entry have decreased for firms which command the basic technology crucial for the market under consideration. Barriers to entry will be discussed extensively in the next section of this chapter. They are mentioned here anyway because it is often argued that potential competition should already be recognised when the relevant market is delineated, and not only afterwards, when counterbalancing factors for concentrated market structures are examined. A very similar argument can be made with regard to the liberalisation of capital markets. Raising equity as well as outside capital has become easier both for companies considering entry into another jurisdiction and for entrepreneurs contemplating establishing a new firm. All these aspects should lead to a broader delineation of the relevant market in both its product and geographic dimensions.
2.4. Current EU Practice
After having described the standard approach towards identifying relevant markets (and assessing dominance) and having pointed out some consequences of trends in theory as well as in the business environment, we now turn to a description of the current status quo in European merger policy. This is the last step to be taken before proposals for improvement can be generated in the next subsection. Broadly speaking, two approaches appear possible in order to delineate current EU practice. The first consists in looking at official documents such as the Merger Regulation itself, but also the Commission’s “Notice on the definition of the relevant market for the purposes of Community competition law (97/C 372/03)”. The second consists of an analysis of the decision practice observed during the first decade of the Merger Regulation. The first approach could be called a de jure approach, the second a de facto approach.
As it is the factual decision-making of the Commission that is relevant for firms wanting to merge, the second approach is pursued here. In the description of current EU practice, we start by looking at the product dimension, as this seems to be the most important factor. The general statements concerning EU practice should be read with care: as just explained, we tried to identify EU practice primarily by looking at a large number of decisions. Yet a single, coherent practice might simply not exist. Examples drawn from specific cases at times appear more as examples of a EU practice than of the EU practice. A number of selected cases will be dealt with in considerable detail in chapter V.
2.4.1. The Relevant Product Market
Although the concepts of demand-side substitutability and supply-side substitutability are widely recognised by the Commission (see the Commission’s Notice) and the Community Courts, they have not been applied consistently in the past. The test of demand-side substitutability employed by the Commission seeks to determine which products are sufficiently similar to be regarded by users as reasonable substitutes for one another. In the case of AÉROSPATIALE-ALENIA/DE HAVILLAND, the Commission stated that a relevant product market comprises, in particular, all those products that are regarded as interchangeable or substitutable by the consumer with regard to their characteristics, their prices and their intended use. The Commission’s practice concerning the recognition of supply-side effects has been subject to harsh criticism. Neven et al. (1993, 77) wrote: “The procedures used for market definition are frequently inconsistent; in particular supply substitution is sometimes taken into account at the market definition stage and sometimes at the stage of assessing dominance, with usually no clear rationale for the difference in treatment. Supply substitution is also sometimes counted twice. The failure to take supply substitution into account will probably tend on average to result in excessively narrow market definitions. Although it is not possible to point with confidence to particular cases in which this bias has made a difference to the market definition adopted, we indicate one or two instances where it could have been significant.” This evaluation was written some two years after the European Merger Regulation had been passed. Here, we set out to ask whether the Commission’s practice has been modified during the last ten years. At times, the Commission points to supply-side substitutability. Where this is the case, however, it is often not incorporated into the delineation of the relevant market. An example of this practice is VOLVO/SCANIA. While the concept of supply-side substitution is recognised in practice, experience suggests that the Commission’s delineation of the relevant market focuses principally on demand-side considerations – supply-side substitution, if it is considered at all, tends to be more of an afterthought. Moreover, supply-side considerations play no role in the product market definition according to Form CO.
The Commission’s view of whether two products or regions should be included in the same relevant market thus depends almost exclusively on their substitutability from the perspective of consumers. A worrying aspect of the rare use of supply-side considerations by the Commission is that the Commission defines separate markets on supply-side grounds where demand-side considerations would suggest a single market. In these cases, the Commission used differences in conditions of competition and distribution to identify different product markets, although the products themselves were fully interchangeable or even identical. In the case of VARTA/BOSCH, e.g., the Commission introduced a distinction between starter batteries supplied to the original equipment market and those supplied to the replacement market. However, the Commission did not consider the relevant question of whether suppliers of replacement batteries would switch to supplying original equipment batteries if prices of the latter were to rise. Another example is the case of SCA/METSÄ TISSUE: here, the Commission defined separate markets for branded and private label products of toilet tissues and kitchen towels, although the products were produced by the same producers using the same technology and identical machines. This approach is problematic and cannot be easily reconciled with the concept of the relevant market. From the perspective of economic theory, a relevant market is defined as a set of products worth monopolising. This has entered into antitrust practice by way of the SSNIP test that was described above. As long as a market conjectured to be the relevant one is not sufficiently attractive for being monopolised, it cannot be the relevant market and needs to be broadened. When defining the relevant product market, the Commission has focused its analysis primarily on three factors:

(1) Physical characteristics of the product and service

The European Commission has stated that if two products are physically very different, to the extent that they cannot in fact be used for the same end use, they will not be considered to be substitutes. In the cases of RENAULT/VOLVO and VOLVO/SCANIA, e.g., the Commission defined separate markets for trucks (below 5 tons, between 5 and 16 tons, and over 16 tons), on the grounds that they were technically very different and had different end uses. If consumers in the segment between 5 and 16 tons can, in fact, not easily use larger trucks for the same purposes as the smaller ones, then the delineation of two distinct product markets seems to be justified. However, it appears at least not unlikely that there are also consumers who would substitute between the two segments under some circumstances. For instance, some consumers, in response to an increase in the price of 17-ton trucks, might choose to buy a 15-ton truck instead. Conversely, if the price of a 15-ton truck were to increase, some customers might switch to buying a 16- or 17-ton truck instead. Therefore, the question is whether enough consumers would switch demand in order to make an attempt to increase prices in one category unprofitable. In the cases of MERCEDES-BENZ/KÄSSBOHRER, VOLVO/SCANIA and MAN/AUWÄRTER, the Commission distinguished three individual segments of bus markets: the market for city buses, for inter-city buses and for touring coaches. The Commission defined the market narrowly, in part, because there were differences in the specific use of the bus.
Hence, it was argued that there was no demand-side substitutability between low-floor city buses and double-decker touring coaches with toilet, kitchen and video. The various types of buses can, however, be produced using exactly the same machinery. Hence, the possibility of supply-side substitution imposes constraints on the pricing of the various types of buses. This suggests that the relevant product market did indeed encompass all three specific types of bus.

(2) Product prices

The Commission has often inferred that two products are not reasonably substitutable if they have substantially different prices. Examples of this line of reasoning can be found in NESTLÉ/PERRIER, PROCTER&GAMBLE/VP SCHICKEDANZ, AÉROSPATIALE-ALENIA/DE HAVILLAND and KIMBERLY-CLARK/SCOTT PAPER. Price differences have therefore been used to distinguish between products that are perceived as different products by consumers but that may be functionally substitutable. If differences in price are caused by different characteristics of the good, they can, indeed, justify the argument for the existence of more than one relevant market. But at times, differences in price are completely unrelated to functional substitutability, such as between branded labels and distributors’ own labels. Differences in price are thus not a sufficient condition for delineating two or even more different markets. Were the price of a premium product to rise substantially, consumers might be willing to change to another good which is not in the premium segment of the market but which has been cheaper all along. The argument can be highlighted by drawing on a real-world example: in the cases of KIMBERLY-CLARK/SCOTT and SCA/METSÄ TISSUE, branded products compete with unbranded, i.e., private label equivalents. A question central to the assessment of competition was whether private label toilet tissue was in the same relevant market as branded tissues. Often, branded tissue costs more than private label tissue, despite the fact that there is little to distinguish between them on a physical basis. Indeed, manufacturers of branded products produce many private label products. Using differences in absolute prices to delineate relevant markets would place the branded product and the private label product in separate relevant markets. It is obviously the case that part of this price differential is due to perceived quality differences, which are valued by consumers. Still, one would have to ask whether the pricing of branded label products was constrained by the prices of private label products. This is clearly the case; hence, price differences are not sufficient for identifying two different markets. Rather, it is the interdependence of prices that matters.

(3) Consumer Preferences

The Commission also regards consumer preferences as relevant to delineating relevant markets. Despite the existence of substitutes at similar prices, the Commission may hold that consumer loyalty will limit substitution away from the product concerned following a price rise. Whilst question marks can be put against using consumer loyalty for the purpose of either market definition or barriers to entry, we do not take issue with this fundamental question here. If consumer loyalty is particularly high, then a price rise by a hypothetical monopolist may not induce substitution, suggesting separate markets.
But in most decisions it is unclear how the Commission could even measure the extent of consumer loyalty. Therefore, without an empirical test of the importance of brand or consumer loyalty, this argument introduces a high degree of subjectivity into the assessment of competition. In section 2.5 of this chapter, we develop a proposal for how the relevance of customer loyalty for the delineation of the relevant product market can be assessed on a systematic basis.

2.4.2. The Relevant Geographic Market

The next important step in delineating the relevant market is to take the geographic dimension into account. Factually, the different dimensions are ascertained sequentially: first the product and then the geographic dimension. The EU is no exception here. This is why we now turn to the geographic dimension. The general concept was already described in section 2.1 above. In summary fashion, the geographic market can be described as that area in which “the conditions of competition applying to the product concerned are the same for all traders” (Bellamy/Child 1993, 614). In the current practice of the Commission, the delineation of relevant geographic markets is dominated by four factors:
(1) Regional differences

In delineating the relevant geographic market, the European Commission often cites regional differences as a basis for a narrow delineation. Differences in national procurement, the existence of cross-border import duties, the need to access distribution and marketing infrastructure, and differences in languages are all cited as reasons why competitors from abroad can be disregarded. It can thus be claimed that in these cases, the Commission deemed regional differences so important that it chose to delineate markets as national in scope. The Commission considers differences in market shares to be an important indicator of separate geographic markets (MERCEDES-BENZ/KÄSSBOHRER; VOLVO/SCANIA). Remember that a market is defined as a set of products worth monopolising. The assumption that different market shares indicate different markets thus needs to be connected to this definition of relevant markets. But there does not seem to be a straightforward linkage. Moreover, there is no linkage between the similarity of market shares and substitutability in demand or supply.

(2) Prices

If two regions are in the same relevant market, then the prices charged in one region should affect the prices charged in the other region. However, this is not the same as saying that prices in both regions must be precisely identical. The geographic extent of the relevant market should reflect the similarity of price movements rather than the similarity of price levels. In VOLVO/SCANIA, however, differences in the prices for heavy trucks indicated different geographic markets, according to the Commission. Since prices differed from Member State to Member State, the Commission assumed that the markets for heavy trucks had to be delineated on the Member-State level.

(3) Consumer Preferences

According to the Commission, different consumer habits are another important indicator for the assumption that one is dealing with different geographic markets.
Different consumer habits are interpreted as an important barrier to entry, which could inhibit the interpenetration of markets. These could be based on different tastes, e.g., concerning beer, or different user habits, e.g., concerning feminine hygiene products. Additionally, language and culture differences are taken as an indicator for different geographic markets (RTL/VERONICA/ENDEMOL; NORDIC SATELLITE DISTRIBUTION; BERTELSMANN/KIRCH/PREMIERE).

(4) Transport Costs

High transport costs can be an important cause of little trade between regions. According to the Commission, high transport costs are thus a reason for assuming different geographic markets. Firms are regularly asked about the maximum distance over which distribution of their products seems worthwhile. The answers received serve to delineate the relevant geographic market. They played an important role in a number of cases, e.g., in CROWN CORK & SEAL/METALBOX and SCA/METSÄ TISSUE. But the Commission should recognise that differences in transport costs – transport cost disadvantages, if you will – do not always justify the delineation of different markets. This can be the case if these disadvantages can be compensated by other advantages like economies of scale, which could be the result of higher output.

2.4.3. Assessing Dominance

After having delineated the relevant market, an assessment has to be made whether a merger would significantly impede effective competition and in particular whether it would create or strengthen a dominant position. As already mentioned in the introduction to this chapter, our discussion here will be confined to the way the Commission has been assessing dominance in the past. According to the European Merger Regulation, mergers that create or strengthen a dominant position that prevents effective competition from taking place in the Common Market (or in a substantial part of it) need to be declared incompatible with the Common Market [Art. 2 (3) MR]. The European Merger Regulation does not provide an explicit answer to the question of when a dominant position is created or strengthened. The notion of a dominant position was around long before the Merger Regulation was passed; it is part of Art. 82 TEC. There are a number of decisions by the Court of Justice which the Commission explicitly recognises in its own decisions. According to the Court, a firm is considered to have a dominant position if it has the capacity to develop market strategies independently of its competitors. This implies that the firm has at its disposal some leeway that is not controlled by either its competitors (horizontal aspect) or by its suppliers or consumers (vertical aspect). This means that the Commission has to analyse a number of aspects in order to assess whether a firm will command a dominant position. The Regulation [Art. 2 (1) lit. b MR] explicitly mentions the economic and financial power of the firms participating in the merger, the options available to suppliers and customers, their access to supply and outlet markets, legal and factual barriers to entry, the development of supply and demand as well as the interests of intermediate and final consumers. In assessing horizontal mergers, the Commission has regularly analysed four structural factors (European Commission 1992, 408; Neven/Nuttal/Seabright 1993, 101; Schmidt/Schmidt 1997, 184):

(1) the market position of the merged firm;
(2) the strength of remaining competition;
(3) buyer power;
(4) potential competition.
With regard to the market position of the merged firm, the Commission has been reluctant to publish degrees of concentration that it considers to be critical. It seems nevertheless possible to form various classes of market shares. If the combined market share is below 25%, single dominance can regularly be excluded (recital 15 of the old Merger Regulation, now recital 32). With a market share of up to 39%, it will be assumed only rarely. If market shares are between 40 and 69%, the assessment will have to take the relevance of actual and potential competition explicitly into account. But generally, market shares of 40% and more are interpreted as a strong indication that a dominant position might exist (Jones/González-Diaz 1992, 132). Besides aggregate market shares, the financial strength of the participating firms, their technological know-how, existing capacities, their product range, their distribution networks and long-term relationships with important customers are taken into account. In evaluating market shares, the relevant market phase is explicitly recognised. High market shares in high-technology growth sectors are thus evaluated less critically than the same market shares in markets with slow growth. The strength of remaining competition is evaluated by drawing on the market shares of the remaining competitors, their financial strength, their know-how and production capacities as well as their distribution networks. In case a new market leader emerges as the result of the merger, the difference in market share between the merged firm and the next largest competitors is considered. Additionally, the number of remaining competitors is counted in order to assess the alternatives for being supplied with the relevant products. In ascertaining buyer power, the Commission focuses primarily on the bargaining strength of the other side of the market. The latter is assumed to have considerable bargaining power even at relatively small market shares (5-15%). Potential competition is recognised if there is clear evidence of a high probability that quick and substantial entry by either established competitors or entirely new firms will take place. Potential competition must be perceived as a threat by the merged firm in the sense that it is sufficient to prevent it from acting independently of competitive pressure post-merger.
Moreover, the Merger Regulation requires the Commission to take the development of technological as well as economic progress into account as long as it serves the consumer and does not prevent competition [Art. 2 (1) lit. b MR]. This element has been interpreted as meaning that efficiency aspects should be taken into consideration in evaluating proposed mergers (Noel 1997, 503; Camesasca 1999, 24). But this element is only taken into consideration if the notifying parties claim that their merger serves technological or economic progress. It is the firms who carry the onus of proof. This criterion was checked in AÉROSPATIALE-ALENIA/DE HAVILLAND, GENCOR/LONRHO and NORDIC SATELLITE DISTRIBUTION. In all these cases, the Commission deemed the efficiency gains insufficient. In the history of European Merger Regulation, not a single case has been decided in which explicit mention of positive technological or economic effects was made. Efficiency considerations have thus only played a marginal role in the Commission’s decision-making practice (Schmidt 1998, 250). In assessing dominance, market shares are still the single most important criterion, although the Commission insists that market shares as such are not sufficient for the assumption of a dominant position. This would be the case if the other criteria spelt out above reduced the relevance of market shares. The Commission in its decisions in ALCATEL/TELETTRA, MANNESMANN/HOESCH and MERCEDES-BENZ/KÄSSBOHRER stressed such compensating effects. What is problematic with these offsetting factors is the enormous leeway that the Commission has in applying and interpreting them. This is especially noteworthy with regard to the evaluation of potential competition. In some cases, it was liberally assumed to be relevant (MANNESMANN/VALLOUREC/ILVA, MERCEDES-BENZ/KÄSSBOHRER, SIEMENS/ITALTEL); in others, it was interpreted quite restrictively (RTL/VERONICA/ENDEMOL, ST. GOBAIN/WACKER CHEMIE/NOM, VOLVO/SCANIA and METSO/SVEDALA). This enormous leeway has thus led to inconsistencies in the Commission’s decisions. The revised Merger Regulation (139/2004) has introduced a new substantive criterion in order to ascertain the compatibility of a notified merger with the common market. Before the revision, a merger that created or strengthened a dominant position had to be prohibited. This criterion was extended through the revision: now, mergers that threaten to significantly impede effective competition will not be accepted (the SIC test for short). The Commission has thus moved most of the way towards the Substantial Lessening of Competition (SLC) test prevalent in Anglo-Saxon countries. That the EU uses a different acronym for the new test seems to be an issue of semantics rather than of substance. The introduction of the SIC test has substantially increased the Commission’s power to prohibit notified mergers. It is the purpose of the new criterion to include mergers that do create additional market power but that do not quite reach the threshold of a dominant position. In its Guidelines, the Commission names a number of factors whose presence would lead it to expect that competition would be substantially impeded. The Commission proposes to distinguish between coordinated and non-coordinated effects. Following the enactment of the Guidelines on horizontal mergers, market shares have gained additional ground as an important criterion for ascertaining whether a proposed merger is compatible with the common market. This implies a further strengthening of the very traditional Structure-Conduct-Performance paradigm.
The introduction of the Herfindahl-Hirschman Index (HHI) attests to that. According to the Commission, a merger is supposed to raise serious doubts if it leads to an aggregate HHI value above 2000 and if it leads to an increase in the value of the indicator of at least 150 points. Although these threshold values are above those used in US merger control (1800, 100), they might well lead to a tougher merger policy. Suppose two companies with market shares of 8 and 10 per cent want to merge and there are four other companies that each hold 20.5 per cent of the market. In that case, the post-merger HHI would amount to 2005 and its increase would be 160. The Commission would thus raise serious doubts about the proposed merger being in accordance with the Merger Regulation. Notice that recital 32 of the MR declares that mergers creating a combined market share of below 25% are regularly not suspected to be problematic. In the example, a new firm with a combined market share of only 18% would make the Commission raise serious doubts. This means that the Regulation and the Guidelines are partially contradictory. What are the possible effects of switching to SIC on predictability? It seems to make sense to distinguish two cost categories that can accrue as a consequence of the switch: (1) costs that arise due to reduced predictability during the transition to the new test and (2) costs that arise because predictability is lower under the new test even after the transition uncertainty has vanished.
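The HHI arithmetic in the worked example earlier in this subsection can be replicated in a few lines. The shares are the hypothetical ones from the example; reading the 2000/150 thresholds as strict and weak inequalities, respectively, is our own simplifying assumption, not the Commission’s wording.

```python
# HHI screen sketched from the worked example above: firms with 8% and
# 10% merge; four rivals hold 20.5% each. Thresholds 2000 / 150 are
# those named in the text; the inequality direction is our assumption.

def hhi(shares):
    """Herfindahl-Hirschman Index: sum of squared market shares (in %)."""
    return sum(s ** 2 for s in shares)

pre_merger = [8, 10, 20.5, 20.5, 20.5, 20.5]
post_merger = [18, 20.5, 20.5, 20.5, 20.5]   # merged firm holds 8 + 10 = 18%

pre, post = hhi(pre_merger), hhi(post_merger)
delta = post - pre

print(pre, post, delta)     # 1845.0 2005.0 160.0
serious_doubts = post > 2000 and delta >= 150
print(serious_doubts)       # True
```

Note that the merged firm’s 18% share is below the 25% safe-harbour of recital 32, which is exactly the tension between the Regulation and the Guidelines pointed out above.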
It is certain that transition costs will accrue. These arise because the Commission has leeway in interpreting the new Regulation. This also holds for the Court of First Instance and the European Court of Justice. Ex ante, it is unclear how much precedent will be worth under the new Regulation. Firms willing to merge will therefore have problems predicting the likely actions of the European organs. If the costs of switching to another Regulation are higher than the discounted value of the improvements brought about by the new Regulation, then it would, of course, be rational not to switch criteria.18 It is argued in this section that the predictability of European merger policy might suffer not only due to transition costs but also due to a lower level of predictability even after transition. Many people believe that European merger policy will become more restrictive after the switch to the SIC test, whereas the Commission insists that nothing substantial will change and the switch only serves to increase legal certainty. The competing expectations concerning the future of European merger policy are part of the transition costs just mentioned. We believe that the new Regulation does indeed contain the possibility of establishing a more restrictive merger policy:

– Non-coordinated effects can now be taken into account even if they do not create or strengthen a dominant position. One could even argue that this amounts to an introduction of oligopoly control.
– Non-coordinated effects will apply to all kinds of oligopolies. The Regulation does not constrain its application to “highly concentrated” oligopolies or the like.
– Numerals 61 to 63 of the Merger Guidelines are entitled “Mergers creating or strengthening buyer power in upstream markets”. This might lead to an increase in the number of cases in which vertical or conglomerate issues are named as a cause for concern.
– There has even been discussion of whether the change of the substantive test will lead to a lowering of the threshold with regard to the SSNIP value used.
What is relevant with regard to predictability is not that merger policy will necessarily become more restrictive but simply that such a possibility exists and that it is unclear what will, in fact, happen.

2.5. Proposals Towards Enhancing Predictability

In this section of chapter IV, we have so far dealt with the standard approach used to delineate relevant markets and to assess dominance, we have pointed to some possible consequences of theoretical developments as well as some consequences following from business trends, and we have described and critically evaluated the current practice of the European Commission with regard to these issues. We now set out to make some proposals for how the current practice of the European Commission could be improved. These are grouped into three sections: simple steps to improve predictability, improvements based on theoretical developments, and improvements taking the changes in the business environment explicitly into account.

2.5.1. Simple Tools

Delineate the Relevant Market Taking Both Demand and Supply Side Into Account

The above analysis has shown that in order to delineate the relevant product market, the Commission relies heavily on the demand side. This has often led to too narrow a definition of the relevant market. The consequence of this narrow approach – which is also documented in the Commission’s Notice on the definition of the relevant market – is that the application of the SSNIP test remains incomplete: if prices were raised by 5 or 10 per cent, new competitors might be induced to enter the market and the price increase would thus turn out to be unprofitable. The current practice is thus incoherent and should be modified. Some of the business trends described in Chapter III clearly pointed to the increased relevance of supply-side substitutability. For example, many steps in the value chain of a firm have been outsourced in recent years. Often, this has been achieved via management buy-outs or spin-offs. The newly independent suppliers are frequently no longer confined to working exclusively for their former “parent company”, but operate as independent suppliers in the market. Their products can thus be bought by anybody. This possibility of outsourcing many steps of the value chain means that it has become easier for entirely new firms to enter a market, as they do not have to make heavy investments. Supply-side substitutability has therefore increased in relevance and should be taken into account regularly by the Commission in its delineation of the relevant market. Take the case of VARTA/BOSCH, already briefly alluded to above.
In this case, the Commission identified two different product markets, namely one for starter batteries on the original equipment market and one for starter batteries on the replacement market. With regard to the goods’ characteristics, there is no difference between these batteries. Nevertheless, the Commission identified two different groups demanding batteries: car producers on the one hand, and car maintenance and repair dealers on the other. But the Commission did not check whether the suppliers of replacement batteries were capable of providing batteries for original car equipment. If this had been the case, it would have constrained the merged entity VARTA/BOSCH considerably. This is an example in which the Commission did not test sufficiently for supply-side substitutability. It is proposed here that supply-side substitutability should be taken into consideration if a small but non-temporary price increase would enable producers to modify their supply and to offer it on the market within a reasonable time span. This was the case in VARTA/BOSCH, and supply-side substitutability should, therefore, have been taken into account.
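The SSNIP logic invoked in this subsection can be made concrete with a critical-loss calculation, one standard textbook formalisation of the hypothetical-monopolist test (not the Commission’s own procedure). The 5% price rise is the SSNIP value from the text; the 40% gross margin and the 20% estimated sales diversion are hypothetical numbers chosen for illustration.

```python
# Minimal critical-loss sketch of the SSNIP test. With a price rise of
# x (as a fraction of the price) and a gross margin m, a hypothetical
# monopolist breaks even when it loses the "critical" fraction of
# sales x / (x + m). Margin and diversion figures below are invented.

def critical_loss(x, m):
    """Break-even fraction of sales lost for price rise x at margin m."""
    return x / (x + m)

x = 0.05    # 5% SSNIP price rise (from the text)
m = 0.40    # assumed gross margin (hypothetical)

cl = critical_loss(x, m)
print(round(cl, 3))   # break-even at roughly 11% of sales lost

# If demand- AND supply-side substitution together would divert more
# than this fraction of sales, the price rise is unprofitable and the
# candidate market must be broadened.
actual_loss = 0.20    # hypothetical estimated diversion
print(actual_loss > cl)
```

The point of the sketch is exactly the one made above for VARTA/BOSCH: diversion coming from supply-side substitution belongs in `actual_loss` just as much as demand-side switching does.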
Reliance on Quantitative Methods to Delineate the Relevant Market

This proposal essentially remains within the Harvard paradigm, i.e., the overwhelming importance attributed to the structure of markets is in no way questioned. The delineation of the relevant market is often the single most important step in merger analysis: if it is delineated broadly, the chances that a proposed merger will be approved are high, and vice versa. The quest for “objective” procedures that allow the firms participating in a merger to predict the way the Commission will delineate the relevant market can therefore be easily understood. Quantitative methods are believed by some to be more objective than the currently used procedures. We thus set out to describe some quantitative tools before evaluating them critically. In determining the geographically relevant market, the Commission has primarily relied on three criteria: (1) transport costs, (2) differences in prices, (3) differences in market shares caused by regional differences or differences in customer behaviour. At times, the last two criteria appear to be questionable: there can very well be single geographic markets with different prices and different market shares of the competitors in different parts of that market. What is important, however, is that prices are interdependent. Interdependency can be checked for using quantitative techniques. A variety of quantitative methods are discussed in the literature (Bishop/Walker 1999, 167ff.). Two of them appear especially noteworthy, namely price-correlation analyses and shock analyses. Price-correlation analysis looks at price changes of various products over a period of time. In case one observes high correlations between two or more products, they are grouped into the same market. In NESTLÉ/PERRIER, the correlation between still and sparkling mineral waters was high whereas the correlation between mineral waters and other soft drinks (such as Coke) was low.
It was concluded that still and sparkling waters belong to the same product market whereas mineral waters and other soft drinks do not. As is well known, there are many spurious correlations. A high correlation coefficient as such is thus not sufficient for grouping two or more products into the same market. Other potentially important variables such as the business cycle, inflation, seasonality, etc., need to be controlled for. In order to group two or more products into one single market, a value judgment is necessary, namely above what level of the correlation coefficient one would group the products into a single market. Correlation coefficients can take on any value between –1 and +1, +1 indicating a perfect positive relationship and –1 a perfectly inverse relationship. A correlation of 0 means that the two time series are not correlated in a statistically meaningful way. It has been proposed not to use a fixed number (say .75) beyond which a single market should be assumed but to draw a different line depending on the individual case. The line could be established by taking the (average of the) correlation coefficients of products that everybody agrees belong to one single market as a benchmark. If the correlation coefficients of the products whose grouping is questionable were sufficiently close to this benchmark, they would also be counted as being part of the same product market. Of course, there might be quarrels as to (1) agreeing on the definition of the benchmark products, and (2) agreeing what “sufficiently close” means in an individual case.
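The benchmark procedure just described can be sketched in a few lines. All price series below are simulated stand-ins (inspired by the NESTLÉ/PERRIER constellation, not actual case data), and the 0.9 closeness factor is an arbitrary illustration of the “sufficiently close” judgment call; the text’s caveat about spurious correlation and missing controls applies with full force.

```python
# Price-correlation benchmark sketch: a candidate pair is grouped into
# the same market if its correlation is "sufficiently close" to that
# of a benchmark pair everybody agrees belongs to one market.
# All series are simulated; the 0.9 factor is an arbitrary choice.

import numpy as np

rng = np.random.default_rng(0)
common = np.cumsum(rng.normal(size=60))          # shared cost/demand driver

still_water = common + rng.normal(scale=0.3, size=60)
sparkling = common + rng.normal(scale=0.3, size=60)
cola = np.cumsum(rng.normal(size=60))            # independent price series

def corr(a, b):
    return float(np.corrcoef(a, b)[0, 1])

benchmark = corr(still_water, sparkling)   # agreed single-market pair
candidate = corr(still_water, cola)        # questionable pairing

same_market = candidate >= 0.9 * benchmark
print(round(benchmark, 2), round(candidate, 2), same_market)
```

In a real application one would of course first partial out business-cycle, inflation and seasonality effects, as stressed above, before comparing coefficients.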
CHAPTER IV
Application of price-correlation analysis is not restricted to determining the relevant product market but can also be extended to ascertaining the relevant geographic market: if price levels move together in various regions or states, they would be grouped into a single market. Price-correlation analysis is, however, not without problems. Its results crucially depend on the level of aggregation chosen. The higher the level of aggregation, the more average values enter the analysis. Drawing on average values will, however, blur the results. Carrying out price-correlation analyses, one should therefore choose a rather low level of aggregation. Another problem is that a low correlation of prices can, e.g., be caused by time lags that occur only with regard to one product due to production-specific reasons. Inversely, a high correlation can be caused by common influences that occur although the products are on two different markets. Energy prices in energy-intensive production processes could be an example. In order to control for this problem, price-correlation analysis should therefore be combined with so-called “unit root tests”. The underlying idea is very simple: one checks whether the time series under consideration converges to a constant, at least in the long run. Following a shock, one would observe deviations from the long-term trend, but these would only have a transitory character and the time series would return to its long-term path.

Figure 8: Price ratio between goods 1 and 2 (relative price on the vertical axis, time on the horizontal axis)
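The mean-reversion idea behind such a test can be sketched as follows. This is only the intuition, not a proper unit-root test: a real application would use an Augmented Dickey-Fuller test with the appropriate critical values. The price-ratio series below is hypothetical.

```python
# Estimate the adjustment coefficient b in  Δy_t = a + b * y_{t-1} + e_t.
# b < 0 suggests mean reversion (deviations from the long-run level die
# out); b near 0 suggests a unit root (shocks are permanent).
def adjustment_coefficient(y):
    x = y[:-1]                                      # y_{t-1}
    d = [y[t] - y[t - 1] for t in range(1, len(y))] # Δy_t
    mx = sum(x) / len(x)
    md = sum(d) / len(d)
    num = sum((xi - mx) * (di - md) for xi, di in zip(x, d))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

# Hypothetical relative price of good 1 vs good 2: a shock in period 2
# dies out and the ratio returns to its long-run level of about 1.0.
ratio = [1.00, 1.15, 1.08, 1.03, 0.96, 1.02, 0.99, 1.01, 1.00, 1.00]
b = adjustment_coefficient(ratio)
mean_reverting = b < 0
```

A clearly negative coefficient here mirrors the situation of Figure 8, where the two goods would be grouped into the same market.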
In the long run, the price of good 1 is constant relative to the price of good 2. Deviations from the long-run trend are only temporary. In such a case, both products should be grouped into the same market. Shock analysis looks at price changes subsequent to exogenous shocks that were unpredictable for the market participants. Examples of such shocks are political crises, wars, currency crises, etc. Shocks can change the business environment for market participants. Reactions to them can reveal information about how firms themselves perceive the relevant markets. Take an unanticipated change in exchange rates. This can lead to price disparities between currency areas. If the relevant geographic market extends beyond currency areas,
POSSIBLE CONSEQUENCES OF TRENDS IN THEORY
this should be reflected in modified trade flows and a tendency of prices to converge. Entry of a new product sometimes comes as a “shock” to competitors. Their reaction to new products can help ascertain whether they perceive themselves to be acting in the same market. This could be reflected in price reductions, but also in increased spending on advertising. It would, however, not make sense to demand that the Commission use shock analysis on a regular basis, for the simple reason that a shock might not always be readily available. As the European economies have moved ever closer together, dramatic differences have become less likely. The most obvious example of this is the introduction of the euro, which makes currency shocks within Euroland virtually impossible. A problem with the application of shock analysis is that all analysts have to agree that a certain event was really unpredictable and did thus constitute a genuine shock. The so-called Elzinga-Hogarty Test can be used as a tool to further clarify the correct delineation of the relevant geographic market. It is an example of a trade pattern test. It is based on the presumption that if there are either substantial exports from a region or substantial imports into another region, the regions are closely intertwined and can be grouped together as forming a single geographic market. It should be stressed from the outset that turning the test around could mean committing a fallacy of reversed causation: from the absence of any interregional trade, one cannot convincingly conclude that one is dealing with two (or more) separate geographical markets. The Elzinga-Hogarty procedure establishes two criteria, namely LIFO (little in from outside) – a low import share – and LOFI (little out from inside) – a low export share. If either of the criteria is not fulfilled, there is the presumption that one is dealing with a single geographical market.
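Mechanically, the two criteria amount to comparing an import share and an export share against a cut-off. The sketch below uses hypothetical shipment figures; the ten-per-cent cut-off is a common illustrative choice and is itself the value judgment discussed in the text.

```python
# Illustrative sketch of the Elzinga-Hogarty criteria.
THRESHOLD = 0.10  # hypothetical cut-off for a "low" trade share

def separate_market(imports, exports, consumption, production,
                    threshold=THRESHOLD):
    """True if the region passes both LIFO and LOFI, i.e. trade with the
    outside is so small that the region forms its own geographic market."""
    lifo_ok = imports / consumption < threshold  # little in from outside
    lofi_ok = exports / production < threshold   # little out from inside
    return lifo_ok and lofi_ok

# Hypothetical region: 1000 units consumed (50 of them imported),
# 980 units produced (30 of them exported) -> both shares are "low".
own_market = separate_market(imports=50, exports=30,
                             consumption=1000, production=980)
```

If either share were substantial, the function would return `False`, corresponding to the presumption that the region belongs to a broader single geographical market.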
In applying the test, one needs, again, to make a value judgment: here, it needs to be decided beyond which level imports or exports can be called “significant”. At times, the use of quantitative methods can be of help in delineating the relevant market in its various dimensions. Their use by the firms willing to merge should thus be encouraged by the Commission. On the other hand, all available quantitative methods display some shortcomings (which are dealt with quite extensively by Froeb and Werden 1992). They should thus only be one instrument for the delineation of the relevant market.

Reliance on Quantitative Methods to Assess Dominance

The current practice assesses concentration by drawing on concentration ratios (particularly CR1, CR4, and CR8; see Schmidt 2001, 137). These indicators only comprise information concerning market structure before and after a notified merger. As such, they do not enable the Commission to make any predictions concerning the intensity of competition or the use of competitive parameters such as prices. Assessing dominance solely on the basis of these indicators is therefore of little help.
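These structural indicators can be computed mechanically from market shares. A minimal sketch with hypothetical shares (in per cent, summing to 100); the Herfindahl-Hirschman Index used in Figure 9 is included for comparison:

```python
# Concentration ratios and HHI from a list of market shares (per cent).
def cr(shares, k):
    """Concentration ratio CR_k: combined share of the k largest firms."""
    return sum(sorted(shares, reverse=True)[:k])

def hhi(shares):
    """Herfindahl-Hirschman Index: sum of squared market shares."""
    return sum(s * s for s in shares)

shares = [30, 25, 15, 10, 10, 5, 5]  # hypothetical market
cr1, cr4 = cr(shares, 1), cr(shares, 4)
concentration = hhi(shares)
```

The numbers themselves say nothing about conduct: two markets with identical CR4 values can show very different intensities of competition, which is precisely the limitation noted above.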
Figure 9: A hypothetical example of price-concentration analysis (price on the vertical axis, HHI on the horizontal axis)
Drawing on price-concentration analysis could enhance predictive power. The assumption underlying the entire structure-conduct-performance approach is that high concentration is a good proxy for market power. High degrees of concentration are thus expected to lead to higher prices. Price-concentration analysis compares different geographical markets in which different degrees of concentration are observed. It is asked whether higher degrees of concentration do indeed tend to lead to higher prices. Figure 9 is a hypothetical example. The horizontal axis serves to denote the degree of concentration found on a specific geographic market; the vertical axis contains the price found there. Every icon represents one country. Simple econometric analysis reveals that – in the example – higher concentration rather tends to lead to lower prices. If that is observed, a merger in one of these markets cannot be expected to lead to higher prices. It should thus be passed.

Some Critical Remarks Concerning Quantitative Techniques

Quantitative techniques are appealing. They seem to indicate exactness, which is conducive to predictability. Yet, the precision that can be gained by drawing on quantitative techniques should not be overestimated. One still needs to decide what to count – and what not. But simply counting certain entities will not do the trick: one needs criteria by which to judge the numbers that one has come up with. Value judgments concerning the definition of threshold levels and the like will thus remain necessary. Still, one could argue that value judgments are necessary in any event and that, having made them, quantitative techniques can increase the objectivity of the assessments made by the Commission.
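As a concrete illustration of such a technique, the price-concentration analysis of Figure 9 boils down to regressing the price level observed in each geographic market on its concentration and reading off the sign of the slope. The country-level data points below are hypothetical and constructed to reproduce the negative relationship of the figure.

```python
# Minimal OLS slope estimate for a price-concentration regression.
def ols_slope(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

# Hypothetical country-level observations (one point per country).
hhi_by_country   = [1200, 1500, 1800, 2100, 2400]
price_by_country = [10.4, 10.1, 10.0, 9.8, 9.7]

slope = ols_slope(hhi_by_country, price_by_country)
# A negative slope mirrors the hypothetical finding in the text: higher
# concentration does not translate into higher prices in these markets.
```

Even here a value judgment remains: one must decide how strongly negative (or insignificant) the estimated slope must be before concluding that concentration is harmless in the markets at hand.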
The Commission operates under tight resource constraints. Demands that it should regularly carry out a certain number of tests thus seem to have little chance of implementation. It is established practice that notifying parties offer one or more quantitative techniques in order to make their case. Predictability could be improved by publishing a list of quantitative tools that the Commission will consider. But the Commission will need to keep the competence of deciding whether the tools have been adequately used. Such a published list should not be interpreted as being exhaustive, though, as that would inhibit the use of innovative techniques unknown at present.

Assessing the Importance of Customer Loyalty for Delineating the Relevant Market More Systematically

The existence of brand labels and customer loyalty has been a crucial factor in the definition of product markets and the ascertainment of barriers to entry in the decision practice of the European Commission. Given that brand labels and no-name labels are functionally equivalent, the central question is whether these products are substitutable or not. In the case of SCA/METSÄ TISSUE, the Commission denied the existence of substitutability between brand and no-name labels and hence defined two different product markets exactly along these borders. But this has certainly not been the only case in which the Commission believed that the existence of a brand label constituted a serious barrier to entry. Quite frequently, the European Commission decides to delineate separate product markets if customer loyalty with regard to certain products exists. This often leads to too narrow a delineation of markets. The way customer loyalty is taken into account often appears unsystematic. Based on sound economic reasoning, one would ask in what situations customer loyalty leads to significant gaps in substitution given that products are functionally equivalent.
If gaps in substitution are substantial, then the delineation of separate product markets seems reasonable. We propose to take customer loyalty into account only if a transaction cost rationale for it can be named. We thus propose to distinguish between rationally explainable and rationally unexplainable customer loyalty and argue that only the first kind should play any role in merger policy. The emergence of consumer loyalty can be explained by the presence of asymmetric information between suppliers and buyers. In general, one can expect uncertainty to be present on the side of consumers with regard to the quality and other characteristics of the goods offered by the suppliers. The supplier can be assumed to be informed about the characteristics of the goods offered ex ante; the buyer, on the other hand, will often only be able to ascertain the quality after having consumed the good, i.e., ex post. It has been shown that such information asymmetries can lead to the breakdown of markets (Akerlof 1970). One consequence of uncertainty with regard to quality is higher transaction costs that are caused by additional information-gathering activities. In such a situation, consumer loyalty can be interpreted as a mechanism to reduce uncertainty and the transaction costs accruing as a consequence of uncertainty. As soon as a consumer has found a good that conforms to his/her preferences, s/he will keep consuming that good even if other suppliers
offer equally good products. Consumer loyalty can thus be conceptualised as a device to save on transaction costs. With regard to information asymmetry, the economic literature distinguishes between three kinds of goods, namely search, experience, and trust goods (Nelson 1970). With search goods, consumers can acquire information on the relevant characteristics of the good before buying it. After having bought it, the information can be verified. With experience goods, reliable information on the characteristics is not obtainable before buying the good. The characteristics of the good have to be “experienced” after having bought it. With trust goods, the relevant characteristics are ascertainable neither before nor after buying the good. In merger policy, the good’s characteristics with regard to information asymmetries should thus be taken into account. A rather narrow delineation based on substitution gaps can only be justified if one deals with long-lived experience goods or trust goods. In ascertaining the relevant information asymmetries, the focus should be on the relevance of screening activities on the part of the consumers and signalling activities on the part of the supplying firms. The harder screening is for consumers, the more relevant signalling activities will be. Customer loyalty is a ubiquitous phenomenon. Based on the approach proposed here, it could be taken into account in a more systematic fashion than has hitherto been the case. It is thus proposed that customer loyalty only be used to delineate different product markets if one is dealing with durable experience goods or trust goods. For search goods and non-durable experience goods, reliance on customer loyalty will lead to rather spurious delineations.
A representative of the traditional welfare economic approach might object that the creation of brand labels can enable firms to set prices above marginal costs and that this would mean losses in allocative efficiency and the transformation of consumer into producer rents. But this objection overlooks that the utility of some consumers can be increased if they are offered differentiated products that better cater to their preferences than standardised ones. For some consumers, it is even utility-enhancing if they can buy brand labels for a higher price and can thus differentiate themselves from other consumers (Veblen effect). It can hence be asked whether it should be the task of competition policy to protect consumers from this kind of market power, as doing so would prevent some consumers from reaping additional utility. This argument should be considered in the light of the obvious resource constraints present in competition authorities. It is thus argued that competition policy should rather focus on those cases that could mean a monopolisation of entire markets. In defining relevant product markets, recognition of consumer loyalty should thus play a marginal role at best. It should also be recognised that the creation of brand labels is a genuine entrepreneurial task, which necessitates substantial investment and is connected with important risks. A producer will only be able to create customer loyalty if he offers a product with convincing quality characteristics or other traits that cater to the preferences of the consumer. From an economic point of view, the ensuing increase of his price-setting range also has advantages: it entails incentives to offer products that best meet consumer preferences. As long as functional substitutability is guaranteed, firms successful in creating a brand and customer loyalty should not be punished for successful entrepreneurial activity. The relevance of consumer loyalty should not be overestimated either. It is true that a newcomer would have to invest in the creation of a new brand. But it is also true that an incumbent has to invest in conserving a brand label and reputation. Although the creation of new brands often involves the necessity of substantial investment, there are exceptions: the firm producing the energy drink Red Bull, e.g., successfully established its product against brands like Coca-Cola and Pepsi with very little advertising expenditure. This case shows that consumer loyalty does not necessarily constitute a substantial barrier to entry.

2.5.2. Improvements Due to Theoretical Developments

Take Efficiencies Explicitly into Consideration

As spelt out above, mergers may increase market power but still be welfare-increasing if they enable the merging companies to realise efficiency gains. In U.S. merger policy, the efficiency defence has been around for a while: efficiencies are explicitly mentioned in the U.S. Merger Guidelines of 1992, but they had already been applied based on a publication of the Federal Trade Commission from 1984. The 1997 revision of the Guidelines meant a more restrictive application of efficiencies. The Guidelines now demand that the proof of cost advantages be clear and unequivocal; they specify more precisely what is meant by efficiencies. Some other jurisdictions have followed suit and have also incorporated an efficiency defence into their merger law. It can now be found in the U.K., but also in Australia, Canada, and New Zealand. The European Guidelines on horizontal mergers that went into effect in May 2004 also contain an efficiency defence.
Factually, efficiency considerations had, however, already played some role even before (see, e.g., NORDIC SATELLITE, AÉROSPATIALE/DE HAVILLAND, and HOLLAND MEDIA GROEP).

Efficiency Defence in the US

Drawing on the various versions of the US Merger Guidelines issued between 1968 and 1997, it is possible to trace the development of the role that efficiencies have played in US merger control policy. The Guidelines issued in 1968 (U.S. Department of Justice 1968) are very restrictive concerning the possibility of taking efficiency effects explicitly into account. Cost advantages as a reason to justify a merger were basically not accepted by the Department of Justice, except if exceptional circumstances were present. The realization of product and process innovations was the prime candidate for such exceptional circumstances. In its decisions, the Department of Justice consistently denied the presence of such exceptional circumstances. The main reasons offered by the Department of Justice for its restrictive stance on accepting efficiencies as a justification for mergers were that cost savings could also be realized through
internal firm growth and that efficiency claims were notoriously difficult to verify. This critical stance was retained in the 1982 version of the Guidelines (U.S. Department of Justice 1982). They reiterated that efficiencies could only play a role in extraordinary cases. However, these extraordinary cases did not play a significant role in merger policy. In 1984, the Department of Justice explicitly introduced efficiencies into the Merger Guidelines (U.S. Department of Justice 1984). According to them, significant efficiencies could play a role if there was “clear and convincing evidence” in their favour and it was impossible to realize them through alternative means. This was thus the first time that efficiencies appeared as a criterion of their own for justifying mergers. But simply pointing at expected efficiencies was never sufficient for getting a merger passed. They could only play a role in conjunction with other reasons. The 1992 revision of the Merger Guidelines did not lead to substantial changes with regard to efficiencies (U.S. Department of Justice/FTC 1992). But the precondition of “clear and convincing evidence” was cancelled. This was not, however, accompanied by a reversal of the burden of proof, which thus still lies with the merging firms. All in all, this made it easier to draw on the efficiency defence (Stockum 1993). Finally, the 1997 version of the Guidelines made the rather general formulations with regard to efficiencies more concrete (U.S. Department of Justice/FTC 1997). The revision of the Merger Guidelines of April 1997 served the purpose of making the recognition of efficiencies more concrete. Clearly specified criteria are to make it easier for firms to advance efficiency arguments and easier for courts to decide cases in which efficiencies play a role (Kinne 2000, 146). The Merger Guidelines (1997, 31) demand that “...
merging firms must substantiate efficiency claims so that the Agency can verify by reasonable means the likelihood and magnitude of each asserted efficiency, how and when each would be achieved (and any costs of doing so), how each would enhance the merged firm’s ability and incentive to compete, and why each would be merger-specific.” And further: “Efficiency claims will not be considered if they are vague or speculative or otherwise cannot be verified by reasonable means.” Relevant from a competition policy point of view are the so-called “cognisable efficiencies.” These are savings in marginal costs which can be realised in the short run. Cost savings in the areas of research and development, procurement, or management are classified as not being sufficiently verifiable and in that sense not “cognisable.” Efficiency aspects are estimated by following a three-step procedure:

(1) Determination of merger-specific efficiencies. Only cost savings caused by the merger itself are recognised. Efficiency advantages caused by increased market power must not be recognised.

(2) Analysis of the relevance of merger-specific efficiencies. The merging firms must document
– when, how, to what degree, and at what cost efficiency gains will be realised,
– why the claimed efficiencies can only be realised by a merger, and
– how these efficiencies will affect the competitiveness of the merging firms.

If the firms’ documentation is to the satisfaction of the competition authorities, the efficiencies will be evaluated as “cognisable.”

(3) Evaluation. After the size of the expected efficiencies has been estimated, they will be compared with the disadvantages that the consumers will most likely have to incur. The higher the expected disadvantages (measured by post-merger HHI, possible unilateral effects, and the relevance of potential competition), the more “... extraordinarily great cognisable efficiencies would be necessary to prevent the merger from being anticompetitive” (Merger Guidelines, 32). The creation of a monopoly as the consequence of a merger can never be compensated for by efficiency arguments.

The 1997 version of the Guidelines clarified under what conditions efficiencies could be taken into account in notified mergers. Compared to the vague formulations of the 1992 Guidelines, this meant a substantial improvement. It is thus encouraging that the European Commission used the 1997 version of the U.S. Guidelines as a model rather than the earlier versions. An increase in the predictability of European merger policy is possible if the basis on which efficiency considerations are taken into account is spelt out explicitly. Firms can then form expectations on whether the efficiencies that they expect to realise as a consequence of a merger will be taken into account by the Commission or not. But given that predictions concerning the size of realisable efficiencies rest on shaky grounds, there will be quarrels as to realistic levels. This will limit the gains in predictability. The inclusion of efficiencies in the recently published Guidelines is thus to be welcomed.
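The structural side of evaluation step (3) can be illustrated with a small sketch. The market shares below are hypothetical; the only substantive fact used is the standard identity that merging two firms with shares s1 and s2 raises the HHI by 2·s1·s2 (shares in per cent). How large cognisable efficiencies would have to be to offset a given increase remains the policy judgment discussed above and is not modelled here.

```python
# Pre- and post-merger HHI for a hypothetical market.
def hhi(shares):
    """Herfindahl-Hirschman Index: sum of squared market shares."""
    return sum(s * s for s in shares)

def post_merger_hhi(shares, i, j):
    """HHI after merging firms i and j of the pre-merger share list."""
    merged = [s for k, s in enumerate(shares) if k not in (i, j)]
    merged.append(shares[i] + shares[j])
    return hhi(merged)

shares = [30, 25, 15, 10, 10, 5, 5]   # hypothetical pre-merger market
pre = hhi(shares)
post = post_merger_hhi(shares, 1, 2)  # firms with 25% and 15% merge
delta = post - pre                    # equals 2 * 25 * 15 = 750
```

The larger this increase (together with unilateral effects and the weakness of potential competition), the greater the cognisable efficiencies that would have to be substantiated for the merger to pass.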
There, the Commission demands that the efficiencies “benefit consumers, be merger-specific, and be verifiable.” The Commission loosely follows the US Merger Guidelines here. Unfortunately, the Guidelines contain a number of indeterminate legal terms, which need to be made more concrete. The terms just mentioned are described, but concrete conditions that need to be met for efficiencies to be taken into account are not named. The US Merger Guidelines are more concrete here, and predictability could be further increased if the Commission were to follow the US Guidelines in this regard too. The main issue to be decided for the inclusion of efficiencies is the set of criteria that need to be fulfilled. In the US, there are three criteria, namely (1) that gains in efficiency need to be passed on to consumers in the form of lower prices or increased quality, (2) that efficiencies must be merger-specific, and (3) that they must be verifiable. As just pointed out, the Commission is to use similar criteria in Europe. In order to ascertain possible consequences, these three criteria will be dealt with in a little more detail.
The probability that efficiencies will be passed on to consumers appears to be higher if they originate in lower marginal costs rather than in lower fixed costs. Expected cost savings in, e.g., administrative overhead do not have an impact on the marginal costs of a company. Reductions in its fixed costs will, however, only be passed on to consumers if the degree of competition on a given market is sufficiently high. But the reason for drawing on efficiencies as a justification for letting a merger pass is exactly that the creation of a market-dominant position is suspected. It thus makes sense to weigh reductions in marginal costs more heavily than reductions in fixed costs. Reductions in marginal costs are to be expected if the merged entity is expected to be able to realise efficiencies on the supply side as well as during its production process, e.g., via economies of scale or scope. The more important the input factors on which the merged entity can save, the more important the efficiencies can be expected to be. This is reflected in the Guidelines, which attribute more weight to savings in marginal costs but do not exclude that savings in fixed costs might also lead to efficiencies that benefit the consumer. Additionally, efficiencies that can be realised in R & D are also highly welcome. Improved R & D will not only increase the efficiency of the undertakings concerned, but will put other companies under pressure to improve their own R & D. Overall welfare gains are thus to be expected. The criterion that efficiencies need to be merger-specific is more problematic than estimating whether efficiencies will be passed on to consumers. The basic notion is, of course, quite straightforward: efficiencies are only to be recognised as an offsetting argument if the merger is the only way to realise them.
If there are other ways, such as internal growth, joint ventures, licensing agreements, or other forms of cooperation between the firms, efficiency gains are not to be used as an argument offsetting the creation of a market position which gives rise to serious concerns. From an economic point of view, the notion is not as straightforward as it first might seem. In order to make the point, we draw on transaction cost economics, whose representatives are used to thinking in terms of alternative institutional arrangements. Suppose that the management of an undertaking has incentives to choose the institutional arrangement that promises to maximise profits. It is then unclear why management should opt for a merger if an institutional alternative such as a joint venture or a licensing agreement could do the trick. A merger regularly involves high organisation costs. These costs will only be incurred if institutional alternatives are expected to be less beneficial. Moreover, it remains unclear why a joint venture is to be preferred over a merger. Joint ventures have often been the source of cartel-like arrangements. Merger-specificity thus appears to be a problematic criterion that should play no role in merger policy. Critics might object that the criterion of merger-specificity primarily serves to prevent the genesis of allocative inefficiencies that are due to market power resulting from a merger. But the efficiency defence will only trump a prohibition if the merger is expected to make the consumer better off, i.e., if it increases consumer surplus. And if post-merger prices need to be lower than pre-merger prices, the probability of allocative inefficiencies appears to be negligible. If, on the other hand, the
criterion of merger-specificity is applied rigidly, this could mean that mergers will be prohibited – or not even proposed – although they would increase overall welfare. In order to protect consumers against profits due to market power, checking for the change in consumer surplus is sufficient.

Empirical Evidence on Efficiencies

The effects of mergers on the efficiency of merged firms have been subject to extensive empirical analysis. The main results can be summarized as follows (Gugler et al. 2003, Mueller 1997, Tichy 2001): Mergers often lead to higher profit levels while turnover decreases. This would mean that mergers increase market power but not efficiency. According to the available empirical studies, these effects appear independent of the sector or the country in which the merger occurs. The reduction in turnover is more pronounced in conglomerate mergers than in horizontal mergers (Caves/Barton 1990). On the other hand, and in contrast to the results just reported, one half of all mergers lead to a reduction in both profits and turnover, which would mean that mergers lead to reduced efficiency (Gugler et al. 2003). If profits increase, these increases can be explained by increased market power as well as by increased efficiency. Market power effects are correlated with large mergers, efficiency effects with small mergers (Kleinert/Klodt 2001). Concerning the relationship between mergers and technical progress, some studies did not find any significant correlation between the two (Healy/Palepu/Ruback 1992). Some studies have found a negative correlation (Geroski 1990, Blundell/Griffith/Van Reenen 1995), while others found a weakly positive correlation between mergers and technical progress (Hamill/Castledine 1986). The latter authors pointed at the possible relevance of the acquisition of patents as a consequence of mergers. The empirical results are thus not unequivocal, and it is difficult to draw policy implications.
The few cases in which positive efficiencies as a consequence of mergers appear to be unequivocal seem to have occurred in mergers in which the merging parties were producing very similar products. Efficiencies are thus more likely in horizontal mergers with the merging firms producing close substitutes (Ravenscraft/Scherer 1989, Gugler et al. 2003). On the other hand, conglomerate mergers seem rather unlikely to lead to additional efficiencies. With regard to vertical mergers, no significant savings in transaction costs were found. Rather, the increased difficulty of entering a market and the extension of dominant positions into upstream or downstream markets seem to dominate (Gugler et al. 2003). The empirical evidence with regard to the efficiency-enhancing effects of mergers is thus mixed at best. Yet, this mixed empirical evidence is not a sufficient reason for not relying on efficiencies in merger control. It is not entirely clear whether these studies do indeed measure what they claim to measure. The
most frequently relied-upon indicators for the success – or failure – of a merger are the development of profits, share prices, or the return on turnover. The connection between these indicators and the efficiency of a firm is, however, anything but clear-cut. Moreover, it is in the nature of these tests that the development of these indicators in the absence of a merger must remain systematically unknown. This means that it would be premature to conclude from lower share prices or reduced profit margins that the merger must have been inefficient. The available studies provide some important insights into the conditions under which mergers are likely to be a success or a failure. But they appear to be far from conclusive. This is why they do not provide sufficient evidence against the incorporation of an efficiency defence into merger control. The critical issue in the recognition of efficiencies certainly is the capacity to assess them. It is quite comprehensible that the burden of proof should be with the entities that want to merge. But that does not solve the problem of information asymmetries. It was already pointed out above that the main problem with the explicit consideration of efficiencies is that none of the concerned actors has any incentive to reveal them according to their true expectations. The pragmatic question thus is whether any second-best mechanisms can be thought of. Over the last years, a literature on the virtues of independent agencies has developed. It started out with the analysis of the effects of independent central banks but has been extended to a number of other areas (Voigt and Salzberger 2002 is an overview). It is conceivable to delegate the task of evaluating the realisable efficiencies of a proposed merger to an independent “efficiency agency” that would specialise in such estimates. A similar suggestion has already been made by Neven et al. (1993).
Unfortunately, the Guidelines on horizontal mergers did not take up any of these proposals. A number of indeterminate legal terms are used, but they are not made sufficiently concrete.

Assess Importance of Asset Specificity

When describing the insights of Transaction Cost Economics, it was pointed out that (1) asset specificity, (2) uncertainty, and (3) the frequency of interactions all play into the optimal governance structure. It was assumed that firms try to economise on transaction costs and that unified governance – i.e., large firms – could be the result. This means that transaction cost arguments are basically efficiency arguments. They are dealt with separately here because they are intimately connected with one specific theoretical development, namely Transaction Cost Economics. Once invested, highly specific assets make the firm that has invested in them subject to opportunistic behaviour by its interaction partners. This might lead to investment rates below the optimum level. It can therefore be in the interest of both sides of an interaction to agree on specific governance structures in order to reduce the risk of being exposed to opportunism. This insight has potential consequences for merger policy: the higher the degree and amount of specific assets involved, the better the justification for a unified governance structure, in this case for a merger.
POSSIBLE CONSEQUENCES OF TRENDS IN THEORY
In order to take asset specificity explicitly into account, one needs either to measure it or to use proxies for it. As described in the part on Transaction Cost Economics in chapter II, four kinds of asset specificity are usually distinguished, namely (1) site specificity (the costs of geographical relocation are great), (2) physical asset specificity (relationship-specific equipment), (3) human asset specificity (learning-by-doing, especially in teams comprising various stages of the production process), and (4) dedicated assets (investments that are incurred due to one specific transaction with one specific customer). Physical proximity of contracting firms has been used as a proxy for site specificity (e.g., by Joskow 1985, 1987, 1990 and Spiller 1985) and R&D expenditure as a proxy for physical asset specificity. With regard to both human asset specificity and dedicated assets, survey data have been used. It is thus possible to get to grips with asset specificity empirically. Since the merger rationale in cases of high asset specificity is quite straightforward, it should be taken into account explicitly.

Assess Importance of Uncertainty

The theoretical argument concerning uncertainty has great similarities with the argument concerning asset specificity: even if interactions could be made beneficial for all parties concerned, they might still not take place if too high a degree of uncertainty is involved. In such cases, welfare could be increased if the interested parties are allowed to form a common governance structure in order to cope with uncertainty. With regard to merger policy, this means that in mergers in which uncertainty plays an important role, the evaluation should be somewhat less strict than in cases in which uncertainty is marginal. Getting to grips with uncertainty empirically is no mean feat. In the literature, various proxies have been discussed; volatility in sales is one of them.
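The volatility proxy can be operationalised very simply, for instance as the coefficient of variation of sales over a number of periods. The following sketch uses invented sales figures and an invented function name; it is our own illustration, not a measure taken from the literature:

```python
# Sketch: coefficient of variation of sales as a crude proxy for
# market uncertainty. Figures are hypothetical illustrations.
from statistics import mean, pstdev

def sales_volatility(sales):
    """Standard deviation of sales relative to mean sales."""
    return pstdev(sales) / mean(sales)

stable = [100, 102, 98, 101, 99]      # a mature, slowly changing market
turbulent = [100, 60, 150, 80, 170]   # a rapidly changing market

print(round(sales_volatility(stable), 3))     # low volatility
print(round(sales_volatility(turbulent), 3))  # much higher volatility
```

On such a reading, a merger in the second, turbulent market would warrant a somewhat less strict evaluation than an identical merger in the first.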
Others (Walker and Weber 1984, 1987) have proposed to focus on one specific kind of uncertainty, namely “technological uncertainty”, measured as the frequency of changes in product specification and the probability of technological improvements. Given that technological uncertainty seems to have increased dramatically, it seems worthwhile to take it into account explicitly. The argument is that mergers are more likely in markets with high uncertainty as proxied by high volatility in sales or high technological uncertainty. These mergers are potentially welfare-increasing and should thus be passed.

Assess Importance of Frequency

Frequency is the last component of the Transaction Cost Economics triad of asset specificity, uncertainty, and frequency. The argument is that the more frequently interactions between two specified parties take place, the higher the potential benefits from a unified governance structure. The implications for merger policy are obvious: assess the frequency with which parties willing to merge interact. The more frequent it is, the higher the chance that efficiencies can be realised, and the more relaxed the competition policy stance should be.
CHAPTER IV
2.5.3. Improvements Due to Trends in the Business Environment

In chapter III, it was shown that the business environment of many companies has fundamentally changed. This holds true for both the supply and the demand side of the market. The value of brands can be subject to quick erosion; consumer loyalty seems to have become fickle in many cases. This means that high market shares can also erode quickly. Predictions concerning future market shares have become virtually impossible. Merger policy should react to these changes, but what is the adequate response? In this section, two proposals for how merger policy could take the globalised and rapidly changing business environment explicitly into account are advanced: one is concerned with the time dimension of the relevant market, the other with its geographic dimension.

Taking the time dimension adequately into account

In a rapidly changing business environment, reliable predictions concerning future market shares have become virtually impossible. But a responsible merger policy needs to take likely developments into account: if today’s high market share resulting from a merger is not very likely to persist tomorrow, the merger should be passed. Some observers have proposed that the time horizon that competition authorities recognise in their decisions should be extended to five or even more years. This is, however, mission impossible: if short-term predictions are impossible, long-term predictions are just as unfeasible. Drawing on market shares in merger analysis is based on the hypothesis that they can be translated into market power and can thus be used to the detriment of consumers. But what if the equation “high market share = low consumer rent” no longer holds? Rapid change induced either by supply-side or by demand-side factors (or by both) can prevent the emergence and the use of market power because it leads to the possibility of unstable market structures.
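The intuition that a high post-merger share may not persist can be made concrete with a toy erosion model: the combined share decays toward a long-run competitive level at an assumed annual churn rate. The decay rule, the churn rates, and all figures are our own hypothetical illustration, not a method used by any competition authority:

```python
# Sketch: projected erosion of a post-merger market share under rapid
# change. The linear decay toward a long-run share and all rates are
# hypothetical assumptions for illustration only.
def projected_share(initial_share, churn_rate, years, long_run_share=0.20):
    """Each year, the share moves churn_rate of the way toward the
    long-run competitive level."""
    share = initial_share
    for _ in range(years):
        share += churn_rate * (long_run_share - share)
    return share

# A 60% combined share, three years ahead, in a fast-moving market
# (30% churn per year) versus a static one (2% churn per year):
print(round(projected_share(0.60, 0.30, 3), 3))
print(round(projected_share(0.60, 0.02, 3), 3))
```

In the fast-moving market the 60% share dissipates quickly, while in the static market it barely moves; on the proposal advanced here, only the latter case would justify a strict evaluation.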
Rapid change is thus the crucial variable. Market structure could only be consolidated – and possibly used in order to raise prices and restrict output – if change is slowed down. Firms with a dominant position might thus have an incentive to try to slow down change. But often, they will not be in a position to succeed in that endeavour: if they have competitors who expect to gain by innovating, they will not be successful. If consumption patterns are subject to rapid change, they will not be successful either. We thus propose that competition authorities analyse (1) the speed of change in a given industry and (2) who controls the factors that are responsible for rapid change in that industry. Ascertaining the speed of change in a given industry is, of course, not easy. Counting the number of patents will not do, as some innovations never get patented and new products do not necessarily have to rely on patentable knowledge (Sony created the Walkman drawing on readily available techniques). As already mentioned, rapid change can be due to supply-side factors, but also to demand-side factors. Demand-side factors are certainly not beyond any influence from the supply side (marketing), but they are difficult to control. In some cases, control over necessary inputs (resources, patents, public orders, etc.) can seriously constrain the capacity to be innovative. In such cases, merger policy should be restrictive. If
parties willing to merge do not, however, control the process, the merger should pass even if a highly concentrated market structure is the – temporary – result. The Guidelines on horizontal mergers deal with the issue of markets in which innovations are important. Notified mergers can be prohibited if the two most innovative firms want to merge, even if they do not command important market shares. According to the Commission, mergers between the most innovative firms in a market can lead to a substantial impediment of effective competition. But there is no empirical evidence which would prove that mergers between innovation leaders do indeed slow down the speed of innovation in the market (Orth 2000). At the end of the day, the implementation of this new criterion can mean that innovation leaders are effectively sanctioned for being “too” innovative. Since innovation is the key to competition in these markets, it would rather be the competitors than competition that is protected by this criterion.

Taking the geographic dimension adequately into account

In chapter III, it was shown that deregulation and privatisation have occurred on a worldwide scale. It was concluded that international interaction costs have in many industries been reduced to such an extent that one can talk of truly global markets. This does not seem to be adequately reflected in merger policy practice. Very often, markets are still delineated on a much narrower scale. What is crucial for the geographical delineation is the possibility of border-crossing trade, in particular imports. It is thus insufficient to look at actual trade statistics. What should instead be taken explicitly into consideration is the sheer possibility of trade. This can be ascertained by analysing the relevant transport costs as well as the costs due to state-mandated barriers to entry.
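A possibility-of-trade test of this kind might be sketched as follows: imports discipline domestic prices if a foreign supplier's landed cost (price plus transport costs plus tariffs) would undercut a modest domestic price increase. The 5% threshold, the function name, and all figures are our own hypothetical assumptions:

```python
# Sketch: "possibility of trade" test for geographic market definition.
# A foreign supplier constrains domestic prices if its landed cost
# undercuts a hypothetical 5% domestic price rise. All figures are
# illustrative assumptions, not an official methodology.
def imports_constrain(domestic_price, foreign_price, transport_cost,
                      tariff_rate, price_rise=0.05):
    landed_cost = foreign_price * (1 + tariff_rate) + transport_cost
    return landed_cost < domestic_price * (1 + price_rise)

# Falling transport costs widen the geographic market:
print(imports_constrain(100, 90, transport_cost=25, tariff_rate=0.10))  # False
print(imports_constrain(100, 90, transport_cost=5, tariff_rate=0.10))   # True
```

The point of the sketch is that the market is delineated by what trade is possible at slightly higher prices, not by the trade flows actually observed.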
This procedure thus explicitly acknowledges the close relationship between defining the relevant market and ascertaining the relevance of barriers to entry. Predictability could be further advanced if the Porter classification of completely globalised, partially globalised, and regional markets were taken into consideration by the Commission in the definition of geographic markets. If firms knew ex ante how their industry was classified, predictability would be greatly increased.

3. A CLOSER LOOK AT BARRIERS TO ENTRY AND CONTESTABILITY

3.1. The Standard Approach

Barriers to entry are probably the single most important component in the analysis of market structures. The underlying conjecture is straightforward: if there are substantial barriers to entry into a market, incumbents will often be able to reduce quantity, increase prices, and reap supernormal profits because the threat that their behaviour will be sanctioned by newcomers is low. Barriers to entry reduce the likelihood that factual entry will take place, as they make entry less profitable. They also reduce the credibility of threatened entry, as incumbents know that newcomers would first have to incur substantial costs. Quite schematically, competition policy – and
economic policy more generally – has two options to deal with barriers in merger cases: either take care to reduce the barriers or prohibit the merger, as a reduced number of actual competitors will reduce competitive pressure. In economic theory, two approaches to defining barriers to entry can be distinguished. The first one can be traced back to Bain (1956); a more recent advocate is Gilbert (1989). In the approaches of both Bain and Gilbert, barriers to entry lead to profits in the sense of rents that incumbents can secure without attracting entry. A different approach was first proposed by Stigler (1968) and more recently by Baumol and Willig (1986). In their approach, barriers to entry are costs that are borne only by entrants but not by incumbents. Bain’s concept of barriers to entry is an excellent example of the structure-conduct-performance paradigm at work: it is interested in a statistical relationship between barriers to entry and profits, with the hypothesis that high barriers will be correlated with high profits. In this approach, causal questions get marginal attention at best: the micro-economic reasons for the existence and the possible sustainability of barriers are not inquired into. Stigler’s concept is based on cost asymmetries. As long as entrants have access to the same inputs as incumbents and have to pay the same amount of money for them, no barrier exists. According to Stigler, economies of scale thus do not constitute a barrier as long as an entrant could buy the underlying production machines at the same prices as the incumbent. Applying Stigler’s approach, advertising would not be a barrier either, as advertising space can be bought by anybody. We will not opt here for one of the proposed definitions but try to tackle the issue from the point of view of a potential entrant: she will only think about entering a market if this promises returns that are higher than the next-best use of her resources.
One could be attracted to a particular market because one observes that the firms active in it all reap above-average returns. The decision whether to factually enter it will be primarily determined by two factors: (1) the amount of irreversible investment necessary and (2) the price that potential entrants expect to prevail post-entry. Barriers to entry heavily influence both factors. Focusing on their origin, three groups of barriers can be distinguished: (1) State-Mandated Barriers to Entry; they are enforced by the state and are thus extremely difficult for private parties to overcome. (2) Structural Barriers to Entry; they cannot be influenced by the incumbents. Production technologies are an example. Structural barriers can be thought of as exogenous to the market participants. (3) Strategic Barriers to Entry; these can be influenced by the incumbents. Expanding capacity could be an example. Strategic barriers can be thought of as endogenous to the process. Sometimes, only two groups of barriers are distinguished, namely absolute cost advantages on the one hand and strategic ones on the other. Absolute advantages would then comprise both state-mandated and structural barriers. We prefer this
more specific delineation because it reminds us that state-mandated barriers are politically made barriers – and in this sense endogenous. Some circumstances that are often discussed in the literature as barriers will now be briefly presented. They are grouped along the lines of the three possible origins just discussed.

State-Mandated Barriers to Entry

(1) Tariff and Non-Tariff Barriers to Trade; entry for foreign firms is made more costly by these barriers. Non-tariff trade barriers can even make entry outright impossible, e.g., if there are quotas and the allowed number of products has already been imported in a given year. More broadly speaking, regulation that leads to a factual discrimination between different potential suppliers (e.g., domestic and foreign firms) is a barrier to entry. (2) Patents, copyrights, and other intellectual property rights; R&D activities are the source of technological progress. Firms will only invest resources in R&D if they can expect positive returns from these activities. This is why most states grant a monopoly on the exclusive use of a new production process for a limited number of years, namely patents. We can already draw a conclusion from these two examples: not all barriers need to be welfare-reducing. While economists agree that most forms of trade barriers are welfare-reducing in most real-life situations, patents can make it more attractive to invest in R&D. They can thus increase technological progress and welfare in the medium and long term.

Structural Barriers to Entry

(3) Structural barriers can be caused by fixed costs. If fixed costs are substantial, they will result in declining average costs over a certain output range. If an incumbent sells a substantial quantity of the good in question and a potential newcomer cannot expect to capture a significant market share right away, then the newcomer will suffer a cost disadvantage if he uses the same production technology as the incumbent.
Economies of scale have therefore been identified as one barrier to entry. (4) Learning by doing can have similar effects, as it can also lead to decreasing average costs. Here, however, they are not due to the “distribution” of the fixed costs over a higher number of goods but to the productivity increases that result from the growing experience that the workforce gains during the production process. (5) Capital requirements are often interpreted as a barrier to entry. In order to enter a market, substantial investment is often necessary, and it will regularly be financed by outside capital. It is then argued that, due to their lack of an established reputation, newcomers will be offered worse terms of finance by banks than incumbents.
(6) Distribution networks, transport costs; it has been said that the distribution of some products presupposes the creation of an extended dealer and service network. This argument has, e.g., been advanced with regard to car dealerships. It is then claimed that even if I have a superior car, I might not be able to enter the market because I would have to develop my own distribution network, which could be too costly. Transport costs can also inhibit entry under obvious circumstances. The preliminary conclusion drawn with regard to state-mandated barriers to entry, namely that barriers need not necessarily be welfare-reducing, can be repeated here: for an optimal allocation of resources, we want firms to realise economies of scale, for their not doing so would translate into wasting resources. The same is, of course, true with regard to learning effects. The capital requirements argument presupposes a failure of the capital markets. Many governments do indeed seem to believe that capital markets fail, as they offer subsidised loans to small and medium-sized enterprises.

Strategic Barriers to Entry

(7) Product differentiation, advertising, and goodwill are often counted as barriers to entry. Here, incumbents invest in advertising and other marketing tools in order to create specific brands and high consumer loyalty. Substitution gaps are thus purposefully created, and the price elasticity of a specific product purposefully reduced. In order to enter a market with branded consumer goods successfully, a firm would thus have to invest in marketing in order to lure loyal customers away from “their” brands. Product differentiation can also mean that there are costs in switching from one brand to another. Once a consumer has bought, e.g., a Polaroid camera, s/he might be willing to accept relatively high prices for films because switching to another system would mean having to make another investment in another camera first.
This has been interpreted as a barrier to entry for producers of other cameras. This aspect is often discussed under the heading of “lock-in”. (8) Vertical restraints; if a vertically integrated firm commands exclusive access to some input necessary for a downstream activity, it could sell the input to its (potential) rivals on the downstream production level at unattractive prices. If the input is essential, this could deter entry. A similar argument has been advanced with regard to exclusive dealing and tying contracts. Here, the purchase of a good x is conditional upon the purchase of another product y. This has been a short summary of a long discussion. Others have heavily criticised many of the circumstances that have been claimed to inhibit entry. Proponents of the Chicago School of antitrust have been particularly busy refuting some of the assumptions often made with regard to barriers.
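The entrant's calculus sketched at the start of this section, entry only if expected post-entry profits beat the next-best use of one's resources after allowing for irreversible investment, can be written down as a toy decision rule. All names and figures are our own illustrative assumptions:

```python
# Sketch: a potential entrant's decision (illustrative assumptions only).
# She enters if expected post-entry profits, net of the irreversible
# (sunk) part of the investment, beat the return on the next-best use
# of her resources.
def enter(expected_postentry_price, unit_cost, expected_quantity,
          sunk_investment, outside_return):
    expected_profit = (expected_postentry_price - unit_cost) * expected_quantity
    return expected_profit - sunk_investment > outside_return

# High barriers: large sunk investment and fierce expected post-entry
# competition (price close to unit cost) -> stay out.
print(enter(10.5, 10.0, 1000, sunk_investment=2000, outside_return=100))  # False
# Low barriers: small sunk investment, comfortable post-entry margin -> enter.
print(enter(12.0, 10.0, 1000, sunk_investment=500, outside_return=100))   # True
```

The sketch makes visible how both factors named above, the irreversible investment and the expected post-entry price, enter the decision, and how barriers work through either channel.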
3.2. Consequences of Recent Theoretical Developments

Members of the Chicago School have always been very critical of state-mandated barriers to entry. In fact, these have been one of their prime targets. But with regard to most other types of barriers, they have often been very harsh critics of their empirical importance or even of the entire logic underlying them. With regard to differentiated products, Bork (1978, 312ff.) observes that the development from standardised to differentiated products is often interpreted as a move from a competitive market to a series of fragmented markets, which would create a number of monopolies, one for each of the sellers. Bork argues that more differentiated products mean that consumer preferences are better reflected and thus concludes that additional value is created by product differentiation and that it would thus be a clear instance of efficiency. Bork (1978, chapter 16) makes similar arguments with regard to advertising and capital requirements. Bork distinguishes between artificial and natural barriers to entry. Identifying natural barriers is a descriptive exercise. Trying to derive normative statements concerning the necessity to correct them by policy instruments would, however, amount to committing a fallacy. The use of game theory has led to substantial improvements in the theory of barriers to entry. This can most easily be illustrated by an example. In 1956, Joe Bain proposed the so-called “theory of limit pricing”. According to that theory, the limit price is defined as the price that makes a potential entrant indifferent between entering a market and staying out of it. It would thus be in the interest of the incumbents to set a price just below the limit price in order to deter entry. Game theorists have criticised this model as inadequate: why should prices before entry deter potential entrants, if all they are interested in are prices after entry?
Instead, what is decisive is the commitment of incumbents to punish entrants once they have entered their markets. Such behaviour could, however, not be made plausible within Bain’s model. Dixit (1980), reinterpreting an old oligopoly model (Stackelberg) in a game-theoretic fashion, has shown that investment in additional capacity can indeed be a credible way to deter entry and thus an effective barrier to entry. Incumbents can thus have first-mover advantages at their disposal. They might be able to use sunk costs as a credible commitment to stay put in a market. If this is signalled to potential entrants, they might no longer be interested in entry. The more recent literature has identified two aspects as central to the barriers to entry debate, namely (1) the amount of irreversible investment or sunk costs that has to be incurred in order to enter the market, and (2) the expected intensity of competition post-entry. Sunk costs play such a central role because they are that part of the investment that cannot be recouped should the entry not be successful and should the firm thus wish to leave the market. Sunk costs can therefore also be called costs of, or barriers to, exit. Before entering a market, rational actors will be interested in ascertaining the amount of money they would lose should they want to leave the market again. The second aspect – the expected intensity of competition post-entry – means that low barriers to entry are per se not a sufficient condition for expecting entry: if potential entrants expect prices not to cover (marginal) costs, or only barely to cover them, they will not enter the market. With regard to sunk costs, a distinction between exogenous and endogenous sunk costs has been widely accepted in the literature. Exogenous sunk costs are costs that have to be incurred independently of the particular strategy chosen by the entrant or the incumbent. Endogenous sunk costs depend, on the other hand, on particular choices made by the entrant and the incumbents. These can encompass R&D costs, advertising costs, or the costs of creating a distribution network. It has been shown that the existence of endogenous sunk costs is detrimental to entry, especially when newcomers anticipate that their entry will lead to an intensification of competition (Sutton 1991). The contribution of Transaction Cost Economics to the barriers to entry debate has rarely been made explicit. Yet representatives of TCE have important arguments to bring to the debate. Let us start with the obvious, namely Williamson’s modified approach towards “efficiency”: if some status quo cannot be improved upon with net gains, it is called “efficient”, no matter how much it deviates from theoretically derived measures of efficiency. With regard to barriers: if they are irremediable, it is not worth making them the object of competition policy (Williamson 1985, 368f.). This position is thus related to the position of the Chicago School described above. It is often said that our knowledge concerning entry has increased considerably with recent theoretical work. Yet a host of basic questions remains to be answered. An implicit assumption of the old structure-conduct-performance approach seems to be that in an ideal world, barriers to entry would be zero. Any costs that newcomers have to incur are seen here as deterring entry and as providing incumbents with a leeway that they will use in order to increase profits and to decrease overall welfare.
If there were no barriers, incumbents would thus be constrained because they would have to take the possibility of entry into account at any point in time. But zero barriers to entry would only make sense if (1) there were no costs of reallocating input factors and (2) transaction costs were zero. Otherwise, the absence of any barriers would mean that the slightest amount of slack in an enterprise would lead to its instantaneous demise and that newly emerged firms could already be gone two or three days later (on the concept of slack, see Leibenstein 1966; on the consequences of assuming zero reallocation costs, see Hirschman 1970). Neither reallocation nor transaction costs are, however, ever even close to zero, so that pushing barriers to entry to zero would come at a high price. That there are instances in which we want barriers to exist has already been shown with regard to technological progress, i.e., dynamic effects. All this means that we would need some criteria according to which the optimal level of barriers could be determined. Theoretical contributions on this issue are, however, very scarce.
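Dixit's commitment argument discussed above can be illustrated with a toy two-stage entry game solved by backward induction. The payoff numbers are invented purely for illustration; the structure is the standard textbook game, not Dixit's full capacity model:

```python
# Sketch: Dixit-style entry deterrence as a toy two-stage game.
# Stage 1: the entrant chooses to enter or stay out.
# Stage 2: the incumbent chooses to fight or accommodate.
# Payoffs are (entrant, incumbent) and are invented for illustration.

def solve(payoffs):
    """Backward induction: the incumbent best-responds ex post,
    and the entrant anticipates that response."""
    fight_e, fight_i = payoffs[("enter", "fight")]
    acc_e, acc_i = payoffs[("enter", "accommodate")]
    out_e, _ = payoffs[("out", None)]
    response = "fight" if fight_i > acc_i else "accommodate"
    entrant_payoff = fight_e if response == "fight" else acc_e
    return "enter" if entrant_payoff > out_e else "stay out"

# Without sunk capacity, fighting is costly ex post (1 < 3), so the
# threat is not credible and entry occurs:
no_commitment = {("enter", "fight"): (-1, 1),
                 ("enter", "accommodate"): (2, 3),
                 ("out", None): (0, 6)}
# Pre-installed sunk capacity lowers the cost of fighting (4 > 3),
# making the threat credible and deterring entry:
with_sunk_capacity = {("enter", "fight"): (-1, 4),
                      ("enter", "accommodate"): (2, 3),
                      ("out", None): (0, 5)}

print(solve(no_commitment))       # entrant enters
print(solve(with_sunk_capacity))  # entry deterred
```

The sketch captures exactly the game-theoretic critique of Bain: pre-entry prices are irrelevant; what deters entry is a commitment that changes the incumbent's ex-post incentives.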
3.3. Consequences of Recent Trends in the Business Environment

In this section, the impact of various recent trends in the business environment on the barriers to entry introduced in 3.1 above is analysed. It will be shown that the relevance of state-mandated as well as structural barriers to entry has substantially decreased. This means that the constraining effect of potential competition has increased. If this can be convincingly shown, then it should be reflected in European merger policy. In chapter III, the global trends towards deregulation, privatisation, and liberalisation were described. They all translate into reduced (state-mandated) barriers to entry. We now turn to the economic and technological trends in order to ascertain their consequences for the relevance of barriers to entry. (1) Rapid technological change. Rapid technological change means that product and innovation cycles have substantially shortened. On the one hand, this means that firms will try to realise economies of scale as fast as possible, i.e., within one product generation. On the other hand, it means that, as the time periods between product generations have become ever shorter, it is possible for newcomers to enter the market more often than in the past. As products are constantly remodelled, traditional barriers to entry have lost much of their relevance. (2) Increasing mobility of supply. This is equivalent to the increase in supply-side flexibility, which means that incumbents always have to fear entry by newcomers and especially by firms in neighbouring product markets that use the same technology. (3) Developments in transport costs. The drastic reduction in transport costs means that one barrier to entry that has traditionally been very important has become much less relevant. The reduction in transport costs makes (1) entry by fringe firms more likely.
Furthermore, the reduction in transport costs has also facilitated (2) sourcing from factories that are located outside the geographic market as traditionally defined and (3) outsourcing. All three phenomena have increased the relevance of potential competition. (4) The Internet. In some markets, the Internet has led to substantially reduced distribution costs. Markets in which the necessity of an established distribution network constituted an important barrier to entry can now be entered by setting up a website. (5) Homogenisation of preferences. This trend can also reduce distribution costs: as long as markets were national in scope, a variety of marketing campaigns was necessary in order to enter the various markets. With the homogenisation of preferences, one campaign will often do. This means that barriers to entry have shrunk.
But product differentiation has also become more important. This means that some profitable niches open up for smaller producers. As large firms are not able to cover all possible niches, this development in consumer preferences has led to a reduction in entry barriers. (6) Rapid change of consumption patterns. Rapid changes in consumption patterns make the prediction of market developments and market shares ever more difficult. This means that it has become more difficult for established firms to conserve their advantages. It also means that newcomers are given chances and that barriers to entry have become less relevant. Sunk costs are a big issue in the barriers to entry debate. Have sunk costs been reduced as a consequence of recent trends in the business environment? We think so. The single most important change in business is the possibility to slice up the value chain and to buy many components from independent producers. This means that entry is often possible without having to incur huge sunk costs, because expensive production tools or the like no longer have to be purchased. Instead, the necessary products can be bought from independent suppliers. The impact of recent trends in the business environment on barriers to entry is clear: they all indicate that barriers have lost relevance over recent years.

3.4. Current EU Practice

In assessing the question whether a notified merger will create or strengthen a dominant position, the Commission has always taken the probability of potential competition into account. It has thus also taken the relevance of barriers to entry into account. The Commission declared that potential competition would always be recognised in its decisions if there were strong evidence of a high probability of strong and quick market entry, either by an incumbent expanding capacity or by completely new competitors.
Potential entry must constitute a threat that is sufficient to hinder the incumbent from behaving independently of market pressures (Commission of the European Communities 1992, 406ff.). Until the recent changes in merger policy, the Commission used three criteria, namely (1) the likelihood of entry, (2) its timeliness, and (3) its effectiveness. After May 2004, the last criterion was modified to require sufficient entry. It remains open, however, how the Commission defines “strong evidence of a high probability of strong and quick entry”. This is an indeterminate legal term that needs to be made concrete in the decision practice of the Commission. Independently of the concretisation chosen by the Commission, it can already be stated that the Commission has substantial discretionary power at its disposal concerning the speed and strength of entry. With regard to predictability, this is unsatisfactory. This is even more so if the Commission uses its discretionary powers inconsistently. We believe we have some evidence of inconsistent decision-making. In its evaluation of the probability and the strength of entry, the Commission’s central focus is on the existence of barriers to entry (and to exit). Regarding the evaluation of the severity of barriers to entry, we also observe substantial discretionary powers (Schultz/Wagemann 2001, 298). Prima facie, it seems possible to identify two phases in the Commission’s decision-making practice. The first phase comprises the period from the start of the merger policy in 1991 until approximately 1996. During this phase, even substantial barriers to entry did not prevent the Commission from emphasising the importance of potential competition. Such a “liberal” decision practice can be found in the cases MANNESMANN/VALLOUREC/ILVA, MERCEDES-BENZ/KÄSSBOHRER, and SIEMENS/ITALTEL. In MANNESMANN/VALLOUREC/ILVA, the Commission pointed at potential competition from producers in Central and Eastern Europe although they had neither sufficient technological know-how at their disposal nor sufficient production capacity. The average quality of stainless seamless steel tubes was far below West European standards, and substitutability was more than questionable from the point of view of West European buyers. Consequently, the liberal interpretation concerning potential competition was subsequently criticised (Ebenroth and Lange 1994). In MERCEDES-BENZ/KÄSSBOHRER, the Commission identified the existing barriers in the form of established dealer and service networks and customer loyalty as not insurmountable and specifically emphasised the relevance of public procurement procedures with regard to city and inter-city buses, which would be sufficient to generate potential competition. The transition to a more restrictive interpretation concerning the effectiveness of potential competition is connected with the case ST. GOBAIN/NOM/WACKER CHEMIE, which was decided in 1996. The Commission explicitly denied competitive pressure stemming from Chinese, Russian, and Ukrainian producers of silicon carbide due to missing production capacities and technological know-how (Schmidt/Schmidt 1997, 89).
In the meantime, the Commission has switched to a restrictive interpretation of potential competition. This is well documented in a number of cases such as ENSO/STORA, VOLVO/SCANIA, AIRTOURS/FIRST CHOICE, and SCA/METSÄ TISSUE. It is noteworthy that the Commission has repeatedly pointed not only to the high investment costs necessary for expanding production facilities and to vertical integration but also to customer loyalty and the ensuing missing substitution possibilities from the point of view of the consumer. All in all, the Commission has tended to interpret the probability and the strength of entry much more restrictively than in the early days of European merger policy. This means that consistency – and thus also predictability – of European merger policy are a problem. This can, e.g., be seen if one compares the decision in MERCEDES-BENZ/KÄSSBOHRER on the one hand with that in VOLVO/SCANIA on the other. Barriers to entry that were quite similar in scope were evaluated quite differently with regard to their effect on potential competition. Attempts to ascertain how the Commission concretises the indeterminate legal terms of probability and strength by looking at its decision practice therefore fail. Looking at the first twelve years of European merger policy as a whole, no consistent picture emerges.
A major cause of this unsatisfactory result seems to lie in the fact that the Commission still has not developed an analytical framework based on economic theory for the evaluation of barriers to entry and potential competition (Harbord/Hoehn 1994, 441). The current decision-making practice is determined by the question whether barriers are present. Subsequently, their relevance is ascertained by drawing on plausibility considerations (Bishop/Walker 1999, 157). This procedure clearly lacks sound economic foundations. In its decision-making practice, the Commission has tended to draw conclusions concerning the relevance of potential competition from successful entry in the past (as, e.g., in the cases NESTLÉ/PERRIER, MERCEDES-BENZ/KÄSSBOHRER, or BASF/BAYER/HOECHST/DYSTAR). This is economically unsound: the fact that there has not been successful entry in the past is in no way indicative of the ineffectiveness of potential competition. The absence of factual entry can have either of two causes: (1) the degree of irreversible investment is indeed prohibitive, or (2) the market is characterised by a high degree of competition in which the incumbents’ behaviour is effectively constrained by the threat of entry, which would mean that factual entry would indeed be unprofitable. The absence of historical entry can thus mean two things: that there is a low degree of competition or that there is a high degree of competition. Factual historical entry as such is thus not an adequate indicator of the relevance of potential competition. It therefore seems necessary to take the profit margins of the incumbents explicitly into account. If high profits come along with no entry, this would seem to indicate high barriers to entry. But if there are no supernormal profits, potential competition might be working quite effectively. The Commission should thus modify its decision-making practice: as shown, past entry is an inadequate indicator.
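The reasoning can be condensed into a simple decision rule. The following sketch is purely illustrative – the function name, the binary evidence categories, and the verbal conclusions are our own assumptions, not Commission practice:

```python
def interpret_entry_evidence(past_entry: bool, supernormal_profits: bool) -> str:
    """Illustrative reading of the evidence on barriers to entry.

    The absence of past entry is ambiguous on its own: it may reflect
    prohibitive barriers, or a contestable market in which the mere
    threat of entry disciplines incumbents.  Incumbents' profit margins
    help to disambiguate the two cases.
    """
    if past_entry:
        return "barriers were surmountable; check whether conditions persist"
    if supernormal_profits:
        return "high barriers likely: profits persist, yet no entrant appears"
    return "potential competition may be effective: entry threat keeps profits normal"


# Walk through all evidence combinations:
for entry in (True, False):
    for profits in (True, False):
        print(f"entry={entry}, supernormal profits={profits}: "
              f"{interpret_entry_evidence(entry, profits)}")
```

The point of the sketch is merely that past entry and current margins have to be read jointly; neither piece of evidence is informative in isolation.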
The decisive question is whether potential entrants can credibly threaten entry in the event that a merged firm misuses the market power it enjoys. As just shown, factual entry as such is irrelevant. For evaluating whether the entry threat is credible, the Commission draws on two factors that are not entirely convincing. The first is a survey among potential competitors in which these are asked whether they are considering or planning entry (as in ENSO/STORA). The incentives of the surveyed firms to reveal information truthfully are, however, dubious, as they know exactly why the Commission is asking. If the surveyed firms believe the merger to be efficiency-enhancing, they have an interest in its not taking place. They thus have an incentive to overstate the relevance of entry barriers in order to achieve as many remedies as possible, or even an outright prohibition of the merger. The second factor the Commission draws upon in order to ascertain the credibility of an entry threat is the existence of idle capacity. This is problematic, as idle capacity is not necessarily a precondition for entry. If rational actors are assumed to sell their products where they can earn the highest returns, it is at least conceivable that firms use their limited capacity to redirect their sales to an area where higher profits can be made. This seems especially likely if there are no long-term delivery contracts. Idle capacity is thus not a precondition for entry.
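A back-of-the-envelope calculation illustrates why a fully utilised producer can still be a credible entrant. All figures below are invented for the purpose of the example:

```python
# A producer currently sells its entire capacity at home.  If the
# candidate market promises a higher per-unit margin (e.g. because of
# post-merger prices), redirecting existing output is profitable -
# no idle capacity is needed.
capacity = 100        # units currently sold at home (assumed figure)
margin_home = 4.0     # profit per unit in the home market (assumed)
margin_target = 7.0   # profit per unit in the candidate market (assumed)

redirect = margin_target > margin_home
profit_gain = capacity * (margin_target - margin_home) if redirect else 0.0
print(f"redirect sales: {redirect}, extra profit: {profit_gain}")
# Long-term delivery contracts at home would shrink the redirectable
# share - and with it the credibility of the entry threat.
```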
The Guidelines that have been applied since May 2004 deal with potential competition under the heading “entry”. There, the Commission confirms its rather restrictive stance in recognising potential competition by demanding that entry “must be shown to be likely, timely and sufficient”. In its evaluation, the Commission will also take the history of the particular industry into account. Successful past entry is taken as a positive sign for the possibility of successful entry in the future. As already pointed out above, this procedure seems overly restrictive. If barriers to entry are low and entry promises high profits, rational actors can be expected to enter a market. Given that barriers to entry are low, the absence of past entry can be an indicator of functioning competition: because incumbents are aware of the “danger” of new entry, they have kept their prices at low levels, and because prices have been low, entry has been unattractive. Demanding names and addresses of potential competitors neglects the fact that new entrants can come not only from related product and geographical markets but also from entirely different markets and from completely new firms. Naming them is close to impossible. But implementation of the new Guidelines could lead to an even more restrictive stance. According to the Guidelines, the market shares of two firms that have not been active on the same market but are evaluated as potential competitors and that want to merge can be added. This is like adding apples and oranges. But that is not all: over the last couple of years, so-called “conglomerate effects” have been hotly debated. “Conglomerate effects” are supposed to be present if a merger allows the transfer of a dominant position from one market to another product or regional market through leveraging, bundling, portfolio effects, or financial strength. The theoretical foundation of this concept remains, however, controversial (Kühn 2002).
US merger policy introduced conglomerate effects in the late 1960s but got rid of them with the Merger Guidelines of 1982. It would be ironic if the EU were to introduce an instrument that has proven to protect competitors rather than competition in the US (Department of Justice 2001). It can thus be concluded that economic aspects are currently not sufficiently taken into account in the evaluation of the relevance of potential competition. The evaluations of the probability and the strength of entry seem incoherent if one compares a number of cases. This means that there is substantial legal uncertainty for firms considering a merger that involves high market shares. We will therefore turn to outlining a number of proposals for how the current procedure could be improved upon.

3.5. Proposals Towards Enhancing Predictability

In principle, two possibilities to take potential competition into account can be distinguished: (1) A one-step procedure: potential competition is already recognised when defining the relevant product and geographical markets. Producers from neighbouring markets – both product-wise and geography-wise – who could, at least in principle, enter the market in question would then already be
counted as part of that market. This would often result in low market shares of the firms willing to merge, and the chances that the merger would lead to the creation or strengthening of a dominant position would be correspondingly low. (2) A two-step procedure: here, the relevant market is defined first, without taking any potential competition into account. If a merger seems to create a dominant position, the relevance of potential competition is then ascertained in order to check whether it has any effects offsetting the danger of a dominant position. Both procedures should be constructed in such a way that the use of either procedure leads to identical results. In practice, this condition is, however, often not met. We want to show this drawing on the example of the case VOLVO/SCANIA:

Table 4: The procedure used to evaluate the relevance of potential competition often influences the outcome

                Volvo    Scania   Sum
Sweden           44.7      46.1   90.8
Finland          34.3      30.8   65.1
Denmark          28.7      30.2   58.9
Norway           38.0      32.2   70.2
Ireland          22.0      27.1   49.1
EEA Average      15.2      15.6   30.8
The market shares for the Scandinavian markets and Ireland show the domestic strength of the firms involved. If one starts, however, with the EEA as the relevant geographical market, market shares are substantially lower, due to the fact that the EEA is more encompassing than the “home markets” of the firms that wanted to merge. The sum of the market shares in the EEA is some 31%, a market share from which the Commission would not automatically conclude that a dominant position existed. As is well known, the merger was prohibited on the basis of high market shares in the Scandinavian markets. The cause of this can be found in the reluctance with which potential competition is taken into account in European merger control. Once the Commission has ascertained high market shares on markets defined along nation-state borders, the firms willing to merge often have a hard time proving that these are not equivalent to a dominant position. In this case, Volvo and Scania could not convince the Commission that their strong position in the Scandinavian countries was sufficiently controlled by potential competition. Predicting potential competition with a high degree of reliability is almost impossible. In merger policy, pragmatic but robust solutions are often better than ones that are highly sensitive to specific assumptions. A pragmatic way of taking potential competition into account could therefore be to delineate the relevant market rather broadly. This procedure promises to be more successful than the two-step procedure, in which the crucial question is whether a dominant market position is sufficiently controlled by potential competition.
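How strongly the chosen market definition drives the outcome can be replicated from the figures in Table 4. The short sketch below is ours; in particular, the 40% cut-off is an illustrative placeholder, not an official dominance threshold:

```python
# Market shares (%) from Table 4 (VOLVO/SCANIA): (Volvo, Scania) per market.
shares = {
    "Sweden":      (44.7, 46.1),
    "Finland":     (34.3, 30.8),
    "Denmark":     (28.7, 30.2),
    "Norway":      (38.0, 32.2),
    "Ireland":     (22.0, 27.1),
    "EEA Average": (15.2, 15.6),
}

THRESHOLD = 40.0  # illustrative cut-off triggering a closer dominance inquiry

for market, (volvo, scania) in shares.items():
    combined = volvo + scania
    verdict = "closer inquiry" if combined >= THRESHOLD else "unproblematic"
    print(f"{market:12s} {combined:5.1f}%  {verdict}")
```

On the narrow national markets every combined share clears the illustrative threshold, while on the broad EEA market – the one-step view – the very same merger appears unproblematic.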
In competition policy, however, the two-step procedure is used more frequently than the one-step procedure. This is why we now turn to describing how potential competition can be evaluated within the two-step framework. The assessment of the relevance of potential competition in a two-step procedure could be separated into four elements:
(1) Assessment of Absolute Advantages enjoyed by Incumbents
(2) Assessment of Relative Costs
(3) Assessment of Sunk Costs
(4) Reaction Times and Contract Length

Figure 10: Four Elements for Assessing Barriers to Entry
The first element consists in ascertaining the absolute advantages of the established firm or firms. The primary focus will be on absolute cost advantages that are the result of exclusive access to some needed input or of state-mandated barriers to entry. With regard to state-mandated barriers to entry, it should be noted that one of the main tasks of the Commission is to help remove barriers that prevent the completion of the common market (see, e.g., VOLVO/SCANIA and the relevance of the so-called Cab-Crash Test). The implementation of the Court’s Cassis jurisprudence should ensure that state-mandated barriers will not be a barrier to mergers. The second element could consist in checking whether the potential competitors have at their disposal any cost advantages over the established producers that would allow them to (over-)compensate possible disadvantages in other areas. The source of such cost advantages could be a more cost-efficient capacity utilisation or economies of scope in research and development or in distribution. Lower labour and/or finance costs are another potential source of relevant cost advantages. The next element would consist in an evaluation of sunk costs. A logical first step would seem to be to estimate the price certain goods could secure on a second-hand market, a simple test that the Commission does not carry out on a routine basis. The evaluation should distinguish between exogenous and endogenous sunk costs. Moreover, the expected profits need to be compared to the endogenous sunk costs. The possible effects of post-entry price competition need to be factored in. If highly intensive price competition is expected, endogenous sunk costs can be expected to be highly relevant. If important price reductions seem highly likely, a potential newcomer cannot expect to make any profits in the short run. Endogenous sunk costs such as these, combined with a high intensity of competition, can therefore impede market entry. The last element would consist in estimating the time that the incumbents would need in order to react to the entry of a new competitor. If the incumbents can react immediately, market entry is less attractive. If the market is characterised by long-term contracts between the incumbents and the consumers, this constrains the short-term flexibility of incumbents to react to a possible entry. The newcomer enjoys an advantage because it can serve customers at better conditions. The existence of long-term contracts is therefore not always a barrier to entry, because it reduces the possibilities of incumbents to react quickly to newcomers. This is easy to see as long as one is talking about a growing market in the sense of an increasing number of consumers. But even in stagnating markets with long-term contracts, not all contracts will end at the same point in time. Rather, a number of contracts will terminate every single year, which means that a number of customers are able to change suppliers every year. This schematic four-element procedure could be incorporated into the evaluation of potential competition within merger control. The advantages of such a standardised procedure are that economic considerations would be given their due place and that firms willing to merge could make an educated guess concerning the expected decision of the Commission. It would thus lead to an increase in predictability.

4. ASSESSING COLLECTIVE DOMINANCE

Collective dominance is among the most challenging topics in competition theory. It is thus no wonder that competition policy based on the notion of collective dominance remains hotly debated.
A first difficulty already lies in adequately defining collective dominance – or co-ordinated effects, as this phenomenon has been called in the 2004 Guidelines on horizontal mergers. The notion of collective dominance serves to identify situations in which a number of firms are able to act collectively as if they were a monopolist or, at least, a dominant single firm in a market. The issue of joint profit maximisation is thus at stake in discussions concerning collective dominance. A precondition for dominating a market collectively is the capacity of the participating oligopolists to co-ordinate their behaviour. Such co-ordinated behaviour constraining competition in order to achieve collective advantages is called collusion. The co-ordination of behaviour constraining competition can be brought about either explicitly (“explicit collusion”) or implicitly (“implicit collusion” or “tacit collusion”). Before beginning with the analysis of collective dominance and collusive behaviour, a distinction between explicit and implicit collusion should be made (Stigler 1964, Hay/Kelley 1974). Firms can collude in two ways. On the one hand, collusion can take place through meetings in which representatives of the various firms explicitly agree on certain behaviour. This is called explicit collusion. On the other hand, collusion can also take place through the mere recognition of oligopolistic interdependence. Tacit or implicit collusion exists where, in the absence of any formal attempts to implement a collusive outcome, firms understand that if each firm competes less vigorously, they might all be able to enjoy higher prices and higher profits.
Collective Dominance
Collusive behaviour: behaviour constraining competition in order to attain collective benefits
– Explicit collusion: co-ordinated parallel behaviour via cartels, price leadership, and consciously arranged co-ordinated behaviour; sequence: agreement – deviation – detection – punishment
– Implicit collusion: no explicit agreement between participants; conscious, but not explicitly co-ordinated, parallel behaviour (tacit collusion); sequence: understanding* – deviation – detection – punishment (* usually also counted as agreement in the sense of implicit collusion)

Figure 11: Explicit vs. Implicit Collusion
In both cases, some sort of agreement is necessary. The conditions that determine whether explicit or implicit agreement is more likely are similar to those that make sustainable cartel behaviour more likely. This is why industries in which cartels – or attempts to form them – have been observed are regularly suspected to be fertile ground for collective dominance. Factors that facilitate explicit or implicit collusion are hence systematically discussed together. With regard to explicit collusion, there are often per se prohibitions. Competition authorities thus have few problems dealing with it. Dealing with implicit collusion is often considerably more difficult. Implicit collusion is closely connected to what is called oligopolistic interdependence: in narrow oligopolies, mutual interdependence between the market participants is high. If one of the incumbents changes the use of one (or more) of its action parameters, these changes have direct effects on its competitors, so that they will regularly change their behaviour too. If the reactions of these competitors would make the firm contemplating a change of behaviour worse off, the firm has no incentive to change its behaviour, provided it can correctly anticipate the reactions of its competitors. It has been said that firms in such situations develop a common “understanding” (Bishop 1999, 37) and “learn to play the game to their mutual advantage” (Caffara/Kühn 1999, 355). A possible consequence is the emergence of parallel behaviour, which leads to inefficiencies comparable to those of a monopoly or of single dominance in the sense that price exceeds marginal cost. But parallel behaviour leading to inefficiencies is only one of a variety of possible outcomes in oligopolistic markets. Fierce competition in which competitors use all the action parameters is often just as likely (Flint 1978). Collusion is in no way the only possible result of oligopolistic structures (see, e.g., Shy 1995, 115). To be a little more concrete, we will talk of collective dominance if two or more firms are able to permanently maintain prices on a level that is higher than that which would result in the case of perfect competition (Ysewyn/Caffara 1998). This goal might be reached by using the entire set of action parameters such as capacities, qualities, areas of supply, etc. It is only of minor importance whether co-ordination takes place explicitly or implicitly.

4.1. Standard Approach

The standard approach for ascertaining collective dominance is based on the structure-conduct-performance paradigm of the Harvard School. According to that paradigm, the structure of a given market determines the conduct of the participants in the market, and performance is, in turn, determined by conduct. Competition policy based on that paradigm is primarily interested in influencing market structure in a way that is supposedly conducive to reaching optimal performance. It is thus based on an instrumentalist notion of competition. It is further based on the so-called concentration-collusion doctrine, i.e., on the assumption that high degrees of concentration combined with high barriers to entry will facilitate collusive behaviour (e.g., Bain 1968, 92).
For a long time, this assumption was believed to hold empirically, and it determined policy recommendations with regard to competition policy (see, e.g., Weiss 1974). Competition policy based on the notion of workable competition was therefore interested in identifying those structural elements that facilitate collusion. The factors commonly mentioned include the following:

(1) Degree of concentration
High degrees of concentration are supposed to increase the probability of collusion (Bain 1951, 1956). This conjecture is based on the assumption that it is easier for a low number of actors to agree – either explicitly or implicitly – on parallel behaviour because the transaction costs of (implicit) agreement would be lower. Moreover, a lower number of actors in a market increases oligopolistic interdependence, which means that the incentives to collude increase more than proportionally as the number of competitors gets smaller.

(2) Transparency of the market
A high degree of transparency in a market increases the probability of collusive behaviour for a number of reasons. It reduces the actors’ transaction costs of tacitly agreeing on collusion. These are not transaction costs in the sense of negotiating any agreement with competitors, as there is no explicit agreement, but rather in the sense of being able to be informed about the relevant parameters and the actions of others at low cost. This enables firms to act accordingly without having to incur much effort. Transparency further increases oligopolistic interdependence between competitors: under high transparency, competitive advances are easily observable for competitors, who can react without much delay, so that competitive advances promise little profit. A high degree of transparency substantially reduces uncertainty concerning the behaviour of competitors, which reduces the incentives to behave competitively (Clark 1940, 241).

(3) Product homogeneity
Product homogeneity has a similar effect on the possibility of collusive behaviour as transparency. Homogeneous goods increase oligopolistic interdependence, since none of the competitors has any discretion in using action parameters. Homogeneous products thus facilitate the emergence of explicit or implicit collusion between their suppliers.

(4) Symmetry conditions
Firms can have similar market shares, similar cost functions, similar costs of financing, etc. These are sometimes referred to as symmetry conditions. The more symmetrical the firms in a market are, the more likely collusion appears to be, because the firms are subject to similar problems. If, on the other hand, firms are not symmetrical, collusion will often not be a rational strategy: a highly profitable firm will often be better off outperforming an unprofitable rival and having it disappear from the market entirely. Differences in cost structure lead to substantial problems in determining the co-ordinated price.

(5) High barriers to entry
The first three factors focus on what has been called the internal stability of an oligopoly and the chances that some sort of collusion between oligopolists will emerge.
High barriers to entry, by contrast, bear on the external stability, i.e., on the question whether – given that the incumbents manage to collude – outsiders threaten the collusion. Theory predicts that successful collusion leads to higher prices and lower quantity. This might make entry attractive for outsiders, who thus threaten the stability of collusive behaviour externally (Stigler 1968, 233). The higher the barriers to entry, the less incumbents need to fear the entry of newcomers. High barriers to entry are thus a crucial precondition for successful collusion.

(6) Surplus capacities
Traditionally, surplus capacity has been named as a factor facilitating collusion. A low degree of capacity utilisation constitutes an incentive for every company to raise its degree of capacity utilisation to the detriment of its competitors. These can, in turn, react quickly, and such behaviour thus entails the danger of ruinous competition. Rational oligopolists would, however, anticipate this danger, and no competitor thus has an incentive to increase its degree of capacity utilisation unilaterally. This would increase the probability of parallel behaviour.

(7) Price-inelastic demand
Highly elastic demand functions are equivalent to a high degree of substitutability. They are thus an indicator of a high degree of competition. Price-inelastic demand, however, increases the incentives for collusive behaviour: if demand is inelastic, incentives for colluding on higher prices are high because the loss in quantities sold will be relatively small, so that collusion promises higher profits (Kantzenbach et al. 1989).

These factors are the most important structural variables that – according to the structure-conduct-performance paradigm – make collective dominance likely. It should be stressed that their focus is clearly on market structure and that they are constrained to a static perspective. Furthermore, individual incentives to behave accordingly are often not taken into consideration at all.

4.2. Recent Theoretical Developments

Game theory provides a more precise description of oligopolistic interdependence. Game theory is based on the analysis of strategic interaction situations. Since this is the name of the game in narrow oligopolies, game theory has been very successful in identifying the necessary conditions for successful collusion. In game theory, oligopolists are modelled as rational participants in a game. All participants choose their best strategy, making assumptions concerning the behaviour of the others. Economists speak of a Nash equilibrium if no participant in a game has an incentive to change his or her behaviour given the behaviour of all other participants. Let us illustrate the concept using a fictitious example: suppose there are only two active firms, so that we are dealing with a duopoly. Both firms can choose between two strategies, namely to collude with the other firm or to behave competitively.
The numbers in the cells represent the profits that both firms can make given their own strategy and the strategy of the other firm. Firm 1’s profits are displayed first, firm 2’s profits second.

Table 5: Strategic Interaction in a Duopoly

                             Firm 2
                      Collusion   Competition
Firm 1  Collusion       10; 10        1; 15
        Competition     15; 1         3; 3
If both could agree on the collusion strategy, both firms could realise a profit of 10 units each. But given that firm 2 is expected to collude, firm 1 could make itself better off still if it behaved competitively. No matter what the other firm does, it is always better to behave competitively, even though the duopolists will then only reach a profit of three units each, which is obviously less than the ten units they could secure were they able to collude successfully! The basic game underlying this structure is, of course, the famous Prisoners’ Dilemma. From the welfare-economic point of view, society is interested in maintaining the dilemma, as it prevents the duopolists from securing profits to the detriment of consumers. This outcome is a Nash equilibrium because, given that the other firm behaves competitively, there is no way to make oneself better off by unilaterally changing strategies. Drawing on simple game-theoretic assumptions, it can therefore be shown that the collusive outcome is in no way the only possible outcome, even if there are only two firms in the market. But if some of the underlying assumptions are changed, collusion is a possible outcome also on the basis of game-theoretic reasoning. The competitive outcome is the unique equilibrium only if the game is played only once (in “one-shot games”, to use game-theoretic parlance). If the game is played repeatedly, it can become rational to co-operate with the other player. Whether this is the case depends on a number of factors, most importantly the time preference of the actors concerned: if profit today is worth a lot more to them than profit tomorrow, non-collusive behaviour is more likely. If the actors do, however, have a rather long time horizon and expect to play the game over and over again, collusion becomes more likely. The crux of game theory is that in infinitely repeated games (i.e., in games in which there is no last round or in which the last round cannot be anticipated with any degree of certainty), almost any outcome is a possible equilibrium (the so-called “Folk theorem”). What is needed is an equilibrium selection theory, i.e., a theory that informs us under what circumstances a particular equilibrium can be expected. To date, no convincing solutions to this problem have been offered.
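The one-shot logic of Table 5 and the repeated-game qualification can be made concrete in a few lines. The payoffs are those of Table 5; the grim-trigger calculation (collude until the rival deviates once, then compete for ever) is a standard textbook construction added here for illustration, not a tool the Commission uses:

```python
# Payoffs from Table 5; strategies: "C" = collude, "D" = compete.
# payoffs[(row, col)] = (row player's profit, column player's profit)
payoffs = {
    ("C", "C"): (10, 10),
    ("C", "D"): (1, 15),
    ("D", "C"): (15, 1),
    ("D", "D"): (3, 3),
}

def best_response(opponent: str) -> str:
    """Row player's profit-maximising strategy against a fixed opponent move."""
    return max("CD", key=lambda s: payoffs[(s, opponent)][0])

# Competing is a dominant strategy, so (D, D) is the unique Nash
# equilibrium of the one-shot game:
assert best_response("C") == "D" and best_response("D") == "D"

# In the infinitely repeated game with discount factor d, grim-trigger
# collusion is sustainable iff 10/(1-d) >= 15 + 3*d/(1-d), which
# simplifies to d >= (15 - 10) / (15 - 3) = 5/12.
critical_delta = (15 - 10) / (15 - 3)
print(f"collusion sustainable for discount factors >= {critical_delta:.3f}")
```

For discount factors above roughly 0.417 the collusive outcome becomes one of the many equilibria the Folk theorem allows; below it, only the competitive outcome survives.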
Although game theory is not of great help in predicting specific outcomes, it constitutes considerable progress in comparison with the more traditional approach: whereas the traditional approach focused heavily on static aspects of market structure, game theory focuses on the behavioural incentives of the participants. It is interested in the analysis of the stability of collusive equilibria. To this end, the process of collusion is separated into four steps, namely agreement, deviation, detection, and punishment (Kantzenbach et al. 1995). With regard to agreement, the conditions for its coming about are identified. This is thus very similar to the more traditional approach. Again: agreement is not to be misunderstood as implying explicit collusion. It covers any form of co-operative behaviour between oligopolists and also includes tacit collusion. The stability, and thus the success, of an agreement is at stake when any of the participating oligopolists has incentives to cheat.19 Rational self-interest can not only lead to agreement, it can also lead to reneging upon agreement. Under the assumption that all the other oligopolists will stick to the agreement, it might appear profitable for a firm to reduce its own price marginally below the collusive price in order to increase quantity – and profits. Incentives to cheat are, however, not sufficient for the instability of collusive behaviour, because those oligopolists complying with the agreement might react to those who do not. This presupposes their ability to detect cheating. Yet again, the capacity to detect cheating behaviour is not sufficient. What is needed is a high probability that cheating that has been detected will, in fact, be punished.20 Structurally, the problem of punishing a cheater is identical to the Prisoners’ Dilemma as described above: everybody would profit if the cheater were punished. Punishment is, however, costly. One would therefore profit even more if punishment were successful but one did not have to spend any resources on it. Even if all the other firms have an interest in the cheater being punished, it is in no way certain that they will actually be able to punish him. Summarising, the important steps in collusive behaviour are the following:
Agreement (driven mainly by structural factors) – Cheating – Detection – Punishment (the latter three driven mainly by dynamic factors)

Figure 12: Steps in Collusive Behaviour
An agreement will only be sustainable if all four steps point in that direction. The traditional approach focuses primarily on the first step. Game theory teaches us that the actors also need to be able to detect behaviour that is not in accordance with the collusion as well as to punish such behaviour. That these steps are crucial can be shown by a number of examples. Traditionally, price transparency was believed to be a factor conducive to collusive behaviour because it is connected to a high degree of oligopolistic interdependence and makes the immediate detection of cheating possible. This is especially the case when transparency among suppliers is high. But as soon as one assumes that transparency is not only present among suppliers but also between the supply and the demand side, it can create high incentives for deviating from collusion, because the deviating oligopolist can hope to attract additional quantity fast and to the detriment of his competitors. The incentives to cheat will be the higher, the more difficult it is for the competitors to react, e.g., because they are facing capacity constraints. Contrary to traditional wisdom, transparency in prices can thus also destabilise collusion (Motta 2000, 19). Similar considerations apply to the structural factor of product homogeneity as discussed above. Traditionally, it has been assumed that a high degree of homogeneity will facilitate collusive behaviour. But this is not necessarily true: a high degree of homogeneity can make it especially attractive not to collude, as little loyalty due to specific preferences exists, which means that the consumers’ willingness to switch will be high if small advantages are offered. Conversely, if there were considerable heterogeneity, the willingness to switch would be rather low (Ross 1992). A high degree of homogeneity can thus not automatically be equated with a high propensity to behave collectively. From a dynamic perspective, asymmetries between the oligopolists are important because they constitute an incentive to cheat. But it would be premature to assume that it is only the larger firms that have an incentive to cheat; the smaller firm has more to gain, hence its incentives can be higher. For large firms, punishing the behaviour of others is relatively more costly. Punishment basically means that the punishing firms also increase quantity by reducing price. The losses in turnover – and profit – will, ceteris paribus, be higher for large firms. If the asymmetries lie in different production capacities, the larger firm has more incentives to cheat, since the possibility of the small firm punishing the large firm is limited because it cannot expand production indefinitely. Sunk costs can also constitute an incentive to cheat. Suppose one firm has made some investment involving sunk costs, increased its capacity, and tries to cheat the other firms in this way. Further suppose that the firm is detected cheating. Now, the other firms in the oligopolistic market can either also invest in additional capacity or refrain from punishment. Additional investment could lead to excess capacity, which leads to reductions in price and eventually in profits. It could thus be rational not to punish the deviator. But if a potential deviator anticipates that its competitors will not punish, then he has an incentive to deviate, as deviation has high chances of turning out to be profitable. Uncertainty concerning the development of market demand is another factor making successful collusion less likely. Those oligopolists expecting demand to increase will not be ready to accept any form of agreement, as they will try to secure as large a part of the growing market as possible (Rotemberg/Saloner 1986, 390). On the other hand, one can also argue that the expectation of a shrinking market will make collusion less likely: expecting smaller profits in the future, firms will try to hold on to high market shares today in order to realise profits. Any (tentative) collusion can be supposed to be very unstable because, due to the decrease in demand, punishment would be extraordinarily costly. It is hence unlikely, which means that successful collusion is unlikely in the first place. As should have become clear on the basis of these examples, the prediction whether one can expect collusion in a specific situation is very difficult to make. On the basis of game-theoretic arguments, it has become very clear that concluding inductively from the prevalence of some structural factors to a high probability of col-
Now, the other firms on the oligopolistic market can either also invest in additional capacity or refrain from punishment. Additional investment could lead to excess capacity, which leads to reductions in price and eventually profits. It could thus be rational not to punish the deviator. But if a potential deviator anticipates that its competitors will not punish, then it has an incentive to deviate, as deviation has high chances of turning out to be profitable. Uncertainty concerning the development of market demand is another factor making successful collusion less likely. Those oligopolists expecting demand to increase will not be ready to accept any form of agreement, as they will try to secure as large a part of the growing market as possible (Rotemberg/Saloner 1986, 390). On the other hand, one can also argue that the expectation of a shrinking market will make collusion less likely: expecting smaller profits in the future, firms will try to hold on to high market shares today in order to realise profits. In case there is (tentative) collusion, it can be supposed to be very unstable because, due to the decrease in demand, punishment would be extraordinarily costly. It is hence unlikely, which means that successful collusion is unlikely in the first place. As should have become clear on the basis of these examples, the prediction whether one can expect collusion in a specific situation is very difficult to make. On the basis of game-theoretic arguments, it has become very clear that concluding inductively from the prevalence of some structural factors to a high probability of collusion is highly problematic. Rather, the interplay of the relevant factors should be taken into account as explicitly as possible. This is, however, only possible to a limited extent. As predictions are highly sensitive to a number of assumptions, competition policy should be rather cautious when referring to collective dominance. Formulated as policy advice to the legislator: the discretionary room given to competition authorities with regard to collective dominance should be rather small.

4.3. Recent Trends in the Business Environment

In this section, possible consequences of the business trends described in Chapter III for the probability of successful collusion will be highlighted.

(1) Rapid Technological Change

In many sectors, product and innovation cycles have been substantially shortened. The periods during which oligopolists can learn of their interdependency have thus also shortened. This decreases, in turn, the probability of successful collusion. Rapid technological change is connected with higher degrees of uncertainty. As we have just seen, uncertainty makes collusion less likely.

(2) Increasing Mobility of Supply

Mobility in supply as described in Chapter III also decreases the probability of successful collusion. Increased mobility in supply makes it less likely that prices above marginal costs are sustainable, because new entrants might be attracted to the market. Since new entrants are not bound to any sort of agreement – whether explicit or implicit – they can realise higher profits by setting price at least a little below the oligopoly price. This will enable the entrant to attract substantial demand and induce competitive pressure on the incumbents. If the threat of entry is credible, it is sufficient to deter the incumbents from behaving collusively.

(3) Developments in Transport Costs

The secular reduction in transport costs makes parallel behaviour less stable.
The reduction of transport costs leads to an increase in the size of the geographical market. The stability of collusion is not only threatened by newcomers – as just discussed – but also by so-called fringe firms. Reduced transport costs make it possible for fringe firms to enter markets that have hitherto been dominated by a group of oligopolists and to put competitive pressure on them. This will make it more and more difficult for the oligopolists to ignore the fringe firms. This argument seems especially relevant with regard to suppliers from less developed countries. Traditionally, they have had lower labour costs, but these could not be cashed in due to high transport costs. This barrier has been substantially reduced over the last number of years. It is often in the degeneration phase of the product cycle that fringe firms can enter markets. This means that collusion during the degeneration phase has often become impossible. Traditional theory expected that this was the phase in which incentives to collude were highest. Drawing on game theory, one can expect that the expectation that collusion will no longer be possible in the degeneration phase will already reduce its likelihood in early phases of the product cycle.

(4) Internet

The effects of the Internet on the probability of successful collusion are rather ambivalent. The Internet reduces the information costs both for communication between firms and for communication between firms and consumers. There are a number of effects. The reduced costs of communication between firms increase transparency and thus increase the possibility of establishing market information systems. Market information systems can be handy devices to detect non-collusive behaviour, which would mean that collusion should have become easier (Schmidt 2000). On the other hand, transparency is also increased on the demand side of the market. Consumers are better informed concerning prices and qualities. Deviating firms can rapidly attract additional demand from consumers. Thanks to the Internet, barriers to entry have often been substantially reduced. This means that potential entrants exert higher pressure on the established firms. Ex ante, it is thus hard to tell how the Internet affects the probability of successful collusion.

(5) Homogenisation of Preferences

As already shown above, the effect of homogeneous goods on the probability of collusive behaviour is not unequivocal. Based on more traditional approaches, it was usually assumed that a high degree of homogeneity would increase oligopolistic interdependence and thus also the incentives for collusive behaviour. Game theory shows that homogeneous goods can be the cause of deviating behaviour, as consumer loyalty plays no role. Accordingly, collusive behaviour is not always probable with homogeneous goods. Due to the observable homogenisation of preferences, collusive behaviour has become less likely in a number of industries. Homogeneous preferences reduce the barriers to entry for small and medium-sized firms.
Oligopolists are thus subject to more intensive competition by outsiders. But there also seems to be a trend toward ever more differentiated products, such that one could almost draw a very simple distinction: concerning “basic utility”, preferences have become almost identical the world over, whereas with regard to “additional utility”, goods tend to become ever more individualised. Some 100 years ago, a grocery store in a workers' quarter would offer approximately 250 to 300 products; today, a hypermarket carries up to 40,000 products. Niche products have thus become more relevant. It is these niches that constitute a business opportunity for many small and medium-sized companies since – due to their size – they are often able to react faster to specific customer demands. Due to the high number of niches, it has become impossible for many large suppliers to cover all possible niches. The increased relevance of niches has consequences for the stability of collusive behaviour: oligopolists always have an incentive to react as quickly as possible to changed consumer preferences because that promises first-mover profits. The incentives to cheat are thus considerable. The increased differentiation of products simultaneously reduces the possibilities to sanction deviating behaviour, as it is not obvious which products belong to the same market and which do not. Collusion thus becomes less likely.

(6) Rapid Change of Consumption Patterns

If consumer behaviour is subject to rapid change, predictions concerning the behaviour of consumers will obviously become more difficult. As shown above, increased uncertainty is connected with fewer opportunities to implement and sustain collusive agreements because every oligopolist wants to react quickly to changed circumstances. Uncertainty is one of the single most important factors preventing collusion. Uncertainty can prevent oligopolists from recognising whether changed results are a consequence of non-collusive behaviour by some competitor or a consequence of changes in consumer behaviour. This makes the detection of non-collusive behaviour more difficult. On top of that, uncertainty reduces the possibility of efficient and credible punishment. But if detection and punishment are questionable, then market participants can be expected to be highly critical of entering into any form of agreement in the first place. Many of the trends in business have thus made collusion more difficult. Competition authorities should take these trends explicitly into account. It is especially noteworthy that the presence of a number of criteria that have traditionally been sufficient to suspect some sort of collusion is not sufficient anymore if the insights of game theory are taken seriously.

4.4. Current EU Practice

Twelve years into European merger control, there are a number of cases in which collective dominance has played an important role. At the beginning, it was unclear whether collective dominance formed part of European merger control at all, as it is not explicitly mentioned in Regulation 4064/89.
The criteria for evaluating collective dominance have thus evolved out of the cases decided by the European Commission. In this section, those criteria are presented and critically evaluated. Until the middle of 1991, the European Commission did not explicitly recognise the oligopoly problem of mergers. Although some of the markets were highly concentrated, the Commission concentrated on analysing single dominance. Mergers between equals were evaluated positively if they were expected to make the new entity more competitive in comparison to larger rivals. This notion is thus based on traditional thinking in structures. Connections to possible problems of oligopolistic behaviour were usually not drawn (Morgan 1996, 216). This has sometimes been referred to as the “oligopoly blind spot under the EC Merger Regulation” (Ridyard 1992, 163). In a second phase that lasted until the end of 1992, the problem of collective dominance got more attention. Aspects of oligopolistic parallel behaviour were mentioned in VARTA/BOSCH, ALCATEL/AEG KABEL, HENKEL/NOBEL and THORN EMI/VIRGIN MUSIC. They did not, however, play a central role. The first case in which the European Commission used the Merger Regulation in order to deal with collective dominance was NESTLÉ/PERRIER. This decision can be identified as the start of the third phase (Aigner 2001, 185). On its basis, the Commission started to systematically analyse not only single dominance but also collective dominance.21 NESTLÉ/PERRIER thus constitutes the beginning of a case practice that dominates until today.22 At first, the Commission analysed the possibility of collective dominance only in duopolies (NESTLÉ/PERRIER, KALI&SALZ/MDK/TREUHAND and GENCOR/LONRHO). Only recently did it begin to test for collective dominance with more than two actors in a market (PRICE WATERHOUSE/COOPERS & LYBRAND, AIRTOURS/FIRST CHOICE). In NESTLÉ/PERRIER, the Commission tried to identify those structural factors that would make an agreement between the newly created entity NESTLÉ/PERRIER and BSN – the major remaining competitor – likely. Among them were the high homogeneity of the products, high market transparency, the high maturity of the products and existing relations between the oligopolists on other markets. Ever since, whenever the structural factors have pointed to a high probability of collusive behaviour, the proposed mergers have only been passed under conditions and obligations or have even been prohibited altogether. In the assessment of collective dominance, dynamic factors only played a marginal role. In NESTLÉ/PERRIER, e.g., market shares were asymmetric (NESTLÉ/PERRIER 60% and BSN 20%), but incentives for cheating due to this asymmetry were not taken into account. The decision of the Commission in KALI&SALZ/MDK/TREUHAND was taken to the Court of Justice. The Court declared the decision of the Commission void because it had not given dynamic aspects their due place. Simply considering market shares and structural connections between oligopolists was not sufficient for assuming a high probability of parallel behaviour.
Instead, the relevance of differences in capacity as well as the role of fringe firms would have to be taken into account (Venit 1998, Bishop 1999). The Court's decision notwithstanding, the Commission kept on relying primarily on factors of market structure. From its decisions, one can infer that the Commission regards the following factors as being especially relevant:

– Slow economic growth
– Inelastic demand
– High degrees of concentration
– High degrees of market transparency
– High degree of (product) homogeneity
– Maturity of production technique
– High barriers to entry, and
– Structural connections between suppliers.
These structural factors still play a prominent role. Evidence of this can be found in the case of EXXON/MOBIL, in which the Commission published a checklist of structural factors that it uses in order to test for collective dominance.
In the third case in which collective dominance played an important role, namely GENCOR/LONRHO, the Commission relied heavily on market structure analysis. From an economic point of view, what was decisive in this case was the question whether the two duopolists would have been able to implicitly agree on the reduction of the quantity supplied and at what costs such a reduction would have been possible. For tacit collusion to succeed, it would have been crucial that both duopolists reduce their quantity simultaneously, i.e., neither of them should have deviated and used the reduced quantity of his rival to his own advantage. But the Commission stopped short of proving this. Recently, the Commission has, however, displayed a greater willingness to incorporate dynamic aspects into its decisions. Examples can be found in EXXON/MOBIL and PRICE WATERHOUSE/COOPERS & LYBRAND. It was especially in the last case that the Commission pointed out that there were substantial incentives to deviate from agreements although most of the checklist criteria indicated trouble. The Commission passed the merger without conditions or obligations. The Commission's decision in AIRTOURS/FIRST CHOICE clearly is a setback in this regard: the merger was prohibited on grounds of collective dominance. Here, neither the structural nor the more dynamic criteria were fulfilled. The decision thus collides both with decisions that were decided exclusively on structural grounds (such as NESTLÉ/PERRIER) and with those in which dynamic factors played a more important role. The assumptions that were made by the Commission with regard to product homogeneity, the transparency of the market and the existence of barriers to entry all appear highly controversial. The decision was already questionable solely on grounds of structural factors.
On top of that, no convincing argument showing the sustainability of collusive behaviour (i.e., detection and sanctioning possibilities) was offered.23 The decision in AIRTOURS/FIRST CHOICE can serve as an indicator that the Commission has not developed consistent criteria for evaluating the possibility of collective dominance. It has been discussed whether the arguments of the Commission did not really point at the unilateral effects24 of the merger, and whether arguments of collective dominance were only brought in in order to improve the legitimacy of the prohibition (Motta 2000). Unilateral effects used to be a problem for the European Commission, as the 1989 version of the Merger Regulation did not cover them. According to the Regulation, mergers could only be prohibited or allowed under conditions and obligations if they created or strengthened a dominant position in the Common Market or a substantial part thereof. This explains the tendency to use the notion of collective dominance in order to deal with unilateral effects. From an economic point of view, this is highly problematic, as unilateral effects have nothing to do with collective dominance. This practice was problematic with regard to predictability and has rightly been reformed. It can be concluded that no coherent picture concerning the decision practice of the European Commission with regard to collective dominance emerges. After first ignoring the issue, the Commission then turned to recognise structural factors. After the Court's decision in KALI&SALZ/MDK/TREUHAND, the preponderant occupation with structural factors was not over, but there were first signs of a more wide-ranging recognition of dynamic aspects, focusing on deviation, detection and punishment. Hence, the door to a more economic approach in the assessment of collective dominance seemed open. The Commission's decision in AIRTOURS/FIRST CHOICE constitutes a radical break with its own decision practice. After this decision, predictions concerning the ascertainment of collective dominance seem more difficult to make than ever. Even though the more recent decisions in UPM-KYMMENE/HAINDL and NORSKE SKOG/PARENCO/WALSUM indicate a return to more economically founded decisions, the future of the issue of collective dominance remains wide open. Decisions are thus connected with a high degree of uncertainty. When it comes to reforming European Merger Policy, collective dominance should therefore rank high on the agenda. In the Guidelines on horizontal mergers, the Commission deals with the issue of collective dominance – there called coordinated effects – in considerable detail. The procedure announced by the Commission is based on solid theoretical foundations. The Commission has indicated that it intends to pay more attention to the question whether tacit collusion can be expected to be stable over time. Here, a direct link can be made to research in Industrial Organisation, and Merger Policy would indeed move towards a more economics-based approach. Yet, the Guidelines could still be more concrete. The concrete conditions that have to be met for the Commission to conclude that coordinated effects exist are lacking. If at all, uncertainty will only be moderately reduced; correspondingly, predictability will only be moderately improved. The following reform proposals thus remain relevant even after the Guidelines have been implemented.

4.5. Reform Proposals

The evaluation of mergers that are connected with the danger of collective dominance is highly uncertain.
Up till now, no generally accepted economic approach that would be capable of adequately dealing with the complexities involved in cases of collective dominance has emerged. Nor has the Commission been able to deal with the limited available knowledge in a pragmatic fashion. The bias towards structure-based approaches is not appropriate for taking sufficiently into account the trends in the business environment that have occurred over the last number of years. The limited knowledge notwithstanding, legal certainty and predictability are necessary for all parties concerned. That is why we propose to introduce a standardised and transparent procedure. It should tightly constrain the discretionary leeway of the Commission. What criteria should be used in such a procedure? On the basis of both inductive and game-theoretically based industrial economics, a proposal concerning their possible contents is developed here. We propose a checking scheme that is based on three steps. Firstly, it should be checked whether an agreement among firms, no matter whether explicit or implicit, is likely at all. Secondly, it should be checked whether there is any chance that such an agreement could be stable over time. The first two steps deal with the interactions among the firms possibly participating in an agreement. The third step focuses on possible factors external to them that could effectively restrain their behaviour. The checking scheme could look like this:

Step 1: Agreement
– What factors are conducive to agreement among firms?
Step 2: Sustainability
– Are there any incentives to cheat, i.e., not to behave as the (explicit or implicit) agreement would suggest?
– Is it possible to detect deviating behaviour?
– Can a substantial punishment credibly be threatened?
Step 3: Dominance Reducing Factors
– How strong would remaining competitors be?
– How strong is the bargaining position of the other side of the market (usually the demand side)?
– How strong is potential competition?
This is only a rough outline of the scheme proposed here. We now turn to present it in a little more detail, focusing on the issues that would have to be dealt with in the form of questions.

Step 1: Agreement
– What is the degree of concentration? This should preferably be measured using the Hirschman-Herfindahl Index, as this is the best known and most widely used index.
– How volatile have market shares been in the past?
– How high is the transparency among suppliers concerning their prices, their capacities, their quantities produced and their qualities?
– How high is the degree of product homogeneity?
– Are production conditions similar among producers and subject to similar constraints?
– What are the financial resources that the oligopolists have at their disposal?
– What is the price-elasticity of demand?
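The concentration question can be answered mechanically. A minimal sketch of the Hirschman-Herfindahl Index follows; the example shares are loosely modelled on the NESTLÉ/PERRIER figures cited earlier (60% and 20%), while the split of the remaining fringe is invented for illustration.

```python
def hhi(shares_percent):
    """Hirschman-Herfindahl Index: the sum of squared market shares.
    With shares in percent, the index ranges from near 0 (atomistic
    market) up to 10,000 (monopoly)."""
    assert abs(sum(shares_percent) - 100.0) < 1e-6, "shares must sum to 100"
    return sum(s * s for s in shares_percent)

# Asymmetric duopoly of 60% and 20%, plus an invented 20% fringe:
print(hhi([60, 20, 10, 5, 5]))  # 4150
```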
Step 2: Sustainability

Incentives to Cheat
– What is the distribution of market shares?
– How are production capacities distributed among the oligopolists?
– Do the oligopolists enjoy differential access to input markets or with regard to financial resources?
These questions are meant to ascertain possibly existing asymmetries between the oligopolists. As spelt out in greater detail above, asymmetries are an important source of deviating behaviour.

– How high is the transparency concerning supply conditions on the demand side?

If transparency on the demand side of the market is high, this can also constitute an incentive to cheat, as the deviating firm can hope to secure a substantial slice of the market if it sells at lower prices than its competitors.

– Does product homogeneity constitute an incentive to cheat?

High product homogeneity means that consumers do not have preferences concerning specific producers. This means that they will easily switch between producers. This can, in turn, constitute an incentive to deviate, since a deviating firm might be able to gain additional market share.

– How important is uncertainty among suppliers concerning the future development of the market?
– What is the average length of the product cycle in the industry under consideration?
These questions centre around possible effects of globalisation on collusive behaviour. Shorter product cycles and a high degree of uncertainty reduce the probability of collusive behaviour. Since learning processes concerning oligopolistic interdependence are shorter, the perception of mutual interdependence is lower, which reduces the propensity to collude. Shorter product cycles also mean that the interaction situation the firms find themselves in is not that of an indefinitely repeated game any more. In the language of game theory: there is a last round, and the participating firms know that it will occur sooner rather than later. This means that many of the more co-operative equilibria have become less likely. Similarly, a high degree of uncertainty reduces the likelihood of stable collusion. If, e.g., demand falls unexpectedly but substantially, all participants hope to secure short-term profits by reneging on the agreement, which would – due to the fall in demand – not be stable any more in the future anyway (Rotemberg/Saloner 1986, 421). But the opposite also holds: an unexpected increase in demand constitutes a strong incentive for all oligopolists to secure some slice of the larger pie.
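The "last round" logic can be spelled out as a toy backward induction: in a finitely repeated game whose stage game has a unique non-collusive equilibrium, cooperation unravels from the known final round back to the first. This is a pedagogical sketch, not an empirical claim about any particular industry.

```python
def finitely_repeated_collusion(rounds):
    """Backward induction: in the final round there is no future left
    with which to punish, so deviating is dominant; anticipating this,
    firms also deviate in the round before, and so on back to round 1."""
    will_collude = [None] * rounds
    will_collude[-1] = False                 # last round: deviation dominates
    for t in range(rounds - 2, -1, -1):
        # Cooperating in round t only pays if it can be rewarded or
        # punished later, which requires cooperation surviving in t+1.
        will_collude[t] = will_collude[t + 1]
    return will_collude

print(finitely_repeated_collusion(5))  # [False, False, False, False, False]
```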
– Are investments in capacity expansion connected with sunk costs that generate first-mover advantages?
This question deals with the issue whether sunk costs create an endogenous barrier to entry. If additional investment implies the creation of additional overcapacities, such investment is unprofitable. The other oligopolists thus have few incentives to sanction the deviator by investing themselves. Instead, they will adjust to the overcapacities and do without sanctioning the deviator. Such a situation is thus characterised by a race to be the first mover (in this case: cheater), and sustainable collective dominance seems to be highly unlikely.

Detection
– Are there market information systems that bundle information on the side of the suppliers?
– If so, are they updated on a daily basis – or with a relatively short time lag?
– How high are the costs of running these systems?
– Do most-favoured-customer clauses exist?
– How high is the degree of concentration on the demand side of the market?
Functioning market information systems are a crucial prerequisite for high transparency between suppliers. High transparency is, in turn, a prerequisite for being able to detect deviating behaviour fast. The inverse does, however, not hold. The existence of market information systems is not a sufficient condition for stable collusion as long as other factors exist that can offset their transparency-enhancing effects. This can, e.g., be uncertainty with regard to the development of demand. In such cases, successful collusion has empirically been rather irrelevant (Cason 1994). Most-favoured-customer clauses, however, facilitate collusive behaviour substantially. They reduce uncertainty among oligopolists by making hidden price differentiation to different groups of customers more difficult (Salop 1986). A low degree of concentration on the demand side increases the possibilities of hidden price differentiation, which makes the detection of deviating behaviour more difficult and thus reduces the possibility of stable collusion.

Punishment
– What is the detection lag, i.e., the amount of time that passes between deviating behaviour by one of the firms and its detection by its rivals?
– Are capacity modifications possible both on technical and market grounds?
– How relevant are the costs of short-term capacity modifications?
– Are modifications in capacity irreversible?
– What role do long-term contracts play?
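The role of the detection lag can be quantified in a standard repeated-game framework: if a deviation goes unnoticed for several periods, the deviator pockets the deviation profit for that long before punishment starts, so sustaining collusion requires more patient firms. The per-period profits below are invented for illustration.

```python
def critical_delta(pi_c, pi_d, pi_p, lag=1):
    """Collusion (pi_c forever) beats deviating (pi_d for `lag` periods,
    then punishment profit pi_p forever) iff
        pi_c/(1-d) >= pi_d*(1-d**lag)/(1-d) + d**lag*pi_p/(1-d),
    which simplifies to d**lag >= (pi_d - pi_c)/(pi_d - pi_p)."""
    return ((pi_d - pi_c) / (pi_d - pi_p)) ** (1.0 / lag)

# Illustrative profits: 10 colluding, 18 deviating, 2 under punishment.
for lag in (1, 2, 4):
    print(lag, round(critical_delta(10, 18, 2, lag), 3))
# Prints 0.5, 0.707 and 0.841: the slower the detection, the more
# patient firms must be for collusion to remain stable.
```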
The focus here is on the possibility that deviating behaviour can be effectively punished. The efficiency of sanctioning depends on the flexibility of the other oligopolists in modifying their capacities. If such flexibility does not exist or is low, it appears questionable whether a deviating firm can be punished effectively, and stable collusion is unlikely.

Step 3: Dominance Reducing Factors

Here, one would look for factors that constrain the exercise of a dominant position given that collusion seems, at least theoretically and after having checked the first two steps, possible. Here, the structure of the demand side is an important factor. It has been shown that even quite modest market shares (10-15%) by actors on the demand side can already have a constraining effect (Kerber 1994, 69). Additionally, it should be asked whether potential competition has a disciplining effect on the oligopolists. The following questions seem relevant:

– Are there state-run barriers to entry?
– What effects does deregulation have on the industry under consideration?
– How relevant are sunk costs?
– How high is the flexibility on the supply side?
– How important are absolute cost advantages?
– What role do strategic advantages such as economies of scale, advertising, and capital requirements play?
The Guidelines should prescribe these three steps of the proposed scheme in a compulsory fashion for all cases in which collective dominance is suspected to be potentially relevant. All questions should be standardised and systematically checked. Collective dominance is only a danger if there is (1) a high probability of collusion, connected with (2) low incentives to deviate from the agreement, (3) a high probability of fast detection of deviating behaviour, and (4) the availability of effective sanctioning mechanisms. The third and final step consists of a test whether collusive behaviour can actually be imposed on the market, i.e., whether there are no factors beyond the control of the members of the oligopoly that can prevent successful collusion. If this scheme were rigorously applied, the transparency, legal certainty, and predictability of the Commission's decisions with regard to collective dominance – or coordinated effects – could be substantially improved.
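To illustrate how rigid such a standardised procedure could be, the three steps can be sketched as a simple conjunctive test. The field names and the reduction of each checklist question to a yes/no answer below are our own simplification for illustration, not part of the Guidelines.

```python
from dataclasses import dataclass

@dataclass
class CollectiveDominanceScreen:
    """Stylised sketch of the three-step scheme proposed above."""
    agreement_likely: bool         # Step 1: structural factors favour agreement
    low_cheating_incentives: bool  # Step 2: few incentives to deviate
    fast_detection: bool           # Step 2: deviations detected quickly
    credible_punishment: bool      # Step 2: effective sanctions available
    no_external_constraint: bool   # Step 3: neither buyers nor entrants discipline

    def raises_concern(self) -> bool:
        # Collective dominance is a danger only if every step points in
        # the same direction; a single negative answer clears the merger
        # on this ground.
        return all((self.agreement_likely, self.low_cheating_incentives,
                    self.fast_detection, self.credible_punishment,
                    self.no_external_constraint))

# Example: strong structural indicators, but a powerful buyer side.
case = CollectiveDominanceScreen(True, True, True, True,
                                 no_external_constraint=False)
print(case.raises_concern())  # False
```

The conjunctive design mirrors the text's argument: the discretionary leeway of the authority shrinks because every question must be answered, and one documented negative answer is decisive.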
CHAPTER V CASE STUDIES
This chapter describes the concrete decision-making of the European Commission, based on a number of cases, and subsequently evaluates the Commission's approach from an economic point of view. Based on these specific cases, we consider to what degree the actual decision-making diverges from the procedures developed in the last chapter. The discussion of the cases serves to identify weaknesses and inconsistencies. It thereby illustrates the many possibilities for improving European merger policy. At centre stage are cases concerning barriers to entry and collective dominance/coordinated effects. It is in the nature of case studies that not all pertinent cases are discussed, but only a sample. Some criteria guiding the selection of cases are therefore needed. Here, two criteria are paramount: some cases were chosen because they seemed to display great similarities either with regard to the relevant markets or with regard to the competitive concerns stated by the Commission. The second criterion was that the cases displayed some sort of precedent effect and could have been instrumental in shaping the decision practice in more than just the specific case under scrutiny. Cases in which barriers to entry played an important role are discussed first. The Commission has been called on to analyse more than one merger involving the markets for buses and heavy trucks. It was decided to look at some of these cases because, prima facie, the issues analysed seem very similar. The cases dealt with in some detail are MERCEDES-BENZ/KÄSSBOHRER, VOLVO/SCANIA and MAN/AUWÄRTER. Furthermore, two cases dealing with tissue products are also discussed (SCA/METSÄ TISSUE and SCA HYGIENE PRODUCTS/CARTOINVEST). Additionally, the case BASF/BAYER/HOECHST/DYSTAR has been included because in this case the Commission delineated the global market as the relevant geographic market due to an absence of barriers to entry.
Finally, the increased relevance of telecommunications services and the particular conditions present in these markets make a specific assessment of these markets necessary, and hence the decision in TELIA/TELENOR is discussed with regard to the role of potential competition. In the second part of the chapter, the most important decisions with regard to collective dominance/coordinated effects will be analysed. These are the cases concerning NESTLÉ/PERRIER, KALI&SALZ/MDK/TREUHAND, GENCOR/LONRHO, EXXON/MOBIL, AIRTOURS/FIRST CHOICE, UPM-KYMMENE/HAINDL, and NORSKE SKOG/PARENCO/WALSUM.
1. ASSESSMENT OF BARRIERS TO ENTRY IN EUROPEAN MERGER CONTROL: THE CASES OF VOLVO/SCANIA, MERCEDES-BENZ/KÄSSBOHRER, AND MAN/AUWÄRTER In March 2000, the European Commission declared the merger between VOLVO and SCANIA25, two Swedish manufacturers of trucks and buses, incompatible with the Single Market because their combined operations would create or strengthen a market dominant position in the markets for heavy trucks and buses. Five years earlier, in spite of considerable increases in market shares in the German bus market, the Commission had declared the acquisition of KÄSSBOHRER by MERCEDES-BENZ26 compatible with the Single Market with reference to potential competition, which would lead to a progressive opening of the German market. Although both cases are concerned with the same, or at least similar, markets, the Commission reached different conclusions in assessing barriers to entry: while in MERCEDES-BENZ/KÄSSBOHRER, the Commission considered barriers to entry into the German bus market as not sufficiently high, in VOLVO/SCANIA, when considering the Scandinavian markets, it reached the opposite conclusion. The following case study scrutinises the decisions of the Commission. It will be shown that the assessment of barriers to entry can be characterised as inconsistent. The first section describes the arguments presented by the Commission in the case of MERCEDES-BENZ/KÄSSBOHRER. The second section describes the decision in the case of VOLVO/SCANIA. The third section deals briefly with the Commission’s decision in MAN/AUWÄRTER, which is of particular interest because the Commission followed its earlier ruling in MERCEDES-BENZ/KÄSSBOHRER quite closely. Section four contains an economic assessment of the decisions taken in these cases. 1.1. Barriers to Entry in the Case of MERCEDES-BENZ/KÄSSBOHRER In 1995, the Commission cleared the proposed merger between MERCEDES-BENZ and KÄSSBOHRER, finding it compatible with the Single Market.
The merger concerned the market for buses, divided into three individual segments: city buses, inter-city buses and touring coaches. With respect to the relevant geographic market, the Commission declared that, on the basis of existing barriers to entry, the conditions for competition in Germany were different from those in other countries of the European Economic Area (EEA). Therefore, the Commission considered the German market as the relevant geographical market. In its in-depth investigation, the Commission identified the following barriers to entry:
– service and repair networks as tangible barriers; and
– established customer-supplier relationships and brand loyalty as intangible barriers.
It was known that the proposed merger would lead to considerable increases in market shares in Germany (amounting to 44.5% in the segment for city buses, 74% for
inter-city buses and 54% for touring coaches). It therefore raised serious doubts as to its compatibility with the Single Market. However, high market shares do not in themselves constitute a dominant position. They do not allow a dominant position to be assumed if other structural factors are present which, in the foreseeable future, may alter the conditions of competition and justify a more cautious view of the significance of the market share of the merged companies. In this context, the Commission focused its investigation on the conditions for potential competition by assessing entry barriers. It identified foreign bus manufacturers such as Volvo, Scania, Renault, Van Hool and Bova as sources of potential competition. In its assessment, the Commission concluded that the barriers to entry it had identified were not to be seen as insurmountably high. The Commission cited three arguments for this evaluation. Firstly, from a technical point of view, a producer of either city buses or touring coaches would find the entry barriers to the inter-city bus market relatively low. In general, the different types of buses can normally be produced in the same plant using the same machines, and there are many common components between them. It can therefore be concluded that, in the market for buses, no technological barriers to entry exist. All European manufacturers are technologically able to enter into the various segments of the bus markets. Differences between national standards should not be considered as insurmountable barriers to entry, because all manufacturers usually take these differences into account in the design of a new bus. Secondly, the European Commission concluded in MERCEDES-BENZ/ KÄSSBOHRER that distribution and service costs are not to be seen as an insurmountable barrier to entry.
The Commission showed that bus markets are contestable by pointing to the example of BOVA, which had successfully entered the German bus market on the strength of seven sales representatives and a number of contracted garages providing service. Thirdly, in considering the relevance of customer loyalty as a barrier to entry, the Commission noted that the purchasing behaviour of private bus operators is based less and less on personal relationships and more on business criteria. Moreover, brand loyalty is becoming weaker inasmuch as the fleet policy of private bus operators is increasingly determined by short service cycles. Additionally, fleets are adapted to changes in market demand on a regular basis. This offers entrants the opportunity to capture market shares fast. Moreover, in the market for city buses – and in part also for inter-city buses – public procurement is becoming an increasingly relevant factor. Public procurement would effectively open up the German market. According to the Commission, it is the key factor for increasing the pressure induced by potential competition. As has been pointed out in considerable detail in the theoretical section on potential competition, what matters is not that entry actually occurs, but that the threat of entry is credible. This alone is sufficient to constrain the behaviour of incumbent firms. In view of these arguments, the Commission concluded that, in particular, potential competition from foreign suppliers would ensure that the new entity MERCEDES-BENZ/KÄSSBOHRER would not be in a position to act independently of its competitors and customers. Therefore, the Commission declared the proposed merger compatible with the Single Market. 1.2. Barriers to Entry in the Case of VOLVO/SCANIA The proposed merger between Volvo and Scania, which the Commission prohibited, concerned two product markets: the market for heavy trucks and the market for buses. We will first describe the Commission’s investigation and assessment of barriers to entry in the market for heavy trucks and will then turn to the investigation in the market for buses. 1.2.1. The Market for Heavy Trucks Following a previous decision in the case of RENAULT/VOLVO27, the Commission identified three market segments according to the truck’s gross vehicle weight: the light-duty segment (below 5 tons), the medium-duty segment (5-16 tons), and the heavy-duty segment (above 16 tons). Trucks with a gross vehicle weight above 16 tons are not normally considered by customers to be interchangeable with or substitutable for trucks belonging to the intermediate or lower segment. Therefore, the Commission concluded that the category of heavy trucks with more than 16 tons could be considered to be a single relevant product market. Neither Volvo nor Scania produces trucks below 7 tons. Volvo is present in the medium-duty segment (between 7 and 16 tons), whereas Scania is not. Both firms are active in the market for trucks above 16 tons. The proposed merger would have led to considerable increases in market shares in the segment for trucks above 16 tons in Scandinavia. In Sweden, it would have meant a combined market share of more than 90%, in Finland of more than 60%, in Denmark 59% and in Norway 70%. The merger between Volvo and Scania would thus have created Europe’s largest producer of heavy trucks over 16 tons, with high market shares in the Scandinavian countries. With respect to the definition of the relevant geographical market, the Commission assumed that markets for heavy trucks are still national in scope.
The Commission listed a number of reasons for making this assumption:
– Price levels differ significantly across Member States;
– there are considerable differences between customer preferences in Member States;
– despite a certain degree of harmonisation achieved at the European level, technical requirements vary from country to country;
– purchasing is mainly carried out on a national basis.
The Commission therefore concluded that the market for heavy trucks remains national, such that a merged operation between VOLVO and SCANIA would create or strengthen market dominant positions in Sweden, Norway, Finland and Ireland.
The Commission considered the pressure from potential competition as not sufficiently high to limit effectively the market power of the merged firms. The markets could be characterised by the existence of high tangible and intangible barriers to entry. In its investigation, the Commission identified the following barriers to entry. In Sweden there is a specific regulatory barrier to entry, the so-called “cab crash test”. It constitutes a significant barrier to entry into the Swedish market for heavy trucks since the additional costs of modifying a truck to meet Swedish safety regulations amount to approximately EUR 4,000. Moreover, the Commission pointed out that a strong service network is essential for any truck manufacturer to become truly competitive and that VOLVO/SCANIA would have an additional advantage in this respect, given the extensive coverage of their service network, especially in the geographically relevant markets in which they would secure dominant positions. The Commission thus identified service networks as tangible barriers to entry. It argued that the strength of a service network would be an important determinant of the buying decision. Indeed, the difficulties in establishing an after-sales network offering sufficient geographical coverage have been described as one of the main reasons for the very limited market entry by non-domestic producers like DAIMLERCHRYSLER, MAN, RENAULT or IVECO. A new entrant to the market would need at least five years to establish a sufficiently large network. The associated costs, for instance in Sweden, have been estimated at approximately EUR 20 million. Other costs that would have to be incurred by a new entrant to penetrate the market effectively include training for salesmen and workshop technicians, as well as demo vehicles and demo drivers (altogether EUR 5 million).
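Taken together, the cost figures cited by the Commission permit a rough back-of-the-envelope calculation of entry costs for the Swedish market. The sketch below is illustrative only; in particular, it assumes (our reading of the decision's note on adjacent segments, reported in the text) that the ca. 50% cost reduction for a manufacturer already present in other truck segments applies to the combined network and training costs:

```python
# Back-of-the-envelope entry costs for the Swedish heavy-truck market,
# using the figures cited in the VOLVO/SCANIA decision (all EUR).
# Assumption (ours): the ca. 50% reduction for manufacturers already
# present in adjacent truck segments applies to both cost items.

NETWORK_COST = 20_000_000       # after-sales network in Sweden
TRAINING_AND_DEMO = 5_000_000   # salesmen, technicians, demo vehicles/drivers

def entry_cost(already_in_adjacent_segment: bool) -> int:
    """Estimated cost of effective market entry, in EUR."""
    base = NETWORK_COST + TRAINING_AND_DEMO
    return base // 2 if already_in_adjacent_segment else base
```

A greenfield entrant would thus face roughly EUR 25 million, and a manufacturer already active in adjacent segments about half of that – figures that are relevant to the later question of whether these barriers can plausibly be called insurmountable.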
(Note: where a truck manufacturer is already present in other segments of the truck market, such as the medium-duty segment, the cost of extending coverage to a light/medium truck network would amount to ca. 50% of the costs indicated above). The Commission argued that high customer loyalty constitutes an intangible barrier to entry in the market for heavy trucks in the Scandinavian countries. Over time, Volvo and Scania have built up customer loyalty there. Any competitor would face significant difficulties in finding efficient and reliable dealer/service points in this area.28 This would represent a severe barrier because dealer/service points are traditionally linked to their national suppliers, who can offer the highest volume of business and therefore a better return on the dealer’s investment. The Commission concluded that it would be highly unlikely that potential competitors like DAIMLERCHRYSLER, RENAULT, IVECO or MAN would be able to prevent “New Volvo” from exercising increased market power resulting from the acquisition of its only significant competitor in the Scandinavian countries. 1.2.2. The Market for Buses The proposed merger between Volvo and Scania also concerned the markets for buses. Analogous to the former decision in MERCEDES-BENZ/KÄSSBOHRER, the European Commission distinguished three categories of buses: city buses, inter-city buses and touring coaches. The different requirements of the different types of
transport service mean that buses are heterogeneous products and belong to different product markets. With respect to the definition of the relevant geographical markets, the Commission assumed that markets for buses, especially for city buses, are still national in scope. The Commission listed a number of reasons for its assumption:
– Price levels differ significantly across Member States;
– purchasing is carried out nationally and purchasing habits differ between Member States;
– technical requirements vary between Member States.
Therefore, the Commission concluded that the market for buses and its three individual segments still remain national, so that the merger between Volvo and Scania would have created or strengthened a market dominant position in the following markets:
– the market for touring coaches in Finland and the United Kingdom;
– the market for inter-city buses in Sweden, Denmark, Norway and Finland;
– the market for city buses in Sweden, Denmark, Norway, Finland and Ireland.
In these markets, the Commission considered the pressure of potential competition as insufficient to limit the market power of the merged firms effectively.29 The markets could be characterised by the existence of high tangible and intangible barriers to entry. In its investigation, the Commission identified the same barriers to entry as in the market for heavy trucks: dealer and distribution networks, and high customer loyalty. In the assessment of barriers to entry in the market for buses, the Commission deviated completely from its former decision in MERCEDES-BENZ/KÄSSBOHRER. Dealer and distribution networks, as well as high customer loyalty, were now interpreted as high barriers to entry. The Commission concluded that the widespread nature of the VOLVO and SCANIA service networks in the Scandinavian countries would act as a barrier to entry for foreign suppliers. In particular, the high costs of establishing sales and after-sales organisations, combined with the limited size of the Scandinavian markets, would prevent effective market entry. High levels of brand loyalty would make this effect even stronger. The Commission concluded that the proposed merger between VOLVO and SCANIA would create or strengthen a market dominant position and declared it incompatible with the Single Market. 1.3. MAN/AUWÄRTER In 2001, there was a third case in which the Commission had to make a decision with regard to bus markets, this time concerning the proposed acquisition of Gottlob
Auwärter by MAN.30 The proposed merger had effects on all three relevant bus markets, namely city buses, inter-city buses and touring coaches. The Commission did not explicitly address the question of the relevant geographical market since it argued that the merger would not lead to a dominant position in any of the three segments. In its assessment concerning the relevance of barriers to entry, the Commission followed its own precedent established in MERCEDES-BENZ/KÄSSBOHRER. An explicit assessment was thus not carried out. What was probably more important for the Commission was the issue of whether the two largest manufacturers of buses in Germany (namely MERCEDES-BENZ/KÄSSBOHRER and MAN/AUWÄRTER) would collectively dominate the bus markets, given that their combined market share in the segment for city buses was 97%, in the segment for inter-city buses 93%, and in the segment for touring coaches 80%. But due to the lack of symmetry between the two duopolists, the Commission concluded that this would not be the case. The merger was thus cleared without remedies. With regard to barriers to entry, the issue that we are dealing with in this section, the case of MAN/AUWÄRTER does not allow us to draw any novel conclusions concerning the approach of the Commission. The evaluation of the Commission’s decisions presented in the next section will therefore be confined to the cases of MERCEDES-BENZ/KÄSSBOHRER and VOLVO/SCANIA. 1.4. Economic Assessment If we compare the Commission’s two decisions, considerable differences with regard to the assessment of barriers to entry can be observed. While in MERCEDES-BENZ/KÄSSBOHRER, dealer and service networks as well as brand loyalty are not interpreted as insurmountable barriers to entry, in VOLVO/SCANIA the investigation led to the opposite conclusion.
We now turn to economic theory and deal with the question under which conditions dealer and service networks, as well as brand loyalty, can be interpreted as important barriers to entry. As described in chapter IV in more detail, the academic debate about barriers to entry has focused on two principal approaches. The first is attributed to Bain (1956), who defined entry barriers as encompassing any market condition that enables an incumbent firm to enjoy rents without attracting new entrants. The second is associated with Stigler (1968), who defined entry barriers as production costs which must be borne by firms seeking to enter an industry, but not by incumbents. Bain’s approach includes a broader range of obstacles, particularly by evaluating the market as it exists today rather than by asking whether the barrier is a cost that newcomers must incur and incumbents did not face when they entered. As has been shown in chapter IV, the level of costs that have to be sunk before one can enter an industry is crucial in determining whether or not a potential entrant actually decides to enter a market. In MERCEDES-BENZ/KÄSSBOHRER and VOLVO/SCANIA, the Commission identified dealer and service networks and customer loyalty as barriers to entry in the markets for buses and heavy trucks. In VOLVO/SCANIA, the Commission asserted that these entry barriers were to be
seen as insurmountable. In the following section, this assertion will be examined, taking the insights of economic theory explicitly into account. Dealer and service networks, as well as customer or brand loyalty, are barriers to entry resulting from first mover advantages. They lead to an asymmetry between the incumbent and potential competitors. As a first step, we will discuss the importance of dealer and service networks as a barrier to entry. It is undisputed that dealer and service networks can be seen as a barrier to entry (Spence 1979). But the question is how relevant such networks are in the cases under scrutiny. From the NIO, we have learned that the crucial question to ask, in order to evaluate the importance of such networks as barriers to entry, is whether the necessary investments are sunk. In order to address this question, we need to separate the reversible parts of an investment from those that are irreversible. In VOLVO/SCANIA, the Commission did not explicitly examine the irreversibility of investments in dealer and service networks. It can, however, be assumed that large parts of such investments are not sunk. The investments in dealer stations, demo vehicles and promotion campaigns are only partially irreversible. Since the Commission alleged that dealer and service networks are important barriers to entry, it should have shown which parts of the investments are sunk. Only where investments in dealer and service networks are combined with high irreversibility can it be concluded that the lack of such networks is a serious obstacle to market entry. In VOLVO/SCANIA, it remains doubtful that the investments in networks can be characterised as irreversible. In assessing VOLVO/SCANIA with regard to the market for heavy trucks, it should be taken into consideration that all potential competitors (except MAN) are active in other segments of the truck and bus markets in the Scandinavian countries.
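The underlying entry calculus – only the irreversible share of the investment deters entry, since reversible outlays can be recovered on exit – can be illustrated with a minimal sketch (all figures and names are hypothetical):

```python
# Stylised entry decision under sunk costs. Only the irreversible
# (sunk) fraction of the required investment is at risk for the
# entrant; reversible outlays can be recovered if entry fails.
# All figures are hypothetical illustrations.

def entry_is_profitable(expected_post_entry_profit: float,
                        total_investment: float,
                        sunk_share: float) -> bool:
    """sunk_share in [0, 1]: fraction of the investment that is irreversible."""
    sunk_cost = total_investment * sunk_share
    return expected_post_entry_profit > sunk_cost

# With largely reversible network investments (low sunk_share), entry
# pays even for modest expected profits; with high irreversibility the
# same expected profits no longer justify entry.
```

On this logic, the Commission's assertion that dealer and service networks deter entry stands or falls with the – unexamined – sunk share of the investment.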
For example, DAIMLERCHRYSLER has a market share of 30% in the market for medium-duty trucks, and its market share in respect of touring coaches is even higher, namely 40%. It can safely be assumed that existing dealer and service stations for medium-duty trucks or touring coaches could be extended to cover servicing heavy trucks at only moderate cost. Moreover, the production of trucks is characterised by a high level of technical standardisation, so that the differences between various kinds of trucks are of minor importance. It is thus possible to extend to heavy trucks the dealer and service stations that already exist for other segments. Another option is to create dealer and service stations from scratch. Competitors such as DAIMLERCHRYSLER intend to create mobile service centres for heavy trucks in Sweden. Such mobile centres could be seen as a substitute for service stations. It is important to bear in mind that not only the creation of dealer and service networks is costly but also their maintenance. This means that incumbents with existing networks also have to bear substantial costs, and thus the cost differential between newcomers and incumbents is reduced. Turning to the question of customer or brand loyalty as a barrier to entry, we now discuss under what circumstances goodwill and reputation may create such barriers. Schmalensee (1981), for example, argued that pioneering brands enjoy a first mover advantage.
The economic effects of goodwill and reputation can be shown by drawing on the doubly bent demand curve (see Figure 13; Scherer/Ross 1990).

[Figure 13: The doubly bent demand curve – price on the vertical axis, quantity on the horizontal axis; the individual demand curve is kinked at price levels 1 and 2]
The doubly bent demand curve shows the individual demand enjoyed by a firm that can realise goodwill or reputation effects. Between price level 1 and price level 2, the individual demand curve is less price-elastic than above or below these levels. This area characterises goodwill or reputation effects. Between these levels, the firm can increase prices and will only lose moderate demand because customers remain loyal to the brand. But the loyalty effect only works within the span between price levels 1 and 2. As soon as the firm increases prices above level 1, loyalty no longer affects the consumption decision of customers. As soon as the firm decreases its prices below level 2, it can gain new customers. The model of the doubly bent demand curve shows that customer loyalty effects operate within tight limits. In order to analyse whether customer loyalty creates an entry barrier, one needs a more detailed examination of consumer behaviour. What are the critical prices and how large is the inelastic area? If, for example, there were a relatively small area of low price elasticity, customer loyalty would be negligible as a barrier to entry. In this context, it is important to distinguish between consumer and investment goods. With regard to consumer goods, customer loyalty can at times be very important (large area of price-inelastic demand function). With regard to investment goods, it will, as a rule of thumb, be less important (small area of price-inelastic demand function). The demand concerning investment goods is based on business considerations such as price and economy in operation. There is another reason for being cautious in interpreting customer loyalty as a barrier to entry. In economic theory, the effects of customer loyalty on barriers to entry are ambiguous. There are various models showing that where entry actually occurs, incumbents with a large share of loyal customers are less aggressive in price competition than incumbents with a lower share of loyal customers (Schmalensee 1983 and Fudenberg/Tirole 1984, 363). This is based on the assumption that if entry occurs, the incumbent faces two kinds of customers: regular customers with high loyalty and other customers with low loyalty. For the incumbent, a strategy of price discrimination is rational. The incumbent would set high prices for loyal customers (realising monopoly profits) and low prices for other customers (having to compete with the entrant). Should price discrimination between regular customers and other customers be impossible, the incumbent has to set an average price. If the share of regular customers were high, the average price would be correspondingly high. Therefore, entry into such a market would be easier than in cases where incumbents have fewer regular customers. In the concrete case of VOLVO/SCANIA, the assessment of customer loyalty as an entry barrier is ambiguous. We would first have to identify the size of the area between the two bends of the demand curve. With regard to the investment character of trucks and buses, we would not expect this area to be very sizeable. This can be assumed because other potential competitors like DaimlerChrysler, Renault and Iveco also enjoy excellent reputations in the EEA. Secondly, it should be conceded that the incumbent VOLVO/SCANIA is able to realise price discrimination in the market for buses and trucks.
Therefore, it cannot be assumed that, if new entry occurs, the new entity would be less aggressive in price competition. It thus remains open whether customer loyalty should be seen as a serious obstacle to entry into the markets for buses and heavy trucks. It also remains unclear why the Commission assessed customer loyalty in MERCEDES-BENZ/KÄSSBOHRER as less important than in VOLVO/SCANIA. In MERCEDES-BENZ/KÄSSBOHRER, the Commission pointed out that the purchasing behaviour of private bus operators is based less and less on personal relationships and more on business considerations. Furthermore, in the market for city buses, the increasing relevance of public procurement should be taken into consideration. In VOLVO/SCANIA, these arguments were not explicitly addressed. This examination of the Commission’s reasoning and decisions with regard to the planned mergers of MERCEDES-BENZ/KÄSSBOHRER and VOLVO/SCANIA has shown that there are considerable differences in the assessment of barriers to entry in identical, or at least similar, markets. To make European merger control more predictable, some refinement of the economic analysis in the assessment of barriers to entry appears necessary.
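The average-price argument discussed above can be made concrete with a small numerical sketch (prices and shares are hypothetical, chosen only to illustrate the mechanism):

```python
# If the incumbent cannot price-discriminate, it charges a
# share-weighted average of the loyal (high) price and the contested
# (low) price. A larger loyal share pulls the average up, leaving more
# room for a lower-priced entrant. All numbers are hypothetical.

def average_price(loyal_share: float, p_loyal: float, p_contested: float) -> float:
    """Weighted average price when price discrimination is impossible."""
    return loyal_share * p_loyal + (1 - loyal_share) * p_contested

# With 80% loyal customers the average price stays close to the loyal
# (monopoly) price; with only 20% it is pulled towards the contested price.
```

This is why, somewhat counter-intuitively, a high share of loyal customers can make entry easier rather than harder – provided the incumbent cannot price-discriminate.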
2. ASSESSMENT OF BARRIERS TO ENTRY IN EUROPEAN MERGER CONTROL: THE CASES OF SCA/METSÄ TISSUE AND SCA HYGIENE PRODUCTS/CARTOINVEST In January 2001, the European Commission prohibited the merger between SCA and METSÄ TISSUE31 because it would lead to the creation or strengthening of a market dominant position in the markets for toilet tissue, paper towels and napkins in Scandinavia. A year later, the Commission had to deal again with a case in tissue products, this time involving the proposed merger between SCA HYGIENE PRODUCTS and CARTOINVEST.32 This merger was accepted during the first phase of investigations. Although identical product markets were at stake, the Commission’s decisions deviate substantially from each other. We propose to describe and compare the two decisions made by the Commission. The first part of this section will address the Commission’s decision in SCA/METSÄ TISSUE. Section two addresses the similar case of SCA HYGIENE PRODUCTS/CARTOINVEST. In the third section, the consistency of the two decisions will be examined. Section four contains a discussion of economic arguments pertinent to the decisions. 2.1. Barriers to Entry in SCA/METSÄ TISSUE SCA is a Swedish producer of hygiene products and transport wrappings. It sells a number of hygiene products made out of paper across the entire European Economic Area. METSÄ is a Finnish producer of tissue products that runs production sites in Sweden, Finland, Germany, Poland and Spain. The merger between SCA and METSÄ led to serious concerns with regard to its effect on competition in the markets for toilet tissue, paper towels and napkins. These products are offered as consumer goods but also as so-called “Away From Home” (AFH) products for hotels, restaurants, etc.
In accordance with its earlier decision in KIMBERLY CLARK/SCOTT33, the Commission assumed that it was dealing with two different product markets because consumer goods and AFH products not only have different buyers but also different distribution channels. With regard to the consumer products, the Commission noted that these were sold under brand labels and under private labels. The Commission holds that there are significant differences between the two. Sales of brand labels are determined by consumer loyalty and price, but also by marketing efforts. For private labels, the supermarket chains determine the quality of the product. In this case, the paper companies produce according to the orders placed by the supermarket chains. The products are sold exclusively by retailers. Depending on available production capacity, it is easier for supermarkets to switch between various producers of private labels than of brand labels. The contracts concerning private labels are the result of a public tender process. Such contracts are typically of short duration, enabling supermarkets to switch quickly between various producers. The number of potential suppliers is primarily determined by quality, transport costs, available capacity and reliability of timely delivery. Taking these differences between producer and private
labels into account, the Commission concluded that it was dealing with two different product markets. Concerning AFH products, a distinction between producer and private labels was not necessary since private labels were only of marginal relevance in this market. As regards the demarcation of relevant geographical markets, the Commission assumed national markets in both consumer and AFH products. This decision was the consequence of high transport costs, existing price differences between the various national markets and differences in market shares between them. Therefore, the consequences of the merger were estimated one by one for each of the Scandinavian countries. However, separating the Swedish from the Norwegian market was highly controversial. Therefore, the Commission also analysed the consequences of the merger assuming Sweden and Norway share a common market with regard to the relevant products. The Commission concluded that the proposed merger would lead to considerable market shares in the markets for toilet tissue and paper towels, in branded products as well as private labels (for Sweden, the Commission predicted combined market shares of 80 to 90%, for Norway 60 to 80%, for both Denmark and Finland 50 to 60%). With regard to AFH products, the predicted combined market shares had similar levels. The Commission was sceptical about any constraining role that potential competition would play with regard to the behaviour of the proposed merged SCA/METSÄ entity because it believed barriers to entry to be substantial. Well-known foreign paper producers such as Kimberly-Clark and Procter & Gamble figured as potential entrants (outside Scandinavia, these producers command substantial market shares), as did some smaller Italian firms specialised in the production of private labels. But the Commission believed that entry of either group to the Scandinavian markets seemed unlikely.
The Commission believed that transport costs were a severe barrier to entry. Since paper products – and in particular toilet tissue – are very costly to transport, transporting them over any distance above 700 km would not make economic sense. Since the Scandinavian markets were a long way from the production plants of the other European producers, it would not be profitable to supply them. Market entry would only make economic sense if new plants in Scandinavia were opened. Creating new capacities in Scandinavia is, however, unlikely. On the one hand, the paper markets there are fully developed and the high level of per-capita consumption indicates satiation – therefore, low growth rates have to be assumed. On the other, the costs of establishing new plants are estimated to be around EUR 80 million, a sufficiently substantial sum to discourage those considering entry. Moreover, setting up a new production facility would take 18 months. Entry could not therefore occur on a short-term basis and the threat of potential competition would not be very severe. What would make market entry even more challenging is that in the market for producer labels in toilet tissue and paper towels, there is a considerable degree of brand loyalty. Both SCA and METSÄ have at their disposal very successful brands, which would make market entry more difficult still.
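The Commission's transport-cost reasoning reduces to a simple distance screen. A minimal sketch, using the 700 km threshold and plant figures reported in the decisions (the function name and return strings are our own illustration):

```python
# Transport-cost screen applied in the tissue cases: supply from an
# existing plant is viable only within roughly 700 km; beyond that,
# entry requires a new local plant (ca. EUR 80 million, ca. 18 months).
# Threshold and cost figures as reported in the decisions.

MAX_SUPPLY_DISTANCE_KM = 700

def viable_entry_mode(distance_to_nearest_plant_km: float) -> str:
    """Return the entry mode implied by the distance screen."""
    if distance_to_nearest_plant_km <= MAX_SUPPLY_DISTANCE_KM:
        return "supply from existing plant"
    return "requires new local plant (ca. EUR 80m, ca. 18 months)"
```

On this screen, the Scandinavian markets fall outside the supply radius of foreign plants, whereas, as discussed below, the Spanish market lies within it – the pivotal difference between the two decisions.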
CASE STUDIES
131
In view of the barriers to entry just described, the Commission argued that any constraining influence of potential competition was rather unlikely. It therefore prohibited the merger between SCA and METSÄ.

2.2. Barriers to Entry in SCA HYGIENE PRODUCTS/CARTOINVEST

In the spring of 2002, the Commission had to make a decision in a similar case, this time concerning SCA HYGIENE, a Dutch subsidiary of the Swedish SCA group, which wanted to buy CARTOINVEST, an Italian producer of paper products. The merger was relevant for the consumer goods markets for toilet tissue, paper towels and napkins in the area of private labels. CARTOINVEST supplies producer labels exclusively in Italy, and SCA is not present in the Italian market so that there is no geographical overlap there. What the Commission was interested in was the combined market shares that the merged entity would have in the Spanish markets in the area of private labels. For the three relevant products (toilet tissue, paper towels and napkins), these market shares were expected to be between 50 and 60%. The Commission predicted that the merger would not, however, result in a dominant position, since independent behaviour by the new entity would be impeded by effective and potential competitors. It is noteworthy that, with regard to the role of potential competition, the Commission arrived at a conclusion different from that in the case of SCA/METSÄ. The Commission evaluated transport costs as a relevant barrier to entry. But brand loyalty would not play a significant role, since it can generally be assumed to be less important for private labels than for branded labels. Concerning the evaluation of potential competition, the main difference in this case versus SCA/METSÄ relates to the nature of the Spanish market, where all potential producers have production facilities no further than 700 to 750 km away, so that the effect of transport costs would not be as severe as in the Scandinavian markets.
All relevant producers would therefore be able to enter the Spanish market if they so wished, even if they did not produce in Spain itself. On top of that, the relevant market for “private-label paper products” could be characterised by a higher degree of potential competition and therefore by tighter constraints on the behaviour of the incumbents. This stems from the fact that in the case of private labels, supermarket chains determine product quantity and quality, while the tissue manufacturers only produce on the basis of specific orders. Due to the short contract lengths, it is relatively easy for the supermarket chains to shift demand from one producer to another. The number of potential suppliers is thus primarily determined by quality, transport costs, available capacity and reliability. The Commission believed that major European suppliers of branded labels such as Kimberly-Clark or Procter & Gamble, which so far have not been active in the market for private labels, could utilise idle capacity to enter that market without delay. Additionally, the Spanish market is characterised by a number of smaller producers of tissue products who would also constrain the behaviour of the merged entity. The Commission thus concluded that the merger of SCA HYGIENE and CARTOINVEST would not establish a dominant position with regard to paper products in the area of private labels. The behaviour of the new entity would be sufficiently constrained by effective and potential competition. The Commission therefore agreed to the proposed merger without entering into a second phase of investigations.

2.3. Comparison of the Decisions

Comparing the two decisions, one can conclude that the Commission applied the criteria for potential competition with regard to paper products in a consistent manner. The different decisions in the two cases can be justified as resulting from differences in the relevant market conditions; they are not based on a different assessment of identical conditions. Prima facie, the prohibition in the case of SCA/METSÄ TISSUE is based on the prediction that actual entry by a potential competitor seems highly unlikely, due to the large distances involved and the high transport costs that these entail. Market entry could thus only be realised by building up capacity in Scandinavia. Taking into account the market phase (stagnation) and the substantial investments necessary – being to a considerable degree irreversible, i.e., implying sunk costs – entry would indeed appear to be unlikely. Additionally, entry would be made more difficult by high brand loyalty concerning the established brands in toilet tissue and paper towels. On top of investing in production facilities, market entrants would thus have to invest in establishing their own brand names. Whether these assumptions can reasonably be made with regard to the markets for toilet tissue and paper towels will be discussed in the next section on the basis of timely and empirically relevant entry. In SCA HYGIENE PRODUCTS/CARTOINVEST, the Commission checked for the same barriers to entry.
It concluded, however, that they were of little relevance for the Spanish market: because distances to production sites were relatively small, transport costs as a barrier to entry would be negligible, and since critical market shares would only arise for private labels, brand loyalty did not play any role. Therefore, the two decisions – although leading to different conclusions – are consistent.

2.4. Economic Assessment

The case of SCA/METSÄ raises some interesting questions from an economic point of view. Some of these questions will be dealt with in a little more detail here. They concern: (1) the distinction between producer and private labels as constituting different relevant product markets; (2) the evaluation of transport costs as a barrier to entry; and (3) the evaluation of brand loyalty as an additional barrier to entry.

Ad 1) We propose to take a closer look at the distinction between branded and private labels even though, in this particular case, it was not crucial because the
merger would have led to high market shares even if this distinction had not been introduced. Yet, the decision in SCA/METSÄ might function as a precedent that could influence similar cases in the future. In antitrust analysis, a large number of concepts are used to determine relevant markets. As was spelt out in the last chapter, the Commission adopts the concept that asks whether a reasonable consumer considers two products substitutable. Whether a certain good is suited to fulfil a certain function is therefore solely determined by the other side of the market. It is the customer, no matter whether final consumer, merchant or producer using the good as an input, who determines whether a certain good can be substituted for another. Strictly speaking, this is thus a subjective concept. Given that branded labels and private labels are often functionally identical, the decision to declare them two separate markets is not easy to comprehend. If one focuses exclusively on the substitutability of the products, they should be grouped as belonging to one single market. This should also be the case with regard to toilet tissue and paper towels. Evaluating them as belonging to different markets would only be justified if they had different characteristics that would prevent customers from substituting one for the other. In the concrete case of SCA/METSÄ TISSUE, the Commission identified two separate relevant markets for branded and private labels without, however, explicitly demonstrating gaps in the substitutability of the products. Instead, the Commission justified its decision by pointing to differences in the distribution channels of the two markets. Although this is factually correct, it raises the question of whether different distribution channels as such justify the delineation of two different markets.
The way in which a product is distributed would only indicate the existence of two different markets if it led to a reduction of substitutability as judged by consumers. It does not seem warranted to draw such a conclusion in the case at hand. Of course, certain gaps in substitutability might well exist between branded labels and private labels. But we would argue that these gaps would tend to become less relevant if suppliers tried to use market power in one of the segments. In this case, one gets the impression that the relevant market was not delineated with consumers in mind, but primarily from the point of view of suppliers. This is, however, not compatible with the substitutability concept as developed in competition policy and as confirmed by the relevant courts. The possible objection that branded and private labels are characterised by different groups of buyers is only of limited relevance here. True, supermarket chains are the buyers of private labels, who then turn into suppliers, but at the end of the day both branded and private labels compete for the demand of final consumers. Whether or not a firm has unlimited room for manoeuvre is decided at the last step of the chain, namely in relation to the final consumer. From the point of view of this final consumer, the Commission has not come up with a convincing argument for why branded and private labels should not be considered substitutes. This is especially true if one takes into consideration that many private labels are characterised
by the same quality as branded labels (e.g., Migros or Marks & Spencer). The assumption that we are dealing with two different product markets is thus not justified.

Ad 2) Evaluating transport costs as a barrier to entry is a very common practice. From an economic point of view, there is little doubt that the possibility of successful entry is, to an important degree, determined by the transport costs that have to be incurred. If they are very high, successful entry will often be impossible. Yet, even with regard to transport costs, some aspects of their evaluation as barriers to entry are worthy of discussion. In its decision in SCA/METSÄ, the Commission concluded that the transport costs that had to be incurred in order to serve the Scandinavian market were sufficiently high that market entry would only be successful if entrants established their own production plants in the Scandinavian countries. To evaluate the relevance of transport costs as a barrier to entry, it is necessary to predict their short-term development. As described in the chapter on trends in the business environment, transport costs have fallen substantially over the last two decades. If deregulation at the national as well as the European level continues, additional transport cost savings can be expected. With regard to tissue products, this could mean that entry becomes more attractive even without setting up production facilities in Scandinavia. In its decision on SCA/METSÄ, the Commission did not, however, look at the most likely future development of transport costs. Its decision is based solely on transport costs as currently observed. Another aspect that should have been analysed is whether possible disadvantages with regard to transport costs could be offset by cost advantages in other areas, which would reduce the relevance of transport costs as a barrier to entry.
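The compensation argument can be sketched numerically. All figures below are hypothetical; the point is only that a better-utilised plant can have average costs low enough to absorb a transport-cost penalty.

```python
# Hypothetical sketch: can scale/utilisation advantages offset transport costs?

def average_cost(fixed_cost, variable_cost, output):
    """Per-unit average cost: fixed costs spread over output plus unit variable cost."""
    return fixed_cost / output + variable_cost

# Same plant, different utilisation: the entrant runs its home plant harder.
incumbent = average_cost(fixed_cost=40e6, variable_cost=0.40, output=100e6)
entrant = average_cost(fixed_cost=40e6, variable_cost=0.40, output=160e6)

transport_penalty = 0.12   # hypothetical per-unit transport cost of the entrant

# incumbent: 0.80 per unit; entrant: 0.65 + 0.12 transport = 0.77 per unit.
# Here the utilisation advantage over-compensates the transport disadvantage.
```

With these (invented) numbers the distant entrant undercuts the incumbent despite the transport penalty, which is exactly the possibility the Commission is argued not to have examined.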
These compensating factors could, for example, be efficiencies in production or the realisation of economies of scale. This argument is based on the assumption that there could be differences in the utilisation of production facilities between incumbents and potential competitors present in other geographical markets (Briones 1995). If, for example, the potential competitors produce at higher capacity in their markets than the incumbent, then the potential competitors might be able to realise economies of scale, which would lead to lower average costs. If these cost advantages are substantial, they could compensate – or even over-compensate – for the disadvantages that are due to transport costs. It is unclear whether the Commission took this argument into consideration in its decision with regard to SCA/METSÄ. It is at least conceivable that competitors such as Kimberly-Clark or Procter & Gamble have such advantages at their disposal, which could ease their entry into the Scandinavian market. Another factor that might compensate for transport cost disadvantages is an advantage in labour costs. Producers that are active in countries with low labour costs could thus successfully enter a market despite the existence of substantial transport costs. In the market for toilet tissue, this can currently be observed in the attempted
entry of Turkish and Russian producers. These producers play the role of fringe firms who constrain the behaviour of the larger producers. In its SCA/METSÄ TISSUE decision, the Commission evaluated the relevance of fringe firms as marginal. The current success of Russian and Turkish producers seems to prove the Commission wrong.

Ad 3) As was already pointed out in our analysis of the VOLVO/SCANIA case, consumer loyalty is an important element in the Commission’s decision-making with regard to barriers to entry. In the case of SCA/METSÄ, we are dealing with consumer products, where brand loyalty can generally be assumed to play a more important role than with investment goods. But even with regard to consumer goods, it would be premature to assume that loyalty systematically constitutes an insurmountable barrier to entry. The emergence of consumer loyalty can be explained by the presence of asymmetric information between suppliers and buyers. In general, one can expect uncertainty to be present on the side of consumers with regard to the quality and other characteristics of the goods offered by the suppliers. The supplier can be assumed to be informed about the characteristics of the goods offered ex ante. The buyer, on the other hand, will often only be able to ascertain the quality after having consumed the good, i.e. ex post. It has been shown that such information asymmetries can lead to the breakdown of markets (Akerlof 1970). One consequence of uncertainty with regard to quality is higher transaction costs caused by additional information-gathering activities. In such a situation, consumer loyalty can be interpreted as a mechanism to reduce uncertainty and the transaction costs accruing as a consequence of that uncertainty. As soon as a consumer has found a good that conforms to her preferences, she will keep consuming that good even if other suppliers offer equally good products.
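This transaction-cost logic reduces to a simple decision rule, sketched below with purely hypothetical magnitudes: the consumer re-buys the known good unless the expected gain from evaluating alternatives exceeds the cost of doing so.

```python
# Sketch of loyalty as repeat purchase driven by transaction costs.
# All magnitudes are hypothetical illustrations, not estimates.

def stays_loyal(expected_gain_from_search, transaction_cost):
    """Re-buy the known good unless searching for alternatives is worth its cost."""
    return expected_gain_from_search <= transaction_cost

# Experience/trust good: quality is hard to verify, so evaluating rivals is costly.
experience_good_buyer = stays_loyal(expected_gain_from_search=2.0, transaction_cost=5.0)

# Toilet tissue: quality is cheap to verify ex post, so the search cost is near zero.
tissue_buyer = stays_loyal(expected_gain_from_search=0.30, transaction_cost=0.05)

# experience_good_buyer is True, tissue_buyer is False: loyalty binds only
# where verifying quality is expensive relative to the potential gain.
```

The rule anticipates the goods typology discussed in the following paragraphs: loyalty is a meaningful entry barrier only where the cost of verifying quality is high.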
Consumer loyalty basically serves to save on transaction costs. If one is to evaluate the relevance of consumer loyalty as a barrier to entry, one thus needs to analyse the relevance of transaction costs for reducing quality uncertainty in the respective market. If these transaction costs are high, consumer loyalty can indeed be expected to be a relevant barrier to entry. Should they, however, be negligible, then consumer loyalty would not constitute an important barrier to entry. With regard to information asymmetry, the economic literature distinguishes between three kinds of goods, namely search-, experience- and trust goods (Nelson 1970/1974). With search goods, consumers can acquire information on the relevant characteristics of the good before buying it. After having bought it, the information can be verified. With experience goods, reliable information on the characteristics is not obtainable before buying the good. The characteristics of the good have to be “experienced” after having bought it. With trust goods, the relevant characteristics are neither ascertainable before nor after buying the good. Drawing on this goods typology, one can conclude that consumer loyalty plays an important role in reducing transaction costs with regard to experience and trust goods but less so with regard to search goods. For evaluating whether consumer
loyalty constitutes a barrier to entry, one would therefore have to look at the type of good concerned. For the goods under consideration here, namely toilet tissue and paper towels, the role of consumer loyalty is therefore dubious. Consumers have little incentive to ascertain the quality of toilet tissue and paper towels ex ante because, due to the low prices of these items, this would make no economic sense. It is, however, easily possible to ascertain the characteristics ex post. Due to the nature of the products as short-lived consumer goods, the (economic) risk of a bad bargain is limited. From the point of view of information economics, information asymmetries are of little relevance here. Therefore, consumer loyalty should not play an important role. Moreover, consumers do not have to bear any substantial switching costs should they decide to turn to a different supplier of paper products. This is an additional argument against the presence of important barriers to entry. In conclusion, it can be said that if consumer loyalty is to be taken into account, then looking first at the information properties of the relevant market is urgently recommended. The assumption that the market for toilet tissue is characterised by a high degree of consumer loyalty, making entry difficult, is thus open to question. If consumer loyalty to branded products were very high, no retail chain would have any incentive to delist the major brands. This can, however, be observed time and again. The products of SCA are not exempt from this, which shows that the Commission might have overemphasised consumer loyalty. With regard to recent trends in the market for toilet tissue, there have been entrants in the upper segment of the market (Procter & Gamble recently entered the German market with its premium product Charmin) as well as in the lower segment (by Russian, Polish and Turkish producers).
This development is noteworthy because the market for toilet tissue is not believed to be very profitable, not least because of the buying power of the major retailers. That entry is actually taking place leads us to conclude that the barriers to entry analysed by the Commission are empirically less relevant than the Commission believed. This could be restated as a proposal that, in the analysis of barriers to entry, simple conjectures and plausibility arguments should play a less important role, and recognition of dynamic factors in the economic environment, which are crucial for competition, should play a more important one.

3. ASSESSMENT OF BARRIERS TO ENTRY IN EUROPEAN MERGER CONTROL: THE CASE OF BASF/BAYER/HOECHST/DYSTAR

In September 2000, the European Commission had to decide whether the acquisition of a stake in DYSTAR – a joint venture controlled by BAYER and HOECHST – by BASF was compatible with European competition law. The acquisition was analysed under European Merger Law because it would lead to concentrative effects.34 The DYSTAR case is included in this study because the Commission decided that the relevant geographical market was the world market. In its analysis of the possible effects of barriers to entry, the Commission concluded that there were no relevant barriers, which
could lead to market power by the joint venture. This case is dealt with here for two reasons: it is used, first, to describe the conditions under which the Commission is willing to define geographical markets on a global scale and, second, to describe the conditions under which the Commission assumes relevant barriers to entry to be absent.

3.1. Barriers to Entry in the Case of BASF/BAYER/HOECHST/DYSTAR

In 1995, the textile dyestuff divisions (TDS-products) of BAYER and HOECHST were merged. The result was a joint venture called DYSTAR. In the case under consideration here, BASF proposed merging its complete TDS business with DYSTAR and thus joining the joint venture. The Commission delineated the relevant product market by analysing the possibilities of substitution available to consumers of TDS-products. From the point of view of the consumer, textile dyestuffs are interchangeable and thus homogeneous if they successfully link the colours with the textile fibres to be coloured. For every synthetic fibre, there are exactly specified textile colours that together constitute a so-called TDS category. Within one such TDS category, textile colours can be used interchangeably at will. Nevertheless, there are substantial differences in price depending on whether one is dealing with standard products or with speciality products. This fact notwithstanding, the Commission concluded that speciality and standard products could be grouped into a single product market. All producers of textile colours offer standard products as well as specialities. In delineating the relevant geographical market, the Commission assumed that this covered at least the EEA if not the entire world. The reason given for such a broad delineation was the absence of barriers to entry. In the first step of its analysis, the Commission established that there were no barriers to trade in textile colours.
Within the EEA, there are no barriers to trade either in the form of tariffs or non-tariff trade barriers. These are also absent in trade relationships with the relevant, less developed countries (with the sole exception of Taiwan). Textile colours can therefore be considered to be freely tradable. The Commission judged transport costs irrelevant with regard to their trade so that geographical distance could safely be ignored. In the second step of its analysis, the Commission considered whether any technological barriers to entry exist. It concluded that there were no substantial technological barriers to entry. The technology for producing textile colours is freely and readily available on a worldwide scale and production can be started without high up-front investments in capital, including human capital. This is empirically proven by the successful market entry of some small Asian companies that succeeded, in a very short time-span, in producing high-quality textile colours. These small companies have also been an economic success. Similar developments can be expected from companies in Central and Eastern Europe as well as in Latin America. The Commission noted that imports of TDS-products into the European market had risen more than 70% in the last ten years. To name but one example, Chinese exports are higher than the entire market volume of textile colours in Central and
Eastern Europe. Simultaneously, the market share of all European suppliers of TDS-products, such as DYSTAR, NOVARTIS, CLARIANT and YORKSHIRE, has fallen dramatically in Europe as well as in other parts of the world. Taking these developments into consideration, the Commission concluded that the textile colours supplied by the various producers would be completely substitutable for each other. They are thus considered to be homogeneous products. The Commission did not identify any barriers to entry on the basis of brand loyalty, brand or reputation effects. Although the merger of the textile colour activities of BASF with DYSTAR would lead to aggregate market shares of up to 60% in Europe in a number of TDS categories, this merger would not constitute the establishment of a dominant position because of (1) the actual level of competition remaining as well as (2) the constraining effect of potential competition, which was credible due to the lack of barriers to entry. The Commission pointed to the various Asian suppliers of textile colours that were not only offering the colours cheaply but also at high quality. The entry of these firms into the world market has been accompanied by drastic price reductions for textile colours. Due to the absence of barriers to entry, DYSTAR is forced to react to the price reductions of its competitors with price reductions of its own. Anticipating the development of the market, a reversal of this trend is unlikely because new capacity is being built up in various parts of the world, whereas demand remains stable. The Commission evaluated the existence of surplus capacity as important evidence that potential competition would remain significant in the foreseeable future. The Commission concluded that BASF’s stake in DYSTAR would not lead to the establishment of a dominant position, even though adding together the respective market shares leads to high percentages.
The proposed merger was thus approved after the first phase of investigations.

3.2. Economic Assessment

This case can be used to show under what conditions the Commission believes a constraining effect of potential competition to be sufficient. The existence of potential competition can be traced back to the following causes:

– Absence of barriers to trade
– Low transport costs
– Production technology involving little know-how
– Fairly homogeneous goods
– Evidence of recent market entry
– Excess capacity.
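These six criteria can be read as a checklist. The sketch below is our own operationalisation for illustration only; treating "all factors present" as the decision rule is an assumption on our part, not a rule stated by the Commission.

```python
# Our own operationalisation of the six structural factors listed above.
# The "all factors must hold" threshold is an assumption, not Commission doctrine.

FACTORS = (
    "no trade barriers",
    "low transport costs",
    "little production know-how required",
    "homogeneous goods",
    "recent entry observed",
    "excess capacity",
)

def potential_competition_sufficient(factors_present):
    """True if every structural factor from the checklist is present."""
    return all(f in factors_present for f in FACTORS)

# In BASF/BAYER/HOECHST/DYSTAR the Commission found all six factors present:
dystar = set(FACTORS)
```

On this reading, `potential_competition_sufficient(dystar)` holds, which matches the Commission's conclusion that potential competition sufficed to prevent dominance.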
If these factors are present, the Commission believes there is a sufficient degree of potential competition. The factors named here closely resemble those familiar from market structure analysis. A high degree of homogeneity, absence of barriers to entry, and high transparency within the market prevent firms from being able to set
prices that would earn them super-normal profits. The criteria used are drawn from and oriented towards the ideal notion of perfect competition. Applying the criteria used by representatives of contestability theory, as described in chapter II, this market can be called contestable. Due to their extra-European production facilities, the absence of trade barriers, and low transport costs, newcomers can easily enter – and exit – the European market. They thus have the capacity to carry out a “hit-and-run” strategy. The possibility of such hit-and-run entry has a controlling effect on the behaviour of established competitors. The emergence of super-normal profits can therefore be excluded. The decision in this case is thus largely compatible with economic theory. But it also clearly shows that the question of whether or not potential competition is present to a sufficient degree is answered solely on the basis of structural factors. If these structural factors signal a high probability of potential competition, the Commission accepts its presence. The effects of globalisation are taken into account if their existence can be proven by structural factors. Hence, decision-making within the Commission is still largely guided by structural factors. What is quite problematic in this case – and in many other cases as well – is that the Commission determines the degree of pressure induced by potential competition by looking at actual evidence of entry that has occurred in the past. But such past evidence is clearly not a necessary condition for the effectiveness of potential competition. A market can be contestable even if no previous entry can be observed. The non-occurrence of market entry can be precisely the result of incumbents being afraid of entry and thus setting their prices at levels that make entry appear unattractive. Effective potential competition would then lead to the absence of actual entry.
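The point that absent entry can itself be evidence of effective potential competition is the limit-pricing logic of contestability theory; a minimal sketch with hypothetical numbers:

```python
# Minimal limit-pricing sketch: in a contestable market the incumbent
# prices down to the entrant's unit cost, so entry never pays - and is
# therefore never observed. All numbers are hypothetical.

def incumbent_price(monopoly_price, entrant_unit_cost, contestable):
    """A contestable incumbent cannot sustain a price above the entrant's cost."""
    return min(monopoly_price, entrant_unit_cost) if contestable else monopoly_price

def entry_pays(price, entrant_unit_cost):
    return price > entrant_unit_cost

p_contestable = incumbent_price(1.50, 1.00, contestable=True)    # 1.00
p_protected = incumbent_price(1.50, 1.00, contestable=False)     # 1.50

# entry_pays(p_contestable, 1.00) is False even though potential competition
# is fully effective; only the protected incumbent's price invites entry.
```

In the contestable case potential competition disciplines prices completely while producing zero observed entry, which is why past entry is the wrong yardstick.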
Looking at past evidence of actual entry is thus clearly a wrong criterion. What the Commission needs to look at is whether a threat to enter a market is credible. Trying to operationalise the relevance of potential competition by looking at past entry is clearly inadequate.

4. THE ASSESSMENT OF BARRIERS TO ENTRY IN EUROPEAN MERGER CONTROL: THE MARKETS FOR TELECOMMUNICATIONS IN THE CASE OF TELIA/TELENOR

In recent years, the European Commission has had to decide a number of cases in the telecommunication industries. The extensive processes of deregulation and liberalisation in telecommunications have led to the emergence of a large number of new firms active in this field. They have further made it possible for telecommunications firms to be active on a geographically broader scale. This has also led to the increasing prevalence of mergers and acquisitions. The Commission has had to deal with a number of cases, including MCI/WORLDCOM, TELIA/TELENOR, MANNESMANN/ORANGE, VODAFONE AIRTOUCH/MANNESMANN, MCI-WORLDCOM/SPRINT, and recently TELIA/SONERA.35 The following case study deals with the Commission’s assessment of barriers to entry in these markets. The focus is on TELIA/TELENOR (1999), because it was in this case that the Commission published some fundamental reflections concerning
barriers to entry and potential competition in the market for telecommunications. The first section describes the special conditions in telecommunication markets, especially for entry and potential competition. Section two presents the issues at stake in this case, and section three analyses the approach of the Commission towards assessing barriers to entry in the case of TELIA/TELENOR. Section four critically evaluates the Commission’s decisions from the point of view of economic theory.

4.1. The Special Conditions for Entry in the Markets for Telecommunications

During most of the 20th century, the telephone industry in the majority of European countries was structured in the form of vertically integrated national monopolies, usually publicly owned. These public telephone operators offered their final users basic voice services. Such basic voice services provided connection to any other subscriber within the country and – through international agreements with other operators – access to the international public switched telephone network. As described in chapter III, privatisation as well as deregulation led to substantially increased competition in the telecommunications markets. In general, national telephone networks are considered to consist of three distinguishable segments: local loop, long distance and international. In principle, new entrants could build their own networks. However, the existing local loop networks were built up over substantial periods of time, and financed when the telephone companies concerned were public sector bodies. For an entrant facing an established incumbent, the high cost and long time periods involved in building up new local loop networks mean that incentives to enter the market in this way are low. Telephone networks are characterised by a high level of fixed costs, decreasing average total costs and high irreversibility. High irreversibility means that the costs of the investment are sunk.
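The cost structure just described can be illustrated with a toy average-cost calculation. All figures are hypothetical; the point is only that with high fixed, sunk network costs, average total cost falls steeply with subscriber numbers, leaving a small entrant at a large cost disadvantage.

```python
# Toy illustration (hypothetical figures) of the network cost structure:
# high fixed, sunk costs make average total cost fall with subscriber numbers.

def average_total_cost(fixed_network_cost, cost_per_subscriber, subscribers):
    """Fixed network cost spread over subscribers plus per-subscriber running cost."""
    return fixed_network_cost / subscribers + cost_per_subscriber

entrant = average_total_cost(1e9, 50.0, 100_000)       # 10050.0 per subscriber
incumbent = average_total_cost(1e9, 50.0, 5_000_000)   # 250.0 per subscriber

# A small entrant bears per-subscriber costs roughly 40 times the incumbent's,
# and the fixed network cost is sunk - hence the high barrier to entry.
```

This declining-average-cost structure is what makes building a rival local loop unattractive and motivates the entry routes (resale, carrier selection, unbundling) discussed next.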
Combined, these factors constitute high barriers to entry. Under conditions like these, the potential entrant thus faces some basic challenges:

– Potential customers, who are currently subscribers of the incumbent, need to be persuaded to switch to the new entrant as their chosen service provider.
– Subscribers will probably remain physically linked to the incumbent, and it will be necessary for traffic to pass through some of the incumbent’s network in order to get to or from the new entrant.
– The vast majority of outgoing calls from the new subscribers will still have to be terminated on the network of the incumbent, and the entrant has to be able to hand these calls over to the incumbent without incurring charges for the use of the incumbent’s network that would make his offer uncompetitive.
The simplest possible form of entry is resale, whereby the entrant purchases an end-to-end retail service from the incumbent. The entire line continues to be owned and
operated by the incumbent, but the entrant resells the retail services provided by the incumbent. The opportunity to make a profit depends on the reseller’s ability to keep the overhead costs under his control (sales and marketing, billing, service centre, etc.) lower than the incumbent’s equivalent costs. However, this whole exercise can only be profitable if the incumbent or the national regulatory agency is prepared to allow the reseller to survive. Another form of entry, which involves a more substantial commitment to network development by the entrant, is call-by-call carrier selection. The final user remains a subscriber of the incumbent, but also becomes a subscriber of whichever other operators provide outgoing call services. Furthermore, entry could also occur by carrier pre-selection, which is essentially the same as call-by-call selection, except that all outgoing calls are automatically diverted to the new operator unless the subscriber manually overrides the diversion. Carrier pre-selection and call-by-call selection are used mainly for long distance and international calls, where the prices charged by incumbent carriers are substantially above costs and so give an entrant the opportunity to compete by running this business over its own network and charging its customers less for doing so. The telecom incumbents in all EU Member States are obliged by their respective regulators to offer such services, including with respect to the local loop. A more substantial form of entry is local loop unbundling. The effect of such unbundling is as if the subscriber concerned had had his cable connections taken from one local exchange operator and placed onto the main distribution frame of another entrant. All of the subscriber’s relationships are with the entrant. The only respect in which the incumbent continues to have control of the diverted network is that it retains ultimate ownership of the twisted copper pair to the final user.
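For resale and unbundled access alike, the entrant's viability boils down to a margin over an input price that the incumbent controls. A sketch with purely hypothetical per-minute figures:

```python
# Hypothetical per-minute margin of a reseller/entrant whose input price
# (wholesale charge, lease fee or interconnection rate) is set by the incumbent.

def entrant_margin(retail_price, incumbent_charge, own_overhead):
    """What is left per minute after paying the incumbent and own overheads."""
    return retail_price - incumbent_charge - own_overhead

viable = entrant_margin(retail_price=0.20, incumbent_charge=0.14, own_overhead=0.04)
squeezed = entrant_margin(retail_price=0.20, incumbent_charge=0.18, own_overhead=0.04)

# viable is about +0.02, squeezed about -0.02: a small rise in the
# incumbent-controlled charge flips entry from profitable to loss-making.
```

The sketch shows how thoroughly the entrant depends on the incumbent's pricing of the input, which is the dependence the following discussion of interconnection charges turns on.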
The entrant must generally pay the incumbent for the lease of the copper pair from the incumbent’s premises to the subscriber. Local loop unbundling is mandatory in all EU Member States due to Regulation 2887/2000.

There are other possibilities for getting local loop access to final subscribers: for example, the use of cable television connections or of relatively new technologies, such as the conveyance of traffic over electricity cables. Moreover, there are also methods designed to avoid the use of fixed lines entirely, such as radio links.

In all the options discussed above, an important barrier to entry results from the need to interconnect the entrants’ networks with the network of the incumbent. If interconnection charges are high relative to the price at which the incumbent offers services to its own subscribers, it can be difficult for an entrant to make a profit on the operation. The entrant is entirely dependent on the incumbent for the price at which interconnection is offered. The expectation of increasing interconnection charges could considerably constrain the level of investment in new networks. Investments in the new network are sunk; if the price of interconnection rises, it can render the entrant’s operation unprofitable. This strengthens arguments in favour of the creation of regulatory agencies that control the level of interconnection charges. Due to the creation of regulatory agencies in the Member States, which guarantee non-discriminatory access to the various networks and pricing which is
CHAPTER V
subject to ex ante price regulation, the relevance of network access as a barrier to entry will diminish more and more.

4.2. The Case of TELIA/TELENOR

In the proposed merger of TELIA/TELENOR, a new company Newco would acquire all the shares in TELIA and TELENOR from the Swedish and Norwegian governments. TELIA is the largest telecommunications operator in Sweden and TELENOR is the largest telecommunications operator in Norway. Both companies provide the full range of telecommunication services within their respective countries as well as television services, and they also provide such services elsewhere in the Nordic area and internationally. The operation concerns the markets for fixed switched telephony services, business data communication, Internet access, PABX36 distribution, local telephone directories, business telephone directories and mobile telephony. In all these segments, the operation would create or strengthen a dominant position in Sweden and Norway.

After its investigation, the Commission concluded that the proposed merger would raise significant concerns with regard to competition because of (1) the elimination of actual and potential competition between the parties, and (2) the increased ability and incentive of the new entity TELIA/TELENOR to eliminate actual and potential competition from third parties.

An important element in this case is the difference between the regulatory regimes in Sweden and Norway on the one hand, and in countries like Denmark or Finland on the other. The most important differences concern methods of price regulation, and access to end users by way of local loop unbundling as well as other means, such as resale and carrier selection. On price regulation, control of interconnection charges in both Norway and Sweden tends to be ex post rather than ex ante. Therefore, the incumbents may demand high interconnection rates and implement other anticompetitive strategies immediately, without prior regulatory approval.
Moreover, in Norway and in Sweden, the incumbents are not required to provide their competitors with the same level of access to end users as in Denmark and in Finland. An important barrier to entry results from the lack of mandatory local loop unbundling in Sweden and Norway. In order to have access to end users, entrants can either build their private infrastructure or resort to switched access or dedicated access. These forms of access are not equivalent to local loop unbundling, as they bundle the use of the incumbent’s switching or transport infrastructure with the use of the local loop. The obligation to set historic-cost-oriented interconnection charges is to be seen as an inferior substitute for local loop unbundling, because interconnection enables incumbents to force new entrants to continue to rely on their services and is therefore the source of constant revenues for incumbents. Furthermore, in an increasingly competitive environment, telecom companies struggle to reduce their cost base by replacing old high-cost equipment with cheaper, more efficient equipment. As a result, historical costs will tend to be higher than current operating or incremental costs and would thus enable incumbents to earn higher interconnection margins.

In its decision the Commission pointed out that “the question at issue is not the adequacy of the regulatory system(s) in constraining the merged entity’s future behaviour, but rather whether the merger between TELIA and TELENOR would create or strengthen a dominant position” (Official Journal 2001 No. L 40, 20). In the market for local loop infrastructure, the Commission identified considerable barriers to entry in Sweden and Norway. Without unbundled local loop access, new entrants cannot develop a market position of their own for both incoming and outgoing calls – unless they are prepared to invest in their own networks. The upgrading and expansion of existing cable networks would present a viable alternative to the incumbent’s network, especially where the new entrant can provide cable TV, telephony and Internet access over the network in competition with the incumbent. However, TELIA owns the largest cable network in Sweden and TELENOR the second largest network in Norway. In addition, constructing entirely new networks, or upgrading and expanding existing cable and/or other networks for bi-directional use by individual subscribers, would certainly require substantial amounts of time and capital. Therefore, the Commission concluded that the barriers to entry remain high in this market.

The Commission also identified considerable barriers to entry in the markets for long distance services, because the new entrant needs an interconnection with the incumbent’s network. A competitor wishing to transport long-distance traffic may have to pay the incumbent for use of its networks, pay the cost of leasing the long-distance lines (again possibly from the incumbent), and then pay for interconnection to deliver the call back onto the incumbent’s network for termination. The incumbent is in a position to control all these costs.
The proposed merger would give the parties an enhanced ability to eliminate competitors by raising the price of, or degrading, interconnections to third parties seeking to terminate calls, or by offering its private customers a better deal on long-distance calls than competitors could offer once they had paid for the necessary interconnection rates.

In its decision, the European Commission cleared the merger, despite considerable increases in market shares and the elimination of actual and potential competition, subject to conditions and obligations. These conditions and obligations concern mainly the reduction of barriers to entry to facilitate the entrance of newcomers. The new entity must allow competitors access to its respective local access networks in order to provide any technically feasible services on non-discriminatory terms through local loop unbundling. This undertaking will enable competitors to establish a sole customer relationship with telecommunication customers. Moreover, the parties must divest their activities in cable TV to a third party. The local loop unbundling proposal will greatly reduce the competitive concerns identified in respect of the various telecom services, and will – by granting new entrants the ability to establish a unique relationship with their clients – ensure that the merged entity remains subject to at least the same degree of competition as both TELIA and TELENOR were prior to the proposed merger. In that context, the divestiture of the parties’ cable-TV activities will also have the effect of complementing the local loop unbundling proposal. The new owner of the cable-TV networks will be able to offer competition to the parties’ telecommunication networks by allowing
increased competition on the various telecommunication markets for residential users and small businesses, which are less likely to benefit from local loop unbundling.

4.3. Assessment of the Decision

With respect to the assessment of potential competition, the Commission’s decision can at first glance be characterised as progressive and much more oriented toward dynamic aspects than previous cases. From the outset, the decision was dominated by the assessment of barriers to entry and potential competition in the markets for telecommunication services. The Commission did not confine itself to the analysis of structural conditions. Quite to the contrary, the Commission pursued a more dynamic approach. Its investigation was dominated by the question of how potential competition could be made strong enough to sufficiently limit the market power of the new entity. The Commission concluded that potential competition would mainly stem from local loop unbundling and from access to the cable-TV networks. This led to the obligation of the parties to open up their local loop infrastructure to competitors and to divest their own cable-TV network to third parties, such that barriers to entry would be reduced considerably. Once these conditions were realised, potential competition could be expected to become increasingly effective.

The TELIA/TELENOR decision provides valuable insights into the decision-making process of the Commission. In this case, the Commission has shown that it is possible to allow mergers despite large increases in market shares, if potential competition can be expected to constrain incumbents effectively. The key factor is the creation of conditions and obligations which aim to reduce barriers to entry. In this respect, the Commission established a precedent in TELIA/TELENOR. From the outset, the decision of the Commission seems to have been determined by the intention to create unbundled access to the local loop connection cable.
It was therefore clear from the outset that the Commission was going to lay heavy emphasis on conditions and obligations. Prima facie, the creation of unbundled access to the local loop seems indeed an adequate instrument to reduce barriers to entry and to increase pressure from potential competition. But such a procedure is not uncontested from an economic point of view. Potential competition does not necessarily require the reduction of barriers to entry in local loop access; it could also arise through a variety of other means, such as the creation of additional networks, the use of cable TV networks, electricity networks or satellites, the use of third generation mobile telephony networks (UMTS) or even other wireless connections, such as wireless local loop through Novel-technology.

The Commission’s decision to focus on local loop unbundling means that one very specific form of potential competition has been preferred over other possibilities. But it is questionable whether this preferential treatment makes economic sense. The concrete option chosen – namely the promotion of local loop unbundling – has a number of negative side effects. No newcomer would ever invest in the creation of an alternative network if the incumbent were obliged to grant access to newcomers (Nikolinakos 2001, 267). One could argue that local loop unbundling is only
a first step in the creation of strong competitors: after they have been able to create their own customer base, entrants could then begin to invest in their own networks. But from an economic point of view, this conjecture is not convincing. The question whether to use the network of the established competitor or to invest in one’s own network is basically a make-or-buy decision. The option will be chosen which promises higher profits. The policy of opting for local loop unbundling reduces the costs of the buy-decision relative to the make-decision. This policy thus reduces the incentives for the wholesale creation of alternative networks. This means that one particular communication technology is promoted over other, alternative technologies. The Commission must here implicitly assume that it has at its disposal knowledge that is superior to that of the market participants. This is a problematic assumption, as competition can also be seen as a discovery procedure (Hayek 1969) whereby we uncover knowledge that we would not have had if we had not used competition.

Additionally, price (cap) regulations based on average costs might also create the wrong incentives. In densely populated areas, the actual cost will be below average costs due to economies of density, whereas in less populated areas, it will often be above average costs. This means that the local loop problem will be circumvented in highly populated regions by way of using wireless technologies, whereas in the less populated regions, no investments in infrastructure will be made. The regulatory procedure that is based on average costs will lead to prices that are not based on the actual costs arising in the different areas. This will lead to allocative inefficiencies. A policy oriented on local loop unbundling is thus always subject to the danger of slowing down infrastructural development (Gabelmann/Gross 2000, 102).
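The make-or-buy logic described above can be illustrated numerically. The following sketch uses purely hypothetical figures (the discount rate, revenue, lease and build costs are all our own assumptions, not data from the case): a regulated low lease price tips the entrant’s net-present-value comparison towards “buy”, i.e. towards renting the incumbent’s loop rather than building.

```python
# Hypothetical make-or-buy comparison for a telecom entrant.
# All figures are illustrative assumptions, not data from TELIA/TELENOR.

def npv(cashflows, rate):
    """Net present value of a list of yearly cash flows (year 0 first)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

RATE = 0.10       # assumed discount rate
YEARS = 10        # assumed planning horizon
REVENUE = 100.0   # assumed yearly revenue from the customer base

def buy_option(lease_price):
    """Lease the incumbent's local loop: no sunk investment, a yearly fee."""
    return npv([REVENUE - lease_price] * YEARS, RATE)

def make_option(build_cost, operating_cost=10.0):
    """Build an own network: sunk up-front cost, lower yearly costs later."""
    return npv([-build_cost] + [REVENUE - operating_cost] * (YEARS - 1), RATE)

# With a regulated (low) lease price, buying dominates building ...
assert buy_option(lease_price=20.0) > make_option(build_cost=250.0)
# ... while at an unregulated (high) lease price, building would pay off.
assert make_option(build_cost=250.0) > buy_option(lease_price=70.0)
```

The sketch only restates the incentive argument in the text: mandated cheap access lowers the relative cost of the buy-decision and thereby weakens the incentive to invest in alternative infrastructure.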
The heavy focus on local loop unbundling leads primarily to an intensification of competition in the area of services. It thus seems doubtful whether the policy chosen by the Commission in TELIA/TELENOR is adequate to induce a sustainable increase in competition.

One could even suspect that the Commission’s decision in TELIA/TELENOR was primarily motivated by industrial policy considerations. This becomes apparent if one reads two documents together with the decision in this particular case. The two documents are, first, the “Fifth Report on the Implementation of the Telecommunications Regulatory Package” published in 1999 and, secondly, the publication “Towards a new Framework for Electronic Communications Infrastructure and Associated Services”, also published in 1999. In both documents, the Commission declared that there was a lack of competition in the local access market and that the most effective way to tackle this problem was the introduction of local loop unbundling. Workable competition on the local level would be key to the implementation of fast Internet access, and local loop unbundling thus crucial for the realisation of the information society. Since the Commission believes that the competitiveness of European industry depends on a quick realisation of steps towards the information society, the unbundling of local loops is seen as a very important option in industrial policy (Vinje/Kalimo 2000, 49). In TELIA/TELENOR, merger control seems to have been used as a tool to implement this industrial policy quickly. It has previously become evident that the conditions and obligations of merger control can indeed be used to achieve industrial policy goals (Freytag 1995). This means that merger policy is perceived as an instrument that can be used to create or shape market structures that seem desirable from an industrial policy point of view. Whether these decisions make sense from a competition policy point of view seems to be of secondary importance. TELIA/TELENOR is thus an example where European merger policy is indeed also used in order to pursue industrial policy.

In TELIA/TELENOR, the recognition of barriers to entry and potential competition played a central role. In order to be able to clear the intended merger, the Commission made its consent conditional upon an obligation to reduce barriers to entry by unbundling the local loop. This decision seems to be compatible with economic theory: strong or even dominant positions are unproblematic as long as potential competition is sufficiently strong. Yet, the Commission’s decision can still be criticised. The critique centres on the industrial policy motivation of the conditions and obligations agreed upon in this case, which prefer a very specific kind of potential competition and privilege service competition by discriminating against other kinds of infrastructure competition. Whether this makes sense from a competition policy point of view appears doubtful.

5. COLLECTIVE DOMINANCE UNDER THE EUROPEAN MERGER REGULATION

The application of the European Merger Regulation to cases of oligopolistic or collective dominance was first addressed by the decision of the Court of First Instance in the case of GENCOR against the European Commission.37 This decision confirmed an evolving practice, and the recognition of collective dominance in European merger control was thus firmly established. As already described in chapter IV, the recent reforms of European merger policy modified the relevant wording: collective dominance is now called “coordinated effects”.
European merger policy has been in place for a dozen years now. This section focuses on a selection of the most relevant cases with regard to the notion of collective dominance and analyses those of their central aspects where collective dominance proved to be crucial. We aim to describe and evaluate current EU practice with regard to collective dominance, beginning with a short discussion of the key cases.

5.1. NESTLÉ/PERRIER (1992)

The first explicit investigation of collective dominance as part of European merger control was in the case of NESTLÉ/PERRIER.38 NESTLÉ, a global player in food supply, wanted to buy the French mineral water producer Source PERRIER. NESTLÉ and BSN, the number three supplier of mineral water on the French market, agreed to transfer the mineral water source Volvic from PERRIER to BSN. Reservations with regard to the effects on competition of the proposed merger related to the French market for bottled water from natural sources or wells. Before
the merger, the three leading suppliers had an aggregate market share of 80% (PERRIER 40%, NESTLÉ 20% and BSN 20%). Due to existing barriers to entry, the Commission delineated the French market as the relevant market in geographical terms. In evaluating the proposed merger, the European Commission assumed a narrow oligopoly as the underlying market structure and was therefore concerned about the danger of a collectively dominant market position emerging between the two remaining firms NESTLÉ/PERRIER and BSN.

In the course of this investigation, the European Commission resorted to a so-called multi-criteria approach, according to which a number of market structure criteria are used in order to evaluate the danger of collusive behaviour by the remaining firms (Briones 1995). The starting point of the analysis in NESTLÉ/PERRIER was establishing that a narrow oligopoly consisting of PERRIER, NESTLÉ, and BSN had existed for a long time before NESTLÉ announced its intention to buy PERRIER (Jones/González-Díaz 1992, Collins et al. 1993). The expected, very high post-merger concentration ratio (CR2 = 0.8) would provide prima facie evidence for the establishment of a duopoly. Structural characteristics such as the high homogeneity of products and the high level of market transparency due to published price lists would translate into strong incentives for co-ordinating pricing policy. Moreover, existing symmetries concerning the cost structures of the duopolists, low price elasticity of demand as a result of high consumer loyalty and only marginal residual competition due to high barriers to entry would increase the incentives jointly to maximise profits. The European Commission believed this analysis was confirmed by striking parallels in price setting behaviour that had occurred in the past and by joint aggressive behaviour vis-à-vis foreign competitors trying to enter the French market.
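The concentration figures cited above can be reproduced with standard concentration measures. A minimal sketch, using only the market shares given in the text (the 20% fringe outside the three leaders is ignored; the HHI figures are our own illustration and not part of the decision):

```python
# Reproducing the concentration measures for NESTLÉ/PERRIER.
# Shares are the pre-merger figures cited in the text (PERRIER 40%,
# NESTLÉ 20%, BSN 20%); the remaining 20% fringe is left out.

def cr(shares, n):
    """n-firm concentration ratio: sum of the n largest market shares."""
    return sum(sorted(shares, reverse=True)[:n])

def hhi(shares):
    """Herfindahl-Hirschman index on shares expressed in percent."""
    return sum(s ** 2 for s in shares)

pre_merger = {"PERRIER": 40, "NESTLE": 20, "BSN": 20}
post_merger = {"NESTLE/PERRIER": 60, "BSN": 20}

assert cr(pre_merger.values(), 2) == 60    # CR2 = 0.6 before the merger
assert cr(post_merger.values(), 2) == 80   # CR2 = 0.8, as cited in the text
assert hhi(pre_merger.values()) == 2400
assert hhi(post_merger.values()) == 4000   # the jump signals a near-duopoly
```

The sharp rise in both measures is exactly the prima facie duopoly evidence the Commission relied upon; the following paragraphs ask whether such structural figures alone can carry the prediction of collusion.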
In the past, the duopolists NESTLÉ and BSN had together tried to prevent the take-over of PERRIER by a foreign competitor. From these structural factors, the European Commission concluded that the proposed merger would result in oligopolistic interdependence. Neither buying power, nor potential competition, nor technological innovation could be expected to act as countervailing factors capable of making the dominant position less dangerous. Considering all these factors together, the merger was accepted by the Commission only subject to conditions and obligations.

In its analysis, the Commission took into account all the factors suspected to facilitate collusive behaviour according to economic theory. As spelt out in considerable detail in Chapter IV.4, the theory of collective dominance tries to identify factors facilitating collusive behaviour. One can, however, ask just how solid the basis of such an approach really is. Within the framework of pure structural analysis, efforts are usually confined to the search for factors facilitating co-ordinated behaviour. This alone forms the basis for predicting future collusion. But the possibility of collusive behaviour should not be mistaken for the likelihood of stable co-ordinated behaviour. As we have seen in D.4, it is the stability of collusion that needs to be assessed. Examining only structural factors is not sufficient to make such a thorough assessment. Possibilities for
deviation, its detection, and its effective sanctioning need to be taken into account as well. It is exactly here that we would criticise the Commission’s decision in NESTLÉ/PERRIER. The Commission explicitly analysed the incentives for collusion, but not whether they were sustainable over time, i.e., whether there was a risk of stable collusion. In establishing the possibility of collective dominance, the Commission follows a very traditional structure-conduct-performance approach: it tries to make predictions concerning behaviour (“conduct”) on the basis of a number of structural factors (Phlips 1995, 81).

In NESTLÉ/PERRIER, the Commission based its decision on the so-called Cournot model. According to the model, high degrees of symmetry would ceteris paribus lead to collective dominance. The Cournot model cannot, however, be used to conjecture the actual process of adaptation taking place in a duopoly (Fisher 1989, 114). The Commission’s procedure could even be called eclectic, in the sense that insights gained from particular models are generalised although they apply, of course, only to very specific situations (Mas-Colell 1980, 121). The Cournot model is usually used in its one-shot version, which means that conjectures can only be derived on the basis that the game will not be repeated. Competition is, however, an iterative process in which many actors learn and invent new strategies. This dynamic aspect cannot be grasped with an exclusive emphasis on structural factors analysing a symmetry of interests. On this basis, well-founded statements concerning the possible stability of collective dominance over time are impossible.

It is worth noting that in this case, the Commission did not analyse whether there are effective punishment mechanisms associated with this oligopoly (Etter 2000, 126). Collusive behaviour is only sustainable if deviating behaviour can be not only identified but also effectively punished.
This presupposes that the non-deviating firms have at their disposal sufficient idle capacity to increase the quantity supplied, which would then cause a sharp fall in prices. But if products become heterogeneous, punishment is made more difficult. If one firm deviates in a heterogeneous product market, the other firm has only a limited capacity to punish, since its products are not perfect substitutes for the products of the deviating firm. In such cases, the punishing firm might be punishing itself most. An increasing degree of heterogenisation thus makes collusive behaviour less likely, as the sustainability of collusive behaviour is severely reduced by the insufficient capacity to punish deviation. In its decision, the Commission failed to take this aspect explicitly into consideration.

The description of NESTLÉ/PERRIER primarily serves to present the criteria used by the Commission when it analyses the likelihood of collusive behaviour. It is not our intention to analyse whether the Commission came to the “right” conclusion in this case but rather whether the criteria used by the Commission were in accordance with economic theory. We have shown that in its first decision in which collective dominance played a decisive role, the Commission centred its analysis on structural factors. Factors facilitating effective punishment, or making it more difficult, were not explicitly recognised. Whether this approach, with its heavy
reliance on structural factors, is adequate to understand and evaluate competitive processes appears doubtful.

5.2. KALI&SALZ/MDK/TREUHAND (1993)

In December 1993, the Commission declared the merger between MITTELDEUTSCHE KALI and KALI&SALZ, a subsidiary of BASF, compatible with the Single Market subject to obligations.39 In its analysis, the Commission concluded that the notified merger had effects on two relevant markets, namely the German market for potash fertiliser and the rest of the European market.40 In Germany, the merged firm would secure a market share of 98%. However, the merger was based on a failing company defence, and the Commission concluded that the resulting monopoly position in the German market should not be blocked by European merger policy. The questions concerning the emergence of a collectively dominant position arose with regard to the rest of the European market. Here, the French company SCPA and the newly created entity controlled 80% of European potash production. In order to prevent collective dominance, the two firms had to agree to give up their shared distribution company in France.

The Commission’s conjecture that collusive behaviour could arise between K&S/MDK and SCPA was based on a number of indicators that the Commission had already relied upon in NESTLÉ/PERRIER:
– High market shares and a high concentration ratio
– Price-inelastic demand
– High market transparency
– A high degree of homogeneity of products
– Absence of technological innovations.
Based on various aspects of the prevailing market structure, the Commission concluded that there was a high probability of collusive behaviour emerging. It specifically pointed out the existing business relationship between the two remaining firms. The two market leaders were already co-operating in other regional markets, and spreading parallel behaviour to other markets would thereby be substantially facilitated. This was a reference to existing joint ventures in Canada and France as well as to common membership of an export cartel, which co-ordinates the export of potash to countries outside Europe. Based on this experience, the Commission concluded that some kind of cartel tradition existed in this industry. Analogous to its decision in NESTLÉ/PERRIER, the Commission stated that, based on the structural factors ascertained, there was a high probability of oligopolistic dominance. It did not explicitly deal with the question whether collusive behaviour could be sustained over time.

This doubtful evaluation led the French Government to sue the Commission at the European Court of Justice. In March 1998, the Court annulled the Commission’s decision, claiming that the economic reasoning was not sufficiently well grounded. It specifically emphasised that insufficient account had been taken of the asymmetries between K&S/MDK on the one hand and SCPA on the other, which would make sustainable collusion more difficult. It further indicated that the importance of the existing structural ties between the parties had been overstated by the Commission, as these ties would not necessarily have led to parallel behaviour. Thirdly, it noted that the possibilities for third parties to constrain the two duopolists had not been sufficiently recognised (Venit 1998, 1110).

The differences in capacity between K&S/MDK and SCPA rendered sustained collusive behaviour unlikely. K&S/MDK controls some 60% of the capacity available in the Community, whereas SCPA controls only 20%. Its capacity will, moreover, be exhausted by 2004. This means that SCPA has high incentives to collude. But this is not the case for K&S/MDK with its very substantial reserves. The merged firm has, on the contrary, every incentive to take market share away from SCPA. Due to the very limited capacity of SCPA, the French firm has few means credibly to threaten punishment. Taking these differences explicitly into account, the relevance of the structural ties between KALI&SALZ and SCPA appears less important. Indeed, with the proposed merger between K&S and MDK, they become less relevant still. It is not possible to take pre-merger behaviour as a reliable indicator of post-merger behaviour. Past business relations between competitors are therefore neither a necessary nor a sufficient condition for predicting co-ordinated behaviour. With regard to the possibility of so-called fringe firms exercising a constraining influence on the market leaders, the Court noted that both Coposa (Spain) and CPL (United Kingdom) would have the capability and incentives to undermine collective dominance by way of price competition. The Court believed this to be the case because Coposa had at its disposal significant capacity reserves and CPL had lower average costs.
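The stability logic underlying the Court’s reasoning can be sketched with the standard repeated-game condition: collusion is sustainable only if the discounted value of continued collusion exceeds the one-off gain from deviating followed by punishment forever. The profit figures below are purely hypothetical; the point is that a rival with little capacity to punish (punishment-phase profits close to collusive profits) cannot stabilise collusion.

```python
# A stylised check of the collusion-stability condition discussed in the text:
# under grim-trigger punishment, a firm sticks to collusion only if the
# discounted value of colluding beats a one-off deviation profit followed by
# punishment forever. All profit figures are illustrative assumptions.

def collusion_stable(pi_collude, pi_deviate, pi_punish, delta):
    """True if colluding beats deviating, for discount factor delta in [0, 1)."""
    value_collude = pi_collude / (1 - delta)
    value_deviate = pi_deviate + delta * pi_punish / (1 - delta)
    return value_collude >= value_deviate

# A patient firm facing a harsh punishment (low pi_punish) sustains collusion:
assert collusion_stable(pi_collude=10, pi_deviate=18, pi_punish=4, delta=0.8)
# A capacity-constrained rival that cannot punish hard (pi_punish close to
# pi_collude) leaves deviation effectively unpunished, so collusion collapses:
assert not collusion_stable(pi_collude=10, pi_deviate=18, pi_punish=9, delta=0.8)
```

This is the asymmetry argument in formal dress: because SCPA’s limited capacity leaves it with no credible punishment, K&S/MDK’s deviation incentive is not held in check, and the structural-ties evidence cannot compensate for that.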
In summary, the Commission focused exclusively on structural and thus static factors. The conclusion that high market shares and past business relations between the competitors were sufficient for predicting oligopolistic behaviour does not survive critical scrutiny from a more dynamic point of view. In contrast to the Commission, the Court focused more on dynamic factors that generate incentives for deviation. Moreover, it demanded that the role of fringe firms be more explicitly recognised. But the prevailing asymmetries between the competitors should really be the central point of the analysis, as they make collusive behaviour more difficult – and thus less likely. The Commission focused on static issues. It did not establish a broader framework that would have enabled it to take dynamic factors explicitly into account.

5.3. GENCOR/LONRHO (1996)

On 24 April 1996, the Commission prohibited the merger between the South African firm GENCOR and the British firm LONRHO. Both firms wanted to merge their platinum activities. This merger would bring together the second and third largest producers of platinum worldwide, creating a combined global market share of 28%.41 According to the European Commission, the market for platinum is
the relevant product market, as the substitutability of platinum against other precious metals is low. With regard to the geographical dimension, the world market would be the relevant market, as platinum is mined only in a few places, is easy to transport and is in demand the world over.

The Commission pointed out that the market for platinum was characterised by a narrow oligopoly, with a low intensity of competition even before the proposed merger. After the merger, the only producers left would be Amplats, GENCOR/LONRHO and the Russian mines, and they all – at least in part – had very close ties. The factors that facilitated collective behaviour according to the Commission were (1) high market transparency (platinum is traded only on a small number of commodity exchanges throughout the world); and (2) low price elasticity. After the merger, the new market structure would be an oligopoly between Amplats and the newly merged firm, with a market share of some 70% and control of 80% of known platinum reserves. Russian fringe firms would not have a constraining impact because they were suffering from severe financial hardship and their reserves would be exhausted in just a few years. On the basis of these arguments, the Commission concluded that the emergence of collective dominance was most likely, and it prohibited the merger.

In its decision, the Commission emphasised the relevance of economic factors that would point towards collusive behaviour. The combination of high concentration, stable and symmetric market shares, high symmetry with regard to costs, price-inelastic demand, a high degree of homogenisation of products and a low level of technological change would create important incentives to collude. There was little chance of cheating going undetected and high potential for sanctioning deviating behaviour (Olsson 2000, 22). All these structural factors point to a high probability of co-ordinated behaviour.
In particular, the high transparency in prices that stems from platinum being traded on commodity exchanges would make deviating behaviour immediately visible, and so could be followed by immediate punishment. Additionally, the high barriers to entry into this market would facilitate collective dominance. Extraction and subsequent treatment of platinum are associated with high capital intensity and sunk costs. Moreover, all profitable mines in South Africa are owned by the merging parties, which means that potential competition is unlikely. The Commission concluded that there were considerable incentives for parallel behaviour and it would not be profitable for the parties to deviate from this. In 1996, it was unclear whether collective dominance could be dealt with as part of European Merger Control. Therefore, GENCOR decided to challenge the Commission's decision by taking the case to the Court of First Instance. The Court of First Instance completely agreed with the Commission's decision. In particular, it underlined the Commission's evaluation that deviating behaviour would not be profitable and that effective punishment mechanisms were lacking. The Court thus concluded that there were high incentives for co-ordinated behaviour. In its dictum, the Court introduced an important distinction (Caffara/Kühn 1999, 335). It did not fall under the competence of European Merger Control to predict whether explicit collusion was likely post-merger. This issue had to be dealt
152
CHAPTER V
with by applying art. 81 and 82 TEC. Within European Merger Control, the question was whether implicit or tacit collusion, in the sense of co-ordinated behaviour not explicitly agreed upon, was likely or not. Such an interpretation is not unproblematic. From an economic point of view, it is completely irrelevant whether collusion is explicit or implicit. Both will lead to similar welfare-reducing results. What is decisive for the sustainability of collusion is whether there are credible punishment mechanisms in case one of the parties deviates from (implicit or explicit) collusion. In analysing GENCOR/LONRHO, it would have made sense for the Court of First Instance to look more closely at two issues. In order to implement price increases jointly, the two firms would have to co-ordinate the quantities that they were to offer to the market. Realising higher prices is only possible if quantities are reduced. But this presupposes that firms can vary their degree of capacity utilisation. Moreover, the costs of sub-optimal capacity utilisation must not be high. Additionally, the Court should have analysed whether the two firms were capable of slowing down capacity extension to a greater extent than if the merger did not exist. Assuming that decisions concerning future capacity involve high amounts of capital and cannot easily be revised, the possibilities for deviation and its sanctioning are no longer unequivocally predictable. The firm that moves first in implementing a decision to slow down capacity expansion risks losing market share to its rival – if the rival does not slow down its capacity expansion as well. It can thus be concluded that the Court could have analysed the economic incentives present in GENCOR/LONRHO more closely than it did.

5.4. EXXON/MOBIL (1999)

The merger between EXXON and MOBIL was the first so-called "mega merger" dealt with by the Commission.42 The threat of collective dominance was believed to be relevant with regard to two issues: firstly the global market for exploration and development of oilfields and secondly the German market for distributing petrol for cars. In both these issues, the Commission analysed whether collective dominance was a real danger. Here, we only deal with the second of these issues, i.e., the concerns with regard to the distribution of fuel for cars in Germany. An evaluation of whether the danger of collective dominance was real was made by drawing on structural indicators. The German market for the distribution of fuel appeared already to be characterised by a high degree of concentration on the supply side. The five largest suppliers (Aral, Shell, Esso, Dea and BP) had a combined market share of 85% of the total market. The merger between EXXON and MOBIL would bring Esso and Aral together, which would then mean that four large firms would command a market share of 85%. Moreover, fuel is a very homogenous product with few possibilities for differentiation. From the point of view of the consumer, fuel from the different suppliers is completely substitutable.43 The Commission further assumed that the market for fuels was characterised by a low degree of technological innovation. It thus assumed the market to be rather stable.
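The collusion logic running through these cases, one-period gains from deviation versus discounted losses from subsequent punishment, can be made explicit with a standard repeated-game calculation. A minimal sketch with assumed payoffs (none of the numbers come from the decisions):

```python
def collusion_sustainable(pi_collude, pi_deviate, pi_punish, delta):
    """Grim-trigger condition: colluding forever beats deviating once
    and being punished forever, i.e.
      pi_collude / (1 - delta) >= pi_deviate + delta * pi_punish / (1 - delta)."""
    lhs = pi_collude / (1 - delta)
    rhs = pi_deviate + delta * pi_punish / (1 - delta)
    return lhs >= rhs

def critical_delta(pi_collude, pi_deviate, pi_punish):
    """Smallest discount factor at which collusion is sustainable."""
    return (pi_deviate - pi_collude) / (pi_deviate - pi_punish)

# Assumed per-period payoffs: collusive profit 10, one-shot deviation
# profit 18, punishment-phase (competitive) profit 4.
print(critical_delta(10, 18, 4))                 # 8/14 ≈ 0.571
print(collusion_sustainable(10, 18, 4, 0.6))     # True
print(collusion_sustainable(10, 18, 4, 0.5))     # False
```

The point of the exercise: whether structural factors "facilitate collusion" translates into whether they raise deviation profits, lower punishment severity, or delay punishment, all of which shift the critical discount factor.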
Market transparency with regard to price was high. Prices could easily be seen and compared; market participants could obtain this information without having to incur transaction costs. The market for fuel grows at a very moderate pace, namely only 0.4% a year. This moderate growth also contributes to the stability of the market. On top of these established structural factors, the Commission also analysed the relevance of symmetry between the various suppliers. Here, the Commission ascertained that symmetry in market shares was less relevant than in the other cases discussed here. It was argued that it was rather the symmetry of the cost structures that mattered. Decisive economies of scale are a function of the number and the performance of gas stations. If gas stations are able to realise possible economies of scale, the question of total market share becomes marginal. A chain of petrol stations might have a very high market share but if this is dispersed over a multitude of stations, none of them might be profitable. This means that a symmetric cost structure can lead to asymmetric market shares. As such, this does not imply an incentive to cheat because the possibility of gaining additional market share is constrained if one assumes that the gas stations are already selling at the minimum optimal firm size. Under these assumptions, aggressive fights for additional market share entail a high risk of losing due to the reaction of competitors. What is important here is that all suppliers are vertically integrated and have at their disposal vast sums of capital, which implies that every supplier can react to the behaviour of other members of the oligopoly and would also be able to survive losses for an extended period of time. This means that the risk of price wars would be especially high. The demand curve is rather inelastic in this industry.
This would imply another incentive to collude, which would be amplified by the existing structural relations between the oligopolists. Evaluating the relevance of barriers to entry, the Commission noted that vertical integration was not an insurmountable barrier. This had been shown by a number of successful entries of non-integrated suppliers. On the other hand, there were a number of barriers, which would restrict the effectiveness of potential competition. If one is aiming at successful entry, one needs to open a minimum number of gas stations in order to sell quantities that make entry profitable. This is dependent on high initial investment. The amount of investment necessary has been rising over recent years due to ever-stricter environmental regulation. Most recently, successful entry has only been observed in what are called “supermarket petrol stations”, i.e., petrol stations that are run by huge supermarkets. Supermarkets are able to sell petrol at competitive prices because they have central key account management and an existing administrative structure as well as promising locations. To a certain extent, the market for gas stations is thus contestable. But many gas stations are not able to realise economies of scale, and one can furthermore observe that the number of petrol stations has for many years been shrinking. Market entry does not therefore seem very likely. As spelt out in some detail in chapter IV.4, the evaluation of collective dominance is incomplete if the questions of deviation and sanctioning are not explicitly addressed. Assuming that all members of the oligopoly have high capacities at their disposal, they could immediately react were a member of the oligopoly to deviate.
What is decisive is that not only can they immediately react to a deviation, but also that there are incentives for costly punishment. Under the assumptions of homogeneity and transparency, deviating behaviour can be detected immediately by the competitors of the deviator. The only tool that the competitors have in order to prevent reduced profits is to react immediately to the deviation. Whoever moves first suffers the least lost turnover. This means that all firms have an incentive to react aggressively as soon as deviating behaviour is detected. Assuming that all participating firms can rely on substantial financial resources and there is high excess capacity in the market, such a reaction seems feasible. In a sense, there is a built-in sanctioning mechanism due to the incentive structure in this game. The incentives to deviate can thus be safely ignored and parallel behaviour would be an optimal strategy. That fringe firms effectively constrain the large oligopolists seems rather unlikely. Rather, they act as price takers who usually slightly undercut the big suppliers. Their capacities are insufficient to constrain the large firms effectively. Hence the obligations imposed on MOBIL to divest its share of Aral and to give up its joint venture with BP in the market for lubricants seem justified.

5.5. AIRTOURS/FIRST CHOICE (1999)

The intended acquisition of FIRST CHOICE by AIRTOURS was prohibited by the Commission because it would have led to the creation of a collectively dominant position in the British market for short-haul package tours.44 Both firms were active primarily as tour operators and vertically integrated due to significant activities in the charter and retail business. AIRTOURS also ran cruise liners and hotels, FIRST CHOICE dealt with air capacity for transporting passengers and with rental cars.
In its analysis, the Commission claimed that the proposed merger would lead to oligopolistic dominance by the parties and two other vertically integrated firms, namely Thomson and Thomas Cook. This oligopoly could secure an aggregate market share of 80%. The Commission further stated that package tours are a standardised product with small margins, the supply of which is determined before the start of the season and can later only be slightly adjusted. The profitability of tour operators thus depends on the global equilibrium between supply and demand. In this case, the operators would be united in their interest to constrain supply. Additionally, structural connections existed between them because they sold each others' products and swapped flying capacity. The communication channels between the members of the oligopoly were, in other words, sufficiently strong to enable them to co-ordinate their behaviour. The rest of the market would be made up of small, non-integrated suppliers whose efficiency would be severely limited because they depend on the large suppliers for their access to charter flights and retail services. Based on these considerations, the Commission judged the proposed merger incompatible with the Single Market. In this case, the Commission particularly stressed the high degree of concentration and resulting market power, and the marginal role of the smaller firms that, lacking vertical integration, were not capable of effectively constraining the behaviour of the remaining three large tour operators. Vertical integration equated to a high barrier to entry (Christensen/Rabassa 2001). Among the factors that would facilitate co-ordinated behaviour, the Commission mentioned the high degree of transparency in the marketplace, which was the result of the catalogues being readily available to anybody, and the high degree of homogeneity among products. With regard to short-haul package tours, the Commission concluded that there was a high degree of standardisation in the products. These factors would all contribute to making collusion easy. Concerning the question whether or not collusive behaviour could be stable, the Commission noted that the risk of entering a situation characterised by excess supply would deter the oligopolists from competing on market share. Given that plans regarding the next holiday season were openly available, it was possible to sanction deviations from quantities that had been tacitly agreed upon, making deviation less attractive and sustained collusion thus more likely. Parallel behaviour could be expected with regard to travel capacities. The Commission explicitly pointed out that the oligopolists plan their future capacity taking explicit account of the possible reaction of their competitors. It thus believed that the conditions for sustainable collusive behaviour were present. But these arguments are somewhat unconvincing. Let us start with the structural factors that, according to the Commission, facilitate collusive behaviour. The assumption that holiday tours are homogenous products seems very controversial. They differ according to the kind of trip (recreation, adventure, sports, study tours, etc.), the location, the time, and the quality (for example the hotel category). These differences are reflected in price differences as well as in supply capacity differences. One can thus argue that holiday tours are highly heterogeneous products.
This is a first indicator that the possibility of collusion is severely constrained. In its decision, the Commission further overlooked the fact that market shares in the British market for package tours are not stable, but highly volatile. Volatility in market shares is a very good proxy for a high intensity of competition, which makes parallel behaviour unlikely (Motta 1999, 12). Also contributing to the difficulties of establishing collusive behaviour is the fact that demand is very unstable. The demand for package tours is subject to changes depending on the business cycle, but also on rapid changes in preferences. Tour operators have to plan their offers approximately 18 months ahead of time; if one further recognises that demand is highly volatile, co-ordinating behaviour is very unlikely. The Commission's conclusion that there are substantial barriers to entry to the British market for package tours can also be criticised. Hotel capacities as well as flight capacities are dealt with on a global scale and therefore accessible to everyone. The arrangement of package tours neither presupposes a high degree of sunk costs nor is it particularly capital intensive. Barriers to entry thus seem rather low. The argument that the vertical integration of the three remaining firms would constitute an important barrier to entry does not withstand critical scrutiny either: the answer to the question whether vertical integration (here: the combination of tour operator and airline) constitutes an effective barrier to entry is, at least in part, determined by the intensity of competition in the upstream market.
If there is a high degree of competition in the market for airline passengers, vertical integration might not function as a barrier to entry because newcomers to the market for package tours can switch suppliers easily. This is exactly what we observe in the travel market: the market for passenger capacity is characterised by a high degree of competition. So-called "no frills airlines" have successfully entered the market (Ryanair, EasyJet) and forced the established airlines to establish similar airlines (e.g., Buzz as a division of KLM). Due to this high level of competition, anybody who wants to enter the package tour market can do so without having to be vertically integrated. But the critique concerning the decision of the Commission does not stop here: it was not only the evaluation of structural factors that was problematic but also the lack of any analysis to determine whether deviating behaviour could be sanctioned sufficiently. The issue of threatened sanctions being credible, or not, was simply not dealt with satisfactorily. But if sanctions are not credible, the stability of collusion is at risk. Again: package tour operators have to plan their capacities 18 months ahead. In the short run, only minor modifications are possible. This means that capacity is basically fixed for this period. It constitutes a strong incentive to deviate, as the cheating firm cannot be sanctioned immediately but only after a significant period of time. The deviating firm can use this time to increase its profit to the detriment of the other oligopolists. Hence the Commission's assumption that the oligopolists would share a common interest in constraining supply is not convincing. But even if capacity collusion were possible, one could not infer that collusive behaviour would be the stable outcome (Staiger/Wolak 1992).
If the uncertainties concerning the development of demand are taken into consideration and actual demand lags behind expected demand, then intense price competition could set in. In such a case, every member of the oligopoly would try to undercut his or her competitors. Capacity collusion would thus not work. The Commission's decision thus seems to be economically unfounded. The economic preconditions for realising sustainable collusion were not present. It was not shown that deviating behaviour could be immediately detected (due to the possibility of reductions not printed in the catalogue), nor that additional profits from deviation were low. The fact that losses from sanctioning behaviour are high was not sufficiently taken into consideration, nor was the question whether the capacity to sanction immediately exists at all. It was thus no great surprise that the Court of First Instance sided with AIRTOURS – and against the Commission – in its decision on the case. In its dictum, the Court pointed out that the Commission had not dealt with the issue of whether there was sufficient potential threat to sanction deviating behaviour, nor had it taken the constraining effects of fringe firms adequately into account. The Court thus criticised the weak economic basis on which the Commission had based its decision. The Commission's decision in AIRTOURS/FIRST CHOICE constitutes a relapse. It is based on ad hoc assumptions with regard to structural factors and their impact on the behaviour of oligopolists. The determinants that would facilitate collusion in the long run were hardly mentioned in the decision.
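The role of the 18-month planning lag can be quantified: the longer a deviation goes unpunished, the higher the discount factor needed to sustain collusion. A minimal sketch under assumed per-period payoffs (none taken from the case), solved numerically by bisection:

```python
def critical_delta_with_lag(pi_c, pi_d, pi_p, lag):
    """Smallest discount factor sustaining collusion when punishment only
    starts after `lag` periods: the deviator earns pi_d for `lag` periods,
    then pi_p forever; collusion yields pi_c forever. Condition:
      pi_c/(1-d) >= pi_d*(1-d**lag)/(1-d) + d**lag*pi_p/(1-d)."""
    def sustainable(d):
        lhs = pi_c / (1 - d)
        rhs = pi_d * (1 - d ** lag) / (1 - d) + d ** lag * pi_p / (1 - d)
        return lhs >= rhs

    lo, hi = 0.0, 0.999999
    if not sustainable(hi):
        return None  # collusion never sustainable
    for _ in range(60):  # bisect on the monotone threshold
        mid = (lo + hi) / 2
        if sustainable(mid):
            hi = mid
        else:
            lo = mid
    return hi

print(critical_delta_with_lag(10, 18, 4, 1))  # ≈ 0.571, immediate punishment
print(critical_delta_with_lag(10, 18, 4, 3))  # ≈ 0.830, delayed punishment
```

Tripling the detection-and-reaction lag pushes the critical discount factor from roughly 0.57 to roughly 0.83, which is the formal version of the argument that fixed 18-month capacity plans undermine the stability of tacit collusion.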
5.6. UPM-KYMMENE/HAINDL and NORSKE SKOG/PARENCO/WALSUM (2001)

This case concerns the Finnish company UPM-KYMMENE, which intended to buy exclusive control over the paper producer HAINDL.45 In a second step, UPM-KYMMENE proposed selling both PARENCO, which produces paper in The Netherlands, and another factory in Duisburg-Walsum to NORSKE SKOG. At that time, PARENCO and the factory in Duisburg were owned by HAINDL.46 In its analysis, the Commission identified the market for newspaper paper as the relevant product market. Only wood-free paper, which is used for magazines, is considered to be a different market: all the other qualities belong to the single market for newspaper paper. On a geographical basis, the Commission delineated the EEA as the relevant market. The Commission was interested in one central question, namely whether the proposed merger would lead to a dominant position for the leading firms UPM-KYMMENE, NORSKE SKOG, STORA-ENSO, and HOLMEN. In its decision, it concluded that this was not the case. Concerning the characteristics of the market for newspaper paper, the Commission (Rabassa/Simon/Kleiner 2002) indicated that the competitive process was determined by two factors: in the long run by the parameter capacity, and in the short run by the parameter price. Newspaper paper can be seen as a homogeneous product. The market is highly transparent with regard to price and capacity. Demand is not elastic and follows the business cycle. Regarding barriers to entry, the Commission noted that production technology is freely available but that constructing a paper factory is associated with substantial sunk costs. Barriers are thus to be interpreted as substantial. All existing capacities were being used at peak load at the time the parties notified the Commission. The four leading producers of newspaper paper would together command a market share of some 70%. In wood-free paper, CR3 would equal 0.8.
Due to these characteristics, the Commission concluded that incentives for collective dominance were strong. Collusive behaviour would be possible by way of two mechanisms: (1) through the co-ordination of investments in new capacity in order to constrain total market capacity and to be able to realise higher prices, and (2) through the co-ordination of output reductions in case of short-term slumps in demand, in order to stabilise prices at the old level. With regard to the second mechanism, the Commission concluded that short-term co-ordination of output quantities was impossible since deviations were difficult or even impossible to detect. Therefore, incentives to cheat would exist and they would be amplified by the fact that sanctioning deviating behaviour would be extremely costly, and thus not credible. Idle capacity is generally not sufficient to release huge new amounts of paper onto the market if a firm deviates from the implicit agreement. The Commission thus concluded that output co-ordination was unlikely. The analysis with regard to the first mechanism was slightly more complicated. The Commission had to answer the question whether the high degree of capacity
utilisation had already been the result of co-ordinated behaviour. If competitors are successful in controlling capacity, co-ordination with regard to prices and produced quantities is superfluous, since prices will – due to the limited capacity – remain high. Analysing investment behaviour is no simple feat. The underlying conditions of an entire market are determined by investment behaviour; at the same time, business conditions change perpetually, and it is difficult to integrate these factors into game-theoretic models. The starting point of the analysis must be that capacity collusion only makes sense if the decisions concerning additional capacity are at least partially irreversible (Compte/Jenny/Rey 2002). If this were not the case, firms could not credibly commit to delay investments in additional capacity. In such a situation, every firm would have an incentive to undercut the prices of its competitors as the necessary capacity could be installed and market share gained without much delay. But one should bear in mind that the possibility of co-ordination is also restricted by the presence of irreversible factors. Irreversibility makes the sanctioning of deviating firms more difficult. If a firm constructs a new plant with high capacity, the firm is – due to irreversible factors – bound to this capacity. The other firms have two options to react to this newly erected capacity: they can either punish the deviating firm by increasing capacities themselves, which would, however, lead to decreasing prices – and also profits. Or they could refrain from adding capacity themselves, in order not to become less profitable. Sanctioning a deviating firm by increasing capacity is thus extremely costly, and it cannot be expected to occur often. The punishment option is thus not very credible.
All oligopolists would be better off if nobody erected a new plant, but for every individual participant in the market, the incentive to gain additional market share by adding capacity dominates the "refrain from investment" option. Capacity collusion is thus not stable. Here, irreversible factors play a crucial role in the decision to cheat because they determine that deviation can be profitable not only in the short run. We can thus expect attempts to be the first mover in a market, as this is the only way to increase profits. These theoretically derived conditions seem to be met in the market for newspaper paper. Building new factories necessitates substantial investment, with costs for a new factory running between 300 and 500 million euros and an expected production life of some 30 years. In order to co-ordinate successfully, the oligopolists would have to agree on the sequence of investments. If deviation occurs, at least one member of the oligopoly must be ready to invest, although this would lead to lower profits across the entire industry. The market for newspaper paper grows at a very slow rate, and it would take between six and eight years to punish the deviator effectively. Long-term capacity collusion therefore does not seem sustainable. This instability of capacity collusion can be nicely illustrated by drawing on the Stackelberg game. Suppose for simplicity that an industry can be characterised as duopolistic. Both firms now have the option to invest in additional capacity or to refrain from additional investment. The second option would be equivalent to capacity collusion. If either of the two firms invests in additional capacity and the other firm does not respond with its own investment, this firm has the capacity to gain market share at the expense of the other firm. Being the first to invest is thus connected with so-called first-mover advantages. If the other firm responds by investing, prices will drop and both firms will incur profit losses. If one assumes that the firms do not act simultaneously but rather sequentially, the game can be written in the following way (payoffs as (A, B); A moves first, B responds):

                    B: Invest      B: Not invest
A: Invest           (-5, -5)       (8, 2)
A: Not invest       (2, 8)         (5, 5)
Suppose firm A manages to move first and invests. If A has invested, the best response of B is not to invest (B having the choice between a utility level of "-5" and "2"). Since A knows that B's best response to A's investment is not to invest, A has an incentive to invest. If it succeeds in moving first, it can secure a utility level of "8", which is the best possible outcome of this game for A. Of course, it was just an assumption that A moves first. In reality, both firms have an incentive to try to secure a first-mover advantage. This means that capacity collusion appears unlikely in situations that can be depicted with the Stackelberg game. This argument shows that the Commission was not able to prove that collective dominance would occur in this case. This meant that the merger had to be approved. This case might have considerable future effects, as it questions the possibility of collusion in industries in which irreversible investments play an important role. It could thus become relevant for the chemical industry, the steel industry and others.
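The backward-induction argument can be checked mechanically. The sketch below encodes the illustrative payoffs from the text and solves the sequential game: B picks its best reply to each move of A, and A, anticipating this, picks the move that maximises its own payoff:

```python
# Backward induction on the sequential capacity game: A moves first,
# B observes A's move and responds. Payoffs are (A, B), as in the text.
PAYOFFS = {
    ("Invest", "Invest"): (-5, -5),
    ("Invest", "Not invest"): (8, 2),
    ("Not invest", "Invest"): (2, 8),
    ("Not invest", "Not invest"): (5, 5),
}
ACTIONS = ("Invest", "Not invest")

def best_response_B(a):
    """B's optimal reply once A's move is observed."""
    return max(ACTIONS, key=lambda b: PAYOFFS[(a, b)][1])

def solve():
    """A anticipates B's reply and picks the action maximising A's payoff."""
    a_star = max(ACTIONS, key=lambda a: PAYOFFS[(a, best_response_B(a))][0])
    b_star = best_response_B(a_star)
    return a_star, b_star, PAYOFFS[(a_star, b_star)]

print(solve())  # ('Invest', 'Not invest', (8, 2))
```

The solver reproduces the outcome described in the text: the first mover invests, the follower accommodates, and the (5, 5) "capacity collusion" outcome is never reached.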
CHAPTER VI: PRACTICAL PROPOSALS
1. INTRODUCTORY REMARKS

This is the last chapter of a lengthy study. We have therefore decided not to present an extensive summary. Instead, this chapter contains only two parts: in the first part, the substantive reform proposals for improving the predictability of European merger policy as discussed in chapter IV are simply reiterated in a summary fashion. In the second part, some proposals for how predictability could be increased by improving the procedures used by the European organs are presented.

2. OVERVIEW OF SUBSTANTIVE PROPOSALS AS DEVELOPED IN CHAPTER IV

Proposal #1: Delineate the Relevant Market, Taking both Demand and Supply Side into Account

Our analysis has shown that in order to delineate the relevant product market, the Commission relies heavily on the demand side. This has often led to an overly narrow definition of the relevant market. The consequence of this narrow approach is that the application of the SSNIP test remains incomplete: if prices were raised by 5 or 10%, new competitors might be induced to enter the market and the price increase would thus turn out to be unprofitable. The current practice is thus incoherent and should be modified. Some of the business trends described above clearly pointed to the increased relevance of supply-side substitutability. Many steps in the value chain of a firm have been outsourced over recent years. Often, this has been achieved via management buy-outs or spin-offs. The newly arisen independent suppliers are frequently no longer confined to working exclusively for their former "parent company", but operate as independent suppliers in the market. Their products can thus be bought by anybody. This possibility of outsourcing many steps in the value chain means that it has become easier for entirely new firms to enter into a market, as they do not have to make heavy investments.
Supply-side substitutability has therefore increased in relevance and should be routinely taken into account by the Commission in its delineation of the relevant market.
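The incompleteness the proposal points to can be illustrated with a back-of-the-envelope SSNIP check: a hypothetical monopolist's 5% price rise is profitable only if not too many sales are lost to substitutes or to entrants. All numbers below are assumed for illustration:

```python
def ssnip_profitable(price, cost, quantity, price_increase, qty_loss_fraction):
    """Would a hypothetical monopolist profit from raising price?
    Compares profit at the current price with profit after the increase,
    given the fraction of sales lost to substitutes or new entrants."""
    profit_before = (price - cost) * quantity
    new_price = price * (1 + price_increase)
    new_qty = quantity * (1 - qty_loss_fraction)
    profit_after = (new_price - cost) * new_qty
    return profit_after > profit_before

# Assumed numbers: price 100, unit cost 70, 1000 units, a 5% SSNIP.
print(ssnip_profitable(100, 70, 1000, 0.05, 0.05))  # True: candidate market stands
print(ssnip_profitable(100, 70, 1000, 0.05, 0.30))  # False: widen the market
```

The proposal's point maps directly onto the `qty_loss_fraction` parameter: ignoring supply-side substitution understates the sales lost after a price rise, which makes the hypothetical increase look profitable and the candidate market look deceptively narrow.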
Proposal #2: Assessing the Importance of Customer Loyalty in Delineating the Relevant Market More Systematically

Quite frequently, the European Commission decides to delineate separate product markets if customer loyalty with regard to certain products exists. This often leads to too narrow a delineation of markets. The way customer loyalty is taken into account often appears unsystematic. Based on sound economic reasoning one would ask in what situations customer loyalty leads to significant gaps in substitution given that products are functionally equivalent. If gaps in substitution are substantial, then the delineation of separate product markets seems reasonable. We propose to take customer loyalty into account only if a transaction cost rationale for it can be named. We thus propose to distinguish between rationally explainable and rationally unexplainable customer loyalty and argue that only the first kind should play any role in merger policy. It is proposed that customer loyalty only be used in order to delineate different product markets if one is dealing with durable experience goods or trust goods. For search goods and non-durable experience goods, reliance on customer loyalty will lead to somewhat spurious delineations (section D 2.5.1 contains a more systematic treatment of this issue).

Proposal #3: Taking the Geographical Dimension Adequately into Account

It was shown that deregulation and privatisation have occurred on a worldwide scale. It was concluded that international interaction costs have, in many industries, been reduced to such an extent that one can talk of truly global markets. This does not seem to be adequately reflected in the way merger policy is put into practice. Very often, markets are still delineated on a much narrower scale. What is crucial for the geographical distinction is the possibility of border-crossing trade, in particular imports. It is thus insufficient to look at actual trade statistics.
What should, instead, be taken explicitly into consideration is the sheer possibility of trade. This can be ascertained by analysing the relevant transport costs as well as costs due to state-mandated barriers to entry. This procedure thus explicitly acknowledges the close relationship between defining the relevant market and ascertaining the relevance of barriers to entry. Predictability could be further advanced if the Porter classification of completely globalised, partially globalised, and regional markets were explicitly recognised in the definition of geographical markets. If the firms knew ex ante how their industry was classified, predictability would be greatly increased.

Proposal #4: Taking the Time Dimension Adequately into Account

In a rapidly changing business environment, reliable predictions concerning future market shares have become virtually impossible. But a responsible merger policy needs to take likely developments into account: if the high market share resulting from a merger today is not very likely to persist tomorrow, the merger should be passed. Drawing on market shares in merger analysis is based on the hypothesis that they can be translated into market power and can thus be used to the detriment of consumers. But what if the equation "high market share = low consumer rent" does not
PRACTICAL PROPOSALS
hold anymore? Rapid change induced either by supply or by demand side factors (or by both) can prevent the emergence and the use of market power because it leads to the possibility of unstable market structures. Rapid change is thus the crucial variable. Structure could only be consolidated – and probably used in order to raise prices and restrict output – if change is slowed down. Firms with a dominant position might thus have an incentive to try to slow down change. But often, they will not be in a position to succeed in that endeavour: if they have competitors who expect to gain by innovating, they will not be successful. If consumption patterns are subject to rapid change, they will not be successful either. We thus propose that competition authorities analyse (1) the speed of change in a given industry and (2) the actors in control of the factors that are responsible for rapid change in that industry. Ascertaining the speed of change in a given industry is, of course, not easy. Counting the number of patents will not do, as some innovations never get patented and as new products do not necessarily have to rely on patentable knowledge (SONY created the Walkman drawing on readily available techniques). As already mentioned, rapid change can be due to supply side factors, but also to demand side factors. Demand side factors are certainly not beyond any influence from the supply side (marketing), but are difficult to control. In some cases, control over necessary inputs (resources, patents, public orders, etc.) can seriously constrain the capacity to be innovative. In such cases, merger policy should be restrictive. If the parties willing to merge do not, however, control the process, the merger should pass even if a highly concentrated market structure is the – temporary – result.
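The two-part analysis proposed here – the speed of change in an industry, and whether the merging parties control its drivers – can be condensed into a rough decision sketch. All thresholds, names, and the numeric index of change are purely illustrative assumptions on our part, not part of the proposal itself; in practice each input would be the outcome of a substantive inquiry.

```python
# Illustrative sketch of the two-step test in Proposal #4.
# The threshold and field names are hypothetical assumptions.

def rapid_change_defence(speed_of_change: float,
                         parties_control_drivers: bool,
                         threshold: float = 0.5) -> bool:
    """Return True if a high post-merger market share should be
    discounted because the current market structure is unlikely
    to persist.

    speed_of_change: an industry-level index in [0, 1], however
    measured (patent counts alone will not do, as noted above).
    parties_control_drivers: True if the notifying parties control
    the inputs (resources, patents, public orders, ...) that drive
    change in the industry.
    """
    if speed_of_change < threshold:
        return False   # stable structure: market shares stay informative
    if parties_control_drivers:
        return False   # parties could slow change down: be restrictive
    return True        # rapid, uncontrolled change: the merger should pass


# A fast-moving industry whose drivers the parties do not control:
assert rapid_change_defence(0.8, parties_control_drivers=False)
# The same industry, but the parties control the key inputs:
assert not rapid_change_defence(0.8, parties_control_drivers=True)
```

The sketch makes the asymmetry explicit: rapid change only excuses a concentrated structure when the merging firms cannot themselves slow that change down.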
Proposal #5: Assessing the importance of asset specificity

In Transaction Cost Economics, (1) asset specificity, (2) uncertainty, and (3) the frequency of interactions all play an important role in determining the optimal governance structure. It is assumed that firms try to economise on transaction costs and that unified governance – i.e. large firms – could be the result. This means that transaction cost arguments are basically efficiency arguments. They are dealt with separately here because they are intimately connected with one specific theoretical development, namely Transaction Cost Economics. Once invested, highly specific assets make the firm that has invested in them subject to opportunistic behaviour by its interaction partners. This might lead to investment rates below the optimal level. It can therefore be in the interest of both sides to agree on specific governance structures in order to reduce the risk of being exposed to opportunism. This insight has potential consequences for merger policy: the higher the degree and amount of specific assets involved, the greater the justification for a unified governance structure, in this case for a merger. In order to take asset specificity explicitly into account, one needs either to measure it or to use proxies for it. Four kinds of asset specificity are usually distinguished, namely (1) site specificity (costs of geographical relocation are great), (2) physical asset specificity (relationship-specific equipment), (3) human asset specificity (learning-by-doing, especially in teams comprising various stages of the production process), and (4) dedicated assets (investments that are incurred due to
CHAPTER VI
one specific transaction with one specific customer). Physical proximity of contracting firms has been used as a proxy for site specificity (e.g. by Joskow 1985, 1987, 1990 and Spiller 1985) and R&D expenditure as a proxy for physical asset specificity. With regard to both human asset specificity and dedicated assets, survey data have been used. It is thus possible to assess asset specificity empirically. Since the merger rationale in cases of high asset specificity is quite straightforward, it should be taken into account explicitly.

Proposal #6: Assessing the importance of uncertainty

The theoretical argument concerning uncertainty has great similarities with the argument concerning asset specificity: even if interactions could be made beneficial for all parties concerned, they might still not take place if too high a degree of uncertainty were involved. In such cases, welfare could be increased if the interested parties were allowed to form a common governance structure in order to cope with uncertainty. With regard to merger policy, this means that in mergers in which uncertainty plays an important role, the evaluation should be somewhat less strict than in cases in which uncertainty is marginal. Developing definitive empirical evidence of uncertainty is no mean feat. In the literature, various proxies have been discussed, volatility in sales being one of them. Others (Walker and Weber 1984, 1987) have proposed focusing on one specific kind of uncertainty, namely “technological uncertainty”, measured as the frequency of changes in product specification and the probability of technological improvements. Given that technological uncertainty seems to have increased dramatically, it seems worthwhile to take it explicitly into account. The argument is that mergers are more likely in markets with high uncertainty, as proxied by high volatility in sales or high technological uncertainty. These mergers are potentially welfare increasing and should thus be passed.
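Volatility in sales, one of the proxies discussed above, could for instance be operationalised as the coefficient of variation of periodic sales figures. The following is only a minimal sketch; the sales series and any cut-off separating “high” from “marginal” uncertainty are illustrative assumptions:

```python
# A simple operationalisation of "volatility in sales" as a proxy
# for uncertainty: the coefficient of variation of periodic sales.
from statistics import mean, stdev

def sales_volatility(sales: list[float]) -> float:
    """Coefficient of variation of a series of periodic sales
    figures: sample standard deviation divided by the mean."""
    return stdev(sales) / mean(sales)

# Hypothetical quarterly sales of two firms:
stable   = [100, 102, 98, 101, 99]
volatile = [100, 160, 60, 140, 40]

assert sales_volatility(stable) < 0.05    # low uncertainty
assert sales_volatility(volatile) > 0.4   # high uncertainty
```

A merger in the second kind of market would, on the argument above, deserve a somewhat less strict evaluation than one in the first.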
Proposal #7: Assessing the importance of frequency

Frequency is the last component of the Transaction Cost Economics trio of asset specificity, uncertainty, and frequency. The argument is that the more frequently interactions between two specified parties take place, the higher the potential benefits from a unified governance structure. The implications for merger policy are obvious: assess the frequency of interactions between parties willing to merge. The more frequently they interact, the higher the chance that efficiencies could be realised – and the more relaxed the competition policy stance should be.

Proposal #8: The assessment of the relevance of potential competition should be structured into four steps

The assessment of the relevance of potential competition could be structured into four steps. The first step consists of ascertaining the absolute advantages of the established firm or firms. The primary focus would be on absolute cost advantages that are the result of exclusive access to a vital input or of state-mandated barriers to entry.
The second step could consist of checking whether the potential competitors possess any cost advantages over the established producers that would allow them to (over-)compensate for possible disadvantages in other areas. The source of such cost advantages could be more cost-efficient capacity utilisation or economies of scope in research and development or distribution. Lower labour and/or finance costs are another potential source of relevant cost advantages.

[Figure 14: A Four-Step Procedure for Assessing Barriers to Entry – (1) assessment of absolute advantages enjoyed by incumbents; (2) assessment of relative costs; (3) assessment of sunk costs; (4) reaction time and contract length]
The next step would consist of an evaluation of sunk costs. The last step would consist of estimating the time that the incumbents would need in order to react to the entry of a new competitor. If the incumbents can react immediately, market entry is less attractive. If the market is characterised by long-term contracts between incumbents and consumers, this constrains the short-term flexibility of incumbents to react to a possible entry. The newcomer then enjoys an advantage because it can offer customers better conditions. The existence of long-term contracts is therefore not always a barrier to entry, because it reduces the possibilities for incumbents to react quickly to newcomers. It is necessary to qualify this, however: the argument is based on the assumption that the relevant market is growing, in the sense of an increasing number of consumers. Otherwise, newcomers would have a hard time finding contracting partners. This schematic four-step procedure could be incorporated into the evaluation of potential competition within merger control. The advantages of such a standardised procedure would be that economic considerations would be given their due place and that firms willing to merge could make an educated guess concerning the expected decision of the Commission. It would thus lead to an increase in predictability.

Proposal #9: Increasing predictability in the assessment of collective dominance

The evaluation of mergers that are connected with the danger of collective dominance is highly uncertain. Up till now, no generally accepted economic approach has emerged that would be capable of adequately dealing with the complexities involved
in cases of collective dominance. So far, the Commission has not been able to deal with the limited available knowledge in a pragmatic fashion. The bias towards structure-based approaches does not take sufficient account of the trends in the business environment that have occurred in recent years. The limited knowledge notwithstanding, legal certainty and predictability are necessary for all parties concerned. That is why we propose the introduction of a standardised and transparent procedure. It should tightly constrain the discretionary leeway of the Commission. What criteria should be used in such a procedure? On the basis of both inductive as well as game-theoretically based industrial economics, a proposal concerning their possible contents is developed here. We propose a checking scheme that is based on three steps. Firstly, it should be checked whether an agreement among firms, no matter whether explicit or implicit, is likely at all. Secondly, it should be checked whether there is any chance that such an agreement could be stable over time. The first two steps deal with the interactions among the firms possibly participating in an agreement. The third step focuses on possible factors external to them that could effectively restrain their behaviour. The checking scheme could look like this:

Step 1: Agreement
– What factors are conducive to agreement among firms?

Step 2: Sustainability
– Are there any incentives to cheat, i.e. not to behave as the (explicit or implicit) agreement would suggest?
– Is it possible to detect deviating behaviour?
– Can a substantial punishment credibly be threatened?

Step 3: Dominance-Reducing Factors
– How strong would the remaining competitors be?
– How strong is the bargaining position of the other side of the market (usually the demand side)?
– How strong is potential competition?
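The three-step scheme can be read as a simple conjunction of conditions. In the following sketch each boolean argument is, of course, only a stand-in for the substantive analysis the corresponding question requires; the function name and signature are our illustrative assumptions:

```python
# Illustrative condensation of the three-step checking scheme
# for collective dominance. Each argument summarises one question
# of the scheme; in practice each would itself be the outcome of
# a detailed inquiry.

def collective_dominance_likely(agreement_conducive: bool,
                                incentives_to_cheat: bool,
                                deviation_detectable: bool,
                                credible_punishment: bool,
                                strong_external_restraints: bool) -> bool:
    # Step 1: is an (explicit or implicit) agreement likely at all?
    if not agreement_conducive:
        return False
    # Step 2: could the agreement be stable over time? If there are
    # incentives to cheat, stability requires that deviations can be
    # detected and credibly punished.
    sustainable = (not incentives_to_cheat
                   or (deviation_detectable and credible_punishment))
    if not sustainable:
        return False
    # Step 3: do remaining competitors, buyer power, or potential
    # competition effectively restrain the participants?
    return not strong_external_restraints
```

The order of the steps matters for procedure, not for the result: a negative answer at any step ends the inquiry, which is precisely what makes the scheme predictable for the notifying parties.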
This is only a rough outline of the scheme proposed here. The three steps are described in more detail in section 4.5 of chapter IV.

3. PROCEDURAL PROPOSALS

It has often been noted that the separation of powers is only very insufficiently realised in the European Union. This is especially true with regard to European merger policy. In states where the separation of powers is implemented, some functions are carefully separated from other functions. For political decisions to be made, one often needs the consent of at least two of the different organs involved. These need not necessarily belong to different branches in the sense of Montesquieu, who distinguished between the legislature, the executive, and the judiciary. Many systems even contain checks and balances within a single branch: bicameral systems in which legislative functions are allocated to a lower and an upper house are an example; the U.S. Constitution implies a system in which the legislature is even made up of three houses, namely the House, the Senate, and the President, whose consent to many legislative bills is crucial for them to be passed. Over the last couple of years, research on the important role of so-called “non-majoritarian” institutions (Majone 1996; Voigt and Salzberger 2002 provide a conceptual overview) has become a cottage industry of its own. It started with the analysis of the effects of independent central banks on price stability. In the meantime, the effects of many other independent agencies have been analysed, including all kinds of regulatory agencies. The effects of (independent) competition authorities have, however, not been at the top of the research agenda. In generating the procedural proposals, the insights produced by this research programme have often been very helpful.

Proposal #10: Strengthening the separation of powers

The “value chain” of European merger policy can be meaningfully separated into six different functions (see Neven et al. 1993, chap. 8):

Notification → Investigation → Negotiation → Decision-Making → Political Review → Judicial Review
In principle, one could think of six different bodies responsible for the relevant steps. One could also think of combining two or more of the functions spelled out here and allocating them to two or three different bodies. Neven et al. (1993, 232) have pointed out that currently, only two separate bodies are involved in the six functions outlined in the “value chain”. The first five functions are taken care of by the European Commission; judicial review is allocated to the European Courts. If one wants to be precise, one could even claim that the last step in the value chain is allocated to two different bodies, namely the Court of First Instance and the European Court of Justice as the appellate body. On the other hand, de facto decision-making lies almost exclusively with the Commission: the time needed to get a court decision is so long that having a decision reviewed by the judiciary is not an economically meaningful option for many firms. As is easily seen, the traditional notion of checks and balances is only very imperfectly implemented in European merger policy. The first four steps are often in the hands of a staff team assembled in order to deal with a specific case. The dangers of such an allocation of competences should be obvious: if the team that investigates a notified merger is also the team that decides on it, the incentives to investigate the case thoroughly and impartially are seriously flawed. If it is me who decides in the end, why should I take pains to collect evidence and to evaluate it?
Instead of discussing a variety of possible ways to dilute this undue concentration of powers in very few decision-makers, we just want to emphasise that a separation between the two important steps “investigation” and “decision-making” would seem to be a prima facie solution. Just think of the following arrangement: a case team investigates a notified merger and then has to plead its case in front of a decision-making body, which could be part of the Commission but which should be independent from the case team. Kühn (2002) has recently reminded us that the separation between state prosecutors and judges in criminal law entails excellent incentives for the prosecutor to do a good job: in order to convince the judge that he has got his act together, he will thoroughly prepare the case at hand. One could think of a similar solution in merger policy: members of the case team would plead their case in front of a decision-maker, and the notifying parties would also have a chance to argue in favour of the intended merger. For the decision-maker to be really independent, her career prospects should not depend on superiors within DG Comp, as that would make her cater to their preferences. Investigative tasks would centre on fact-based evaluations: What is the relevant market? What are the market shares of the relevant competitors? Can potential competition be expected from firms already present in other geographical areas? Or from companies offering related products? Are barriers so low that almost anybody could enter the market? Based on these fact-based evaluations, the case team could propose a decision to the decision-maker. Decision-making tasks would include (1) the decision to open phase II, (2) the decision to evaluate a merger as incompatible with the Common Market, and (3) the definition of conditions and obligations that need to be fulfilled before a merger can be consummated. Additionally, they could include presiding over public hearings.
The Commission has recently introduced a peer review of cases (Drauz 2002, Ryan 2003). Before the case team takes a decision, the soundness of its analysis and conclusions is subject to peer review. Prima facie, this sounds like a good idea – and like an improvement over the former state of affairs. Yet what incentives do the peer reviewers have to be hard on their colleagues? There are many informal rules according to which openly criticising colleagues is frowned upon and can even lead to the critic being ostracised. As long as these informal norms are not counteracted by hard incentives, the effect of peer review will be rather marginal. If, however, career prospects were substantially improved after having initiated a revision of a proposed decision via peer review, this instrument might have some beneficial effect. Additionally, the weight that will be attached to critical peer review needs to be clarified: does the case team only have to listen, does it have to modify its decision, or does it have to accept the criticism in its entirety? Likewise, the Commission has created the office of a Chief Economist, who is in charge of ensuring that decisions reflect the state of the art in competition theory. There is thus an additional person who takes a “fresh look” at merger decisions. A person who analyses cases from a different angle is certainly useful. Two caveats need, however, to be made: firstly, since the Chief Economist is not granted any formal competence in decision-making, the separation of powers remains formally unchanged. Secondly, even if he were granted some competence in decision-making, it is not
clear whether this would mean improved predictability. Predictability can best be enhanced by a transparent merger policy that is based on universal rules. The effect of both of these changes is thus unclear. It is, moreover, noteworthy that these procedural changes are part of neither the Merger Regulation nor the Guidelines on Horizontal Mergers. In the press release of the Commission (Commission 2002), they are simply called “non-legislative measures”, implying, of course, that they are not binding and that their implementation cannot be fought in court. The introduction of a peer review as well as of a Chief Economist should be based on documents having legal status. Announcing them in press statements or contributions to academic journals does not seem to be a sufficient legal basis. What has, furthermore, not been made sufficiently clear in such statements are the precise competences these actors are to have: In what cases are they to intervene? When exactly will their advice be solicited? Drauz (2002) announces that “... peer review ‘panels’ will be used systematically in all phase 2 cases”, but does not provide the reader with analogous information concerning the Chief Economist.

Proposal #11: Increasing the transparency of decision-making in the College of Commissioners

The value chain depicted above reminds us that, officially, the decision made by DG Comp is still subject to political review. In that sense, the European competition authority is not an independent agency. If the members of DG Comp do not want their decisions to be overturned by the College of Commissioners, they had better take the commissioners’ (expected) preferences concerning particular cases into account when deciding upon them. According to the principle of collegiality, no decision in which there is discretion should be taken by one commissioner only. But the College is composed of a number of commissioners who represent very different functional areas.
Additionally, they come, of course, from all the Member States of the EU. Decisions made in the College of Commissioners might therefore be influenced by considerations other than competition. Not many cases in which the College took a merger decision are publicly known. One can probably assume that not many decisions are really taken by the College. Yet the mere possibility of merger cases being decided by the commissioners in their entirety induces unnecessary uncertainty and leads to a decrease in predictability. At the same time, the possibility that a case is reviewed on ideological or national-interest grounds reduces the incentives of the staff team to do a good job: if there is a possibility that their arguments will not count, why should they bother to produce good arguments? The College of Commissioners should thus not be concerned with standard merger cases. This does not, however, exclude the possibility that a political body – such as the College – be made a review body. According to such a procedure, standard merger cases would end with the decision-making step performed by an independent decision-maker. If the notifying parties wish to have the case reviewed, they could be given the option to turn to a political review body – or directly to judicial review. If they believe that the decision-maker has made a legal mistake, they would turn directly to the Court of First Instance.
But there may be cases that involve complex political trade-offs, e.g. if a firm claims that a merger is in the general European public interest. The treaty in its current form already allows for such arguments; art. 3 is the most frequently cited one in this respect. Giving the decision-making body the competence to decide upon such claims would make it subject to enormous pressure. This is why the reform of the College into a political review body would make sense. Even if the attempt to get a merger decision reviewed by the political review body fails, judicial review should still be possible. In such a case, judicial review would involve checking not only the conformity of DG Comp’s decision with the Merger Regulation, but also the conformity of the College’s decision with the broader goals of the Community as spelled out in art. 3 of the treaty. Implementation of this proposal would reduce the possibility that motivations alien to competition policy influence a decision. But it would also mean that overwhelmingly important political considerations could, nevertheless, play a role. By clearly separating standard decisions in the area of competition policy from non-standard political decisions, predictability should, overall, be increased.

Proposal #12: Improving incentives for quality work by a separate decision-making body

Economists know that people behave according to the incentives set up by the formal – and often also the informal – rules of the game. Members of DG Comp do not constitute an exception in that regard. If one wants them to thoroughly investigate a case and come to a well-founded conclusion, one needs to set up an incentive structure that rewards such behaviour. Career prospects should thus depend on the quality of the dossiers that the case-team members prepare. Currently, promotion possibilities seem to depend more on tenure than on merit, as in many state-run administrations. The creation of such an incentive structure is not without problems.
It presupposes that the quality of decisions can be ascertained unequivocally. This might appear to be an insurmountable problem. Yet, if one is ready to introduce elements of checks and balances within DG Comp as proposed above, the quality of a case team responsible for the investigation of a case can be evaluated based on the decisions of the (separate) decision-making body. An analogy might help to clarify the point: in principle, the quality of state prosecutors could be ascertained by checking to what degree judges follow them in their pleas. But what about the quality of the decisions made by the decision-making body itself? If it were unconstrained, it would probably not care much. This would, in turn, undermine the possibility of evaluating the quality of the dossiers prepared by staff teams. The decision-making body could be constrained by the ease with which cases can be taken to the Courts. In chapter I of this study, we saw that the average time needed before a case is decided by the Court was 33 months and that this was a deterrent against taking cases to the Court in the first place. Time lags between taking a case to the Court and getting a decision thus need to be reduced radically. We will dwell upon this possibility a bit in the next section.
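The prosecutor analogy suggests one simple quantitative proxy for case-team quality: the rate at which the independent decision-making body follows the team's proposed decisions. The data structure below is a hypothetical assumption used only to make the idea concrete:

```python
# Sketch: case-team quality proxied by the share of proposals
# that the (separate) decision-making body followed. The record
# format (proposed, decided) is a hypothetical assumption.

def follow_rate(cases: list[tuple[str, str]]) -> float:
    """Share of cases in which the decision-maker's decision
    matched the case team's proposal."""
    if not cases:
        return 0.0
    followed = sum(1 for proposed, decided in cases if proposed == decided)
    return followed / len(cases)

record = [("prohibit", "prohibit"), ("clear", "clear"),
          ("prohibit", "clear"), ("clear", "clear")]
assert follow_rate(record) == 0.75
```

Such a measure is only informative, of course, if the decision-making body is itself disciplined, e.g. by swift judicial review, which is exactly the point made above.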
Proposal #13: Implementing a general rule allocating notified mergers to staff

In many rule-of-law states, citizens have a right to “their” lawful judge. There, it is not a chief judge who allocates cases by fiat to the members of her court; rather, there is a general rule according to which cases are allocated. This is meant to secure the impartiality of the judge and to reduce the potential of the chief judge to put pressure on the judges of her court. In order to ensure the impartiality of the staff dealing with specific merger cases, one should think of a similar rule with regard to European merger policy. There are, of course, many possibilities, such as allocation by the first letter of the parties notifying a merger, or the last letter, or something similar. As judges can be “prejudiced”, a similar notion should be introduced with regard to merger policy. If a member of DG Comp has some personal stake in one of the companies involved or the like, that member should be recused from dealing with the case. It would also seem to make sense to think about a rule stipulating that no member of a team dealing with a specific merger originates from one of the home countries of the notifying parties. In order to increase the expertise of the staff dealing with specific cases, one could also think about the introduction of departments within DG Comp based on specific industries. But the creation of industry-specific departments within DG Comp would also have some disadvantages. As these departments would be fairly small, members would develop ongoing relationships with large firms that frequently notify mergers. This entails the danger of factors becoming important that are unrelated to competition issues. In case of sympathy, mergers would pass more easily; in case of antipathy, notifying firms would have a hard time. There is thus a clear trade-off involved: the relevance of industry-specific expertise needs to be weighed against the danger of personal relationships.
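A general allocation rule of the kind discussed here – deterministic, publicly announceable, with exclusions for conflicted or same-country staff – could be sketched as follows. The staff record format and the use of a hash in place of the "first letter of the notifying parties" are illustrative assumptions:

```python
# Sketch of a deterministic case-allocation rule with recusal and
# nationality exclusions. Staff records are hypothetical:
# {"name": ..., "country": ..., "conflicted": bool}.
import hashlib

def allocate_case(case_name: str,
                  staff: list[dict],
                  party_countries: set[str]) -> dict:
    """Deterministically allocate a notified merger to an impartial
    staff member, skipping anyone from a home country of the
    notifying parties or with declared personal stakes."""
    eligible = [s for s in staff
                if s["country"] not in party_countries
                and not s["conflicted"]]
    if not eligible:
        raise ValueError("no impartial staff member available")
    # A stable hash of the case name plays the role of the
    # "first letter of the notifying parties" mentioned above.
    digest = int(hashlib.sha256(case_name.encode()).hexdigest(), 16)
    return eligible[digest % len(eligible)]

staff = [{"name": "A", "country": "DE", "conflicted": False},
         {"name": "B", "country": "FR", "conflicted": False},
         {"name": "C", "country": "SE", "conflicted": True}]
handler = allocate_case("VOLVO/SCANIA", staff, party_countries={"SE"})
assert handler["country"] != "SE" and not handler["conflicted"]
```

Because the rule is a pure function of the case and the staff roster, it can be published in full without anyone being able to steer a case towards a sympathetic handler.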
The rule according to which cases are allocated to members of the staff can be publicly announced. Whether the personal identity of the staff member who is to handle a case should be known ex ante is, however, debatable. If firms know the relevant names in advance, this constitutes an incentive to establish good relations – or to engage in some more serious rent seeking.

Proposal #14: Introducing a standard structure of decisions

Transparency reduces the possibilities for discretionary behaviour and enhances predictability. A high degree of comparability between cases increases transparency. The comparability between merger decisions is enhanced if all merger decisions have the same structure. Comparing the Commission’s decision-making in similar cases, one notices that the structure of its decisions often differs. To name an example: in MERCEDES-BENZ/KÄSSBOHRER and VOLVO/SCANIA, potential competition was explicitly and extensively analysed. In MAN/AUWÄRTER, potential competition is, however, not mentioned at all. This reduces the comparability of the decisions. Introducing a common structure would increase the pressure on DG Comp staff to produce more coherent decisions over time.
Proposal #15: Publishing the economic rationale underlying merger decisions

One could oblige the Commission to regularly publish certain information that remains unpublished at the moment. In particular, the economic theories underlying its decisions are often not named. It would be a service to the reader, and would increase predictability in the long run, if the Commission spelled out explicitly those economic factors that proved relevant for its decision. This could, e.g., also apply to the quantitative data used in the decisions. Use of implausible data would lead to pressure on the Commission to use more reliable data, and the quality of decisions would, again, be improved in the long run.

Proposal #16: Introducing procedures to secure uniformity of decisions

Predictability also depends on the consistency of decisions over time. In the common law tradition, consistency is secured via stare decisis, or precedent. A prerequisite for consistency is that case handlers are familiar with landmark decisions. The establishment of a “clearing house” within DG Comp might increase consistency. This “clearing house” could simply consist of a member of the Merger Task Force being assigned responsibility for checking that all upcoming decisions are consistent with decisions taken by the Commission in the past. Case teams dealing with mergers displaying great resemblance to former cases should be encouraged to make explicit mention of these cases and to offer reasons for deviating decisions. Chapter E contains a comparison between the cases MERCEDES/KÄSSBOHRER and VOLVO/SCANIA, which are very similar, yet were decided in completely different ways. If the case team concerned with the second merger had mentioned MERCEDES/KÄSSBOHRER and had explained the reasons for deciding differently, transparency and predictability would have been increased.
Proposal #17: Setting up mechanisms that minimise the potential for industrial policy as a consequence of remedies; making the negotiation process concerning remedies as transparent as possible

Remedies are often crucial in order to save a merger that would otherwise go bust. Yet remedies are often only dealt with at the last moment, and their effects are not thoroughly estimated ex ante. A number of reasons can be named for this: firms have little incentive to start talking about remedies, as this could be interpreted as a signal of willingness to accept (wide-ranging) remedies. The other side, the Commission, is under tight time constraints in both phase I and phase II examinations. From our survey among very large European firms that have frequently been involved in mergers, we know that half of all the companies with first-hand experience of remedies perceived them as “unpredictable”. The process used to identify the remedies demanded by the Commission should thus be made more predictable. It is often suspected that remedies in European merger policy are used to advance industrial policy goals. The empirical knowledge concerning the effects of remedies in EU merger policy is, however, almost non-existent. The situation is a little better concerning the consequences of such measures with regard to
antitrust policy in the U.S. There, experience with remedies has been rather unsatisfactory. One of the central problems is that remedies will typically be negotiated between the notifying parties and the EU Commission. The bargaining power that the parties bring to the table is, however, very unevenly distributed: if the parties really want a merger, they will at times be ready to make very far-reaching concessions. Whether these are to the benefit of effective competition is often very questionable: if the structure of a company has been created with the aim of economising on transaction costs, the forced selling of one or more divisions can substantially reduce the efficiency of that company. One proposal in line with the general idea of strengthening the separation of powers and checks and balances could be to allocate the assessment of economically meaningful conditions and obligations to yet another department within DG Comp. Additionally, a standard sequence should be used in order to identify possible conditions and obligations. This should not only reduce the bargaining power that the Commission representatives bring to the bargaining table but should also serve to increase the predictability of the entire procedure.

Proposal #18: The decision to prohibit a merger is not binding if the notifying parties take the case to court

Currently, a negative decision by the Commission with regard to a notified merger means that it cannot be carried out.47 Many proposed mergers are buried after the decision of the Commission because decisions of the Court of First Instance take quite a long time, even under the new “fast track” procedure. The introduction of a special chamber dealing exclusively with merger cases has been proposed but has been met with scepticism.
A more radical change in the procedure would be the following: a prohibition of a merger decided by DG Comp would remain unenforceable if the parties were to ask the Court for an injunction and the injunction were granted by the Court. If the company to be bought is a publicly traded stock company, the buying firm might be allowed to buy stock but not to influence current business decisions. This could be secured by requiring the company to transfer the stocks – and the corresponding decision rights they entail – to a trustee. Potentially, such a procedure could play an important role with regard to remedies. The non-acceptance of certain remedies would no longer be equivalent to the failure of the entire merger.

Proposal #19: Drastically reducing the current time lag between the Commission’s decision and the judgement by the Court of First Instance

In chapter I, we have seen that the average duration between taking a case to Court and getting a decision is almost three years. After such a long time, many mergers will not be feasible anymore. This means that a merger might very well be compatible with the European Merger Regulation, but might nevertheless not be implementable because it simply takes too long to get a Court decision. Effective legal protection
CHAPTER VI
depends on getting correct decisions fast. This is a crucial aspect of the rule of law. One could therefore say that the European Union has very serious deficits here. One possible solution would be to increase the number of judges. A more specific solution would be to introduce time limits similar to those that bind DG Comp in its decisions. The creation of a special chamber of the Court of First Instance that would exclusively deal with competition issues has also been proposed. Increased speed in judicial decision-making would have a number of positive incentive effects: currently, many firms never even think about taking their case to Luxembourg as they know of the time that will pass before a dictum is pronounced. If judicial decision-making can be made so fast that the realisation of originally prohibited mergers would still make economic sense, firms’ incentives to take cases to Luxembourg would be increased. This, in turn, would mean that the decision-makers within DG Comp would be subject to more regular evaluations of the quality of their work. The radical reduction of time lags would thus not only improve effective legal protection, it would also increase the incentives of DG Comp staff to do a good job.

Proposal #20: Setting up mechanisms that give DG Comp competence to point towards barriers that inhibit realisation of the common market

Imperfections in the realisation of the Single Market can act as barriers to entry. Firms might realise supernormal profits, and in case some of them wanted to merge, the Commission might want to prohibit a notified merger precisely because potential competition was not credible due to the existence of sizeable barriers to entry. A way to save such a merger could consist in the establishment of a mechanism that would secure the quick abolition of the relevant barriers.

In its investigations, DG Comp is in a privileged position to identify such barriers; it should thus have particular weight in pointing towards barriers that inhibit the realisation of the Single Market. This “fire alarm” procedure should be institutionalised.

Proposal #21: Strengthening the exclusive focus on competition

The EU aims at realising partially conflicting goals, e.g., in the areas of industrial policy and competition policy. Depending on the industry involved, notified mergers can be assessed quite differently. Such a practice thus leads to a reduction in predictability. Conversely, if the exclusive focus is on competition, predictability will be enhanced. DG Comp should thus be allocated only the task of pursuing competition policy; industrial policy considerations should not play into its decisions. This should be reflected in the treaty.

Proposal #22: Creating an independent supervisory/monitoring body that publishes a report on the development of European merger policy at regular intervals

DG Comp can easily get lost in day-to-day business. An Independent Supervisory Body could help DG Comp to keep a long-term perspective. There are, of course, many options for setting up such a Body. In Germany, the Monopolkommission is a body comprised of academics – from both law and economics – as well as members from business. It reports on the progress of competition policy on a regular basis. In addition, it is free to select topics of its own choice and publish reports on them. Currently, DG Comp does hardly any systematic ex post evaluation in which its own decisions are critically reassessed with the benefit of hindsight. This is one possible task that a more independent Body such as the one proposed could work on. On a more mundane level, one could think of the creation of an international supervisory body, which could be given the task of comparing the different sanctions that various jurisdictions and competition authorities use for identical problems. A systematic assessment of “best practice” rules could eventually lead to changes in the legal foundations. This would imply a certain harmonisation, which would reduce transaction costs in the form of fees for legal firms, information-gathering within the company, etc. Such bodies have either been around for a long time (OECD) or have been founded more recently, such as the International Competition Network. To date, their relevance has been rather limited.

4. CONCLUSIONS AND OUTLOOK

This study has been primarily concerned with predictability in merger policy. In chapter I, the crucial importance of predictability for the functioning of market economies in general and the functioning of competition more specifically was shown. These abstract notions of predictability were compared with the degree of predictability factually achieved in European Merger Policy. It was concluded that predictability needs substantial improvement. A survey conducted among the members of the European Round Table, who have ample experience, pointed in a similar direction: although the Commission’s need for demanding lots of data was clearly acknowledged, some proposals for improvement could be derived from the results of the survey. The development of competition theory was the subject of chapter II.
It was shown that the so-called structure-conduct-performance approach is still the dominant paradigm in competition policy although it suffers from very serious deficiencies. Some of the developments in the so-called “New Industrial Organisation”, which draws heavily on game theory, were described. From a theoretical point of view, models in this tradition are often very convincing. But if one tries to apply them to the real world, one is often confronted with the fact that the results of the models are very sensitive to the specific assumptions made. Since models applied in competition policy need to be robust, they should only be used with care. The chapter finishes with a plea for a broader consideration of Transaction Cost Economics models. These models show that profit maximisation by way of economising on transaction costs can often be a very convincing explanation for horizontal as well as vertical mergers. Over the last couple of decades, the world has been changing at an unprecedented pace. State-mandated barriers to entry such as tariffs and non-tariff barriers have often been substantially reduced. Transportation and communication costs have decreased radically. The world seems to have become smaller. Some of these developments are described in chapter III. It is argued that they should have some effects on
competition policy: one cannot only say that the world has become smaller, one could also say that markets have become larger. Competition policy has been rather hesitant in taking these developments explicitly into account. Chapter IV really is the core of the study. It is organised around three major topics in competition policy, namely (1) the assessment of dominance, in which the delineation of the relevant market plays a crucial role, (2) the recognition of barriers to entry – or their absence, and (3) the issue of collective dominance. With regard to all three of these topics, recent theoretical developments are described, possible consequences of the changes in the business environment pointed out, and the current EU practice presented. With regard to all three topics, proposals for how predictability could be improved are generated. These proposals are concerned with substantive issues and therefore do not deal with procedural questions. Notified mergers upon which the European Commission had to decide are used throughout the study in order to illustrate a number of points. Chapter V is exclusively dedicated to illustrating the decision-making of the Commission with regard to two areas, namely the assessment of barriers to entry on the one hand and collective dominance on the other. The result is that very similar cases have been decided in very different directions. It is obvious that this is detrimental to predictability. Additionally, it was shown that in cases concerning collective dominance the Commission has taken established economic theory only insufficiently into consideration. Predictability depends not only on the substance of competition rules but also on the procedures used to apply them. In this chapter, attention has therefore been given to procedural issues. The rule of law in general, and predictability and certainty more specifically, also require that issues are decided within a reasonable period of time.
A court decision that a merger can be carried out can be worthless if it is published too late. The observation that the rule of law has only been factually realised in countries with a separation of powers points to a yet more fundamental problem in European merger policy. Currently, the realised degree of checks and balances seems to be insufficient to factually implement the rule of law. Both of these problems are dealt with in this chapter of the study. A number of proposals for how both problems could be solved or at least reduced are made. The reforms of European Merger Policy implemented in 2003 and 2004 are critically taken into consideration in this study. It was shown that their realisation should not be expected to be a great service to the improvement of predictability in European Merger Policy. Quite to the contrary, a number of areas could even experience less predictability. It was the aim of this study not only to describe and evaluate the current situation, but also to make some proposals for improvement. As it stands right now, the improvement of predictability in European Merger Policy will remain on the agenda even after the Commission’s proposals for reform have been implemented.
APPENDIX

MAKING EUROPEAN MERGER POLICY MORE PREDICTABLE

SURVEY OF ERT MEMBERS

PART I: OVERVIEW OF YOUR EXPERIENCE OF EUROPEAN MERGER POLICY
(1) How many mergers has your company notified to the Commission in the period 1997 – to date?
Total: 125; Average: 5,2
(2) How many of those mergers were cleared without remedies in the first phase?
Total: 95; Rate in per cent: 76
(3) How many of those mergers were cleared in the second phase?
Total: 16; Rate in per cent: 12,8
(4) How many mergers were withdrawn after Notification?
Total: 3; Rate in per cent: 2,4
(5) What was the reason for withdrawal?
Numerous answers:
– No decision in phase I
– Avoid prohibition decision
(6) Have there been any examples of mergers your company pursued in the period 1997 – to date but did not even notify because it expected a prohibition by the Commission?
( ) YES ( ) NO
Yes in per cent: 25
If yes, how many? Average: 3,5 (Min 1; Max 7)

PART II: QUESTIONS CONCERNING YOUR SPECIFIC EXPERIENCE OF EUROPEAN MERGER POLICY

The questions in Part I were concerned with your overall experience regarding European merger policy. In this part, we are interested in the most complex and difficult cases you have handled. We would ask you to think of the most complicated merger case that you were involved in and answer the following questions with this specific case in mind.
(7) Did you contact DG Competition before agreeing on the broad outlines of the deal with your partner?
( ) YES ( ) NO
Yes in per cent: 37,5
(8) Did your preliminary discussions with DG Competition result in significant modifications of the proposed merger?
( ) YES ( ) NO
Yes in per cent: 12,5
If you answered “yes”, could you please specify the kind of modifications?
Numerous answers:
– Unwinding of a subsidiary
– Offer of divestiture commitments
(9) How many working days were required to collect information for and to complete the Notification Form (CO Form)?
Average: 65,5 (Min; Max): (7,5; 400)
(10) The CO Form has often been criticised. If you were to improve it, what changes would you make?
Numerous answers:
– The necessity of extended information was acknowledged almost unanimously;
– The Commission is requested to display more flexibility depending on the merits of the individual case;
– The process should not be stopped entirely if details are missing;
– It is proposed to shorten sections 8 and 9 ("General Conditions in Affected Markets" and "General Market Information");
– In order to prevent redundancy, section 6 "Market Definition" and section 8 "General Conditions in Affected Markets" should be merged
(11) How many weeks elapsed between your first contact with the Merger Task Force (MTF) and Notification?
Average: 6,26 (Min; Max): (1; 20)
(12) How many meetings did you hold with officials from the MTF after Notification?
Average: 8,52 (Min; Max): (0; 95)
(13) How many weeks elapsed between Notification and the Commission’s informing you of its decision?
Average: 9,5 (Min; Max): (1,4; 20)
(14) Has the publication of guidelines on the definition of the relevant market been helpful in the sense that your capacity to predict the decisions of the Commission has increased?
( ) YES ( ) NO
Yes in per cent: 59,1
(15) Many mergers are expected to lead to substantial synergies. Do expected synergies play any role in your explanations concerning the notified merger?
( ) YES ( ) NO
Yes in per cent: 60,8
(16) Some mergers are only approved after the notifying parties have agreed on so-called “remedies”. Has your company been involved in a merger comprising remedies?
( ) YES ( ) NO
Yes in per cent: 79,2
(17) If you answered “yes” to question 16, were the remedies the Commission was asking for predictable?
( ) YES ( ) NO
Yes in per cent: 57,8
If you have further suggestions on any items relevant to European Merger Policy which you would like us to take into consideration, please make them here:
The following proposals were made:
– Implement separation of powers; divide Notification from negotiation and decision
– Introduce institution of "Independent Hearing Officer"
– Introduce "Stop the Clock" mechanism
– Speed up procedure
– Introduce special chamber for competition issues at CFI
– Solve multiple filing problem
Thank you for taking the time to fill in this questionnaire. Your input is greatly appreciated.
ENDNOTES
1 2
3
4 5
6
7 8 9 10
11
12 13
14
The newly enacted Merger Regulation 139/2004 sets very similar time limits. But it gives the participants the option to “stop the clock” if they so desire. Given this cumbersome and lengthy procedure, it is rather surprising that a number of firms have indeed tried to get legal remedy from the judiciary. Suppose that after so many months, carrying out the merger is not interesting to them anymore. Then, their taking the case to the Court amounts to the production of a pure public good: firms willing to merge in the future can base their case on the reasoning of the Court; the firm that won the case thus produces positive externalities. The two most recent decisions by the CFI immediately catch the eye because it took the CFI only 12 months to come up with a decision. There is a reason for this, namely the introduction of a fast-track procedure in 2001, which was applied here. Yet for many firms, the lead-time of at least 17 months for the fast-track procedure (1 plus 4 with the Commission, 12 with the CFI) is still too long to save a merger. The questionnaire is attached as Appendix 1. Critics might argue that our results couldn’t reasonably be compared with those of Neven et al. because they focused on all the mergers that had been notified until then, whereas we focus only on the complicated ones. True, this reduces comparability. Yet the Neven survey was conducted at the very beginning of European merger policy. At that time, no company had much experience with the procedures and all cases were complicated. In that sense, the results are indeed comparable. This result might, however, be somewhat influenced by the biased sample: ERT members are, without exception, very large firms. It is quite possible that consensus concerning the general necessity of extended information concerning the business the merging firms are involved in would be lower if firms with smaller turnovers had been asked.
This is clearly recognised in the White Paper on European Governance published by the Commission (2001). More specific proposals are, however, developed in chapter VI. This argument is based on the assumption that the operation of the net itself and the use of the net cannot be separated in a sensible manner. A recent evaluation of game theory is not particularly helpful in fending off that critique. Etter (2000, 122) writes: “Despite the criticism in literature game theory can be considered as a big step ahead, if we take it as an exemplifying theory. An exemplifying theory does not tell us what must happen, it tells us what can happen.” If a multitude of things can happen, not much is gained by relying on game theory. In welfare economics, any deviation from the rule “price = marginal costs” is supposed to be an allocative inefficiency. Power is often simply equated with the capacity of a firm to realise prices that are above marginal costs. These include the costs of transferring technology and know-how. Both the Cournot and the Stackelberg duopoly deal with competition in quantities – and possibly capacities. In the Cournot case, both firms simultaneously choose their outputs and try to maximise their own profits. In the Stackelberg case, the firms choose their outputs sequentially. The firm that moves first systematically has a higher output as well as higher profits than in the simultaneous Cournot case. Marginal costs are the costs that the firm needs to incur for the production of the last unit produced, i.e., the costs incurred at the margin. Marginal revenues are then the revenues a firm gets for the last unit of the product sold. If the price of a product does not depend on the behavior of a (small) firm, the firm is called a “price taker”, meaning that it chooses its output level on the assumption that the price is exogenously given.
If the firm is, however, a monopolist, the price it can redeem for its product is no longer given but depends on the quantity of goods supplied to consumers. The decision problem of a firm that is too large to be a price-taker thus consists of finding the price-quantity combination which maximises profits.
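The first-mover claim in the Cournot/Stackelberg note above can be checked in the standard linear specification (inverse demand $p = a - bQ$ with common constant marginal cost $c$ – an illustrative assumption added here, not part of the original note):

```latex
% Cournot (simultaneous) equilibrium for each firm:
q_i^{C} = \frac{a-c}{3b}, \qquad \pi_i^{C} = \frac{(a-c)^2}{9b}.
% Stackelberg (sequential) equilibrium for the leader:
q_L^{S} = \frac{a-c}{2b}, \qquad \pi_L^{S} = \frac{(a-c)^2}{8b}.
% Hence  q_L^{S} > q_i^{C}  and  \pi_L^{S} > \pi_i^{C}:
% the first mover produces more and earns more than in the simultaneous game.
```

This confirms, for the linear case at least, that the firm moving first systematically has a higher output and higher profits than in the simultaneous Cournot setting.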
15
16
17
18
19
20 21
22 23
The profit maximising condition that marginal revenues should be equal to marginal costs is the general solution to this decision problem. A simple way to understand this condition is to ask whether the firm can increase its profits if it is not fulfilled. Suppose a firm supplies a quantity where marginal revenues are still higher than marginal costs. This would mean that a (small) increase of the output could still increase profits. Suppose that marginal costs are larger than marginal revenues: that would mean that the firm subsidises these last units and that it should decrease its output in order to increase its profit. The question whether there is any “true” information might sound academic but is very real: even managers acting under a perfect principal-agent contract might be subject to this problem. A “perfect” principal-agent contract here means that the incentives of the managers are perfectly aligned with the interests of their shareholders. They thus do not have any incentives to make themselves better off to the detriment of their shareholders. Yet, even in such a purely illustrative case, managers might err on the realisable synergies because these are estimates that might prove too optimistic ex post. It is often claimed that only a fraction of all consummated mergers leads to the realisation of substantial synergies. The evidence here is mixed, but suppose it was true. One possible reason for this mixed track record could be that managers have their own agenda and that they expect higher salaries, additional bonuses, etc. This is then a problem of the principal-agent contract between management and shareholders, with shareholders having the problem of making management act in their – the shareholders’ – interest. It might well be that there is a substantial problem in this area. We believe that it should be dealt with where it belongs, namely in corporate law and not in competition law.
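The marginal-revenue condition discussed in the note above can be written out compactly; the linear-demand illustration at the end is an added example, not taken from the original text:

```latex
% Monopolist's profit and first-order condition:
\pi(q) = p(q)\,q - C(q), \qquad
\pi'(q) = \underbrace{p(q) + p'(q)\,q}_{\text{marginal revenue}}
        - \underbrace{C'(q)}_{\text{marginal cost}} = 0 .
% For a price taker, p'(q) = 0, so the condition collapses to p = C'(q).
% Linear illustration: with p(q) = a - bq and constant marginal cost c,
% MR(q) = a - 2bq, hence the profit-maximising output is q^{*} = \frac{a-c}{2b}.
```

Reading the first-order condition term by term reproduces the verbal argument of the note: as long as marginal revenue exceeds marginal cost, a small output increase raises profit, and vice versa.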
Scanners used in supermarkets can not only be used for checkout and logistical purposes but also for learning something about consumer preferences. If the price of a specific product is marginally changed, the reduction in its demand as well as the increase in the demand for competing products can be ascertained by drawing on scanner data. Market research institutes such as Nielsen use this technique to estimate the own-price elasticity of specific products as well as the cross-price elasticity of a given product with its competitors. Sleuwaegen (1994) has demonstrated that this approach is more than a theoretical possibility: using three-digit NACE industry data, he applied a similar framework to Belgian industries. Sleuwaegen points out one problem with such an approach, namely that the industry classification is often not fine-grained enough for meaningful analysis. An additional problem that would have to be solved if the classification were to be used in competition policy on a regular basis is that most mergers take place between firms that are active in a whole range of three-digit industries. On the other hand, many mergers are only critical with regard to a low number of industries. Williamson (1996, 195) tries to take this into account and proposes the concept of “remediableness”. If one proposes a new policy, one had better take the costs of getting from the current status quo to the proposed policy explicitly into account. Getting there might be costly. The proposed policy only constitutes an improvement if the returns from the new policy are higher after the costs of getting there have been deducted from the expected benefits.
This leads Williamson to redefine the notion of efficiency (1996, 195): “An outcome for which no feasible superior alternative can be described and implemented with net gains is presumed to be efficient.” Basically, the term cheating is used here in a value-free sense: it is simply a description that one actor deviates from a behaviour that he had implicitly agreed to before. From a welfare economic point of view, cheating in such cases leads to welfare increases. The most common form of punishment is to increase one’s own quantity, which will lead to an erosion of prices – to the detriment of all participants in the market. This was the case in Kali&Salz/MdK/Treuhand, Pilkington-Techint/SIV, MANNESMANN/VALLOUREC/ILVA, ABB/DAIMLER-BENZ, GENCOR/LONRHO, PRICE WATERHOUSE/COOPERS&LYBRAND, EXXON/MOBIL, AIRTOURS/FIRST CHOICE, UPM-KYMMENE/HAINDL and NORSKE SKOG/PARENCO/WALSUM. This is described in more detail in Chapter V.2. The Commission’s AIRTOURS/FIRST CHOICE decision has been criticised by a number of authors, e.g., Richardson and Gordon (2001); Kloosterhuis (2001); and Motta (1999).
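The scanner-based elasticity estimation mentioned in the note on supermarket scanners can be sketched with the midpoint (arc) formula. All numbers below are hypothetical, invented purely for illustration:

```python
def arc_elasticity(q0, q1, p0, p1):
    """Midpoint (arc) elasticity: percentage change in quantity per
    percentage change in price, using averages as the base."""
    pct_dq = (q1 - q0) / ((q0 + q1) / 2)
    pct_dp = (p1 - p0) / ((p0 + p1) / 2)
    return pct_dq / pct_dp

# Hypothetical scanner data: a price rise from 1.99 to 2.09 cuts weekly
# sales of the product from 1000 to 900 units ...
own_price = arc_elasticity(1000, 900, 1.99, 2.09)    # about -2.15 (elastic)
# ... while sales of a competing product rise from 800 to 860 units.
cross_price = arc_elasticity(800, 860, 1.99, 2.09)   # positive: substitutes
```

A negative own-price elasticity together with a positive cross-price elasticity is exactly the pattern that marks the two products as substitutes and thus as candidates for the same relevant market.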
ENDNOTES
24
25 26 27 28 29
30 31 32 33 34 35
36 37 38 39 40 41 42 43
44 45 46 47
Unilateral effects arise when two or more closely competing products are brought under common ownership. They refer to the fact that the post-merger firm has an incentive to set higher prices even if the merger has no effect on the behaviour of the competing firms. Where a firm merges with one of its rivals, this may reduce the elasticity of the firm’s residual demand curve. In the pre-merger market, price increases by an individual firm will lead to a reduction in sales. Some of these lost sales will be transferred to the other merging party. Hence, the impact of the price increase on profits is potentially smaller after a merger because some of the lost sales are recaptured in higher sales of the other merging party. They are not to be confused with price effects resulting from monopoly power or collective dominance (Bishop/Walker 1999). Case No. COMP/M 1672 VOLVO/SCANIA, in: Official Journal 2001, No. L 143, pp. 74. Case No. IV/M. 477 Mercedes-Benz/Kässbohrer, in: Official Journal 1995, No. L 211, pp. 1. Case No. IV/M. 004 Renault/Volvo. Cf. VOLVO/SCANIA, in: Official Journal 2001, No. L 143, 95. The source of potential competition in VOLVO/SCANIA could come from other European bus manufacturers such as Evobus (Mercedes-Benz/Kässbohrer), MAN, Neoplan, Irisbus, Bova and Van Hool. Case COMP/M 2201 MAN/AUWÄRTER, in: Official Journal 2002, No. L 116, pp. 35. Case No. COMP/M 2097 SCA/Metsä Tissue, in: Official Journal 2002, No. L 57, pp. 1. Case No. COMP/M. 2522 SCA Hygiene Products/Cartoinvest. Case No. IV/M 623 Kimberly-Clark/Scott, in: Official Journal 1996, No. L 183, pp. 1. Case No. COMP/M. 1987 BASF/Bayer/Hoechst/Dystar. Case No. M 1069 WorldCom/MCI, in: Official Journal 1999, No. L 116, pp. 1; Case No. M 1439 TELIA/TELENOR, in: Official Journal 2001, No. L 40, pp. 1; Case No. 1760 Mannesmann/Orange; Case No. 1795 Vodafone AirTouch/Mannesmann; Case No. M 1741 MCI WorldCom/Sprint; and Case No. M 2803 TELIA/Sonera. PABX – Private Automatic Branch eXchange. Cf.
Case No. T-102/96 Gencor v. European Commission. Cf. Case No. IV/M.190 NESTLÉ/PERRIER, in: Official Journal 1992, No. L 356, pp. 1. Case No. IV/M. 0315 Kali&Salz/MdK/Treuhand, in: Official Journal 1994, No. L 186, 38. The Commission decided to analyse two markets in this case, as a monopoly was emerging in Germany but not in the other European countries. Case No. IV/M. 619 GENCOR/LONRHO, in: Official Journal 1997, No. L 11, 30. Case No. IV/M. 1383 Exxon/Mobil. The large mineral oil brands try, of course, time and again to create customer loyalty. According to empirical studies, their success cannot be proved. The German Competition Authority as well as the European Commission believe that customer loyalty can safely be ignored. Case No. IV/M. 1534 Airtours/First Choice, in: Official Journal 2000, No. L 93, 1. Case No. COMP/M. 2498 UPM-Kymmene/Haindl. Case No. COMP/M 2499 Norske Skog/Parenco/Walsum. This is, however, not the case if national law requires that mergers be completed before they can be notified with the competition authorities, as is, e.g., the case in France.
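The "recapture" logic behind unilateral effects (note 24) can be illustrated with a small arithmetic sketch; all figures are hypothetical and chosen only for exposition:

```python
margin_a = 20.0   # contribution margin per unit of product A (assumed)
margin_b = 18.0   # contribution margin per unit of product B (assumed)
lost_units = 100  # units of A lost after a small price increase
diversion = 0.40  # share of A's lost sales that switch to product B

# Stand-alone firm A: every lost unit is forgone margin.
loss_pre_merger = lost_units * margin_a

# After merging with B's owner, 40% of the lost sales are recaptured
# as additional sales of B.
recaptured = lost_units * diversion * margin_b
loss_post_merger = loss_pre_merger - recaptured

# The opportunity cost of raising A's price falls from 2000 to 1280,
# so the merged firm finds the same price increase more attractive.
```

This is precisely why the impact of a price increase on profits is "potentially smaller after a merger", as the note puts it: part of the loss is internalised by the merger partner.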
REFERENCES
Aigner, Andreas (2001). Kollektive Marktbeherrschung im EG-Vertrag: Zugleich eine Untersuchung der Behandlung von Oligopolfällen durch die Kommission und den Gerichtshof der Europäischen Gemeinschaften. Wien: Manz.
Akerlof, George A. (1970). The Market for “Lemons”: Quality Uncertainty and the Market Mechanism. Quarterly Journal of Economics, 84, 488-502.
Amstutz, Marc (1999). Kollektive Marktbeherrschung im europäischen Wettbewerbsrecht: Eine evolutorische Perspektive. Tübingen: Mohr.
Bain, Joe S. (1968). Industrial Organization (2nd ed.). New York: Wiley & Sons.
Bain, Joe S. (1956). Barriers to New Competition. Cambridge, Mass.: Harvard Univ. Press.
Baldwin, Richard E., & Philippe Martin (1999). Two Waves of Globalisation: Superficial Similarity and Fundamental Differences. NBER working paper, 6904.
Baumol, William Jack & Robert D. Willig (1986). Contestability: developments since the book. In D. Morris, P. Sinclair, P. Slater, and J. Vickers (Eds.), Strategic Behavior and Industrial Competition (pp. 9-36). Oxford: Clarendon Press.
Baumol, William Jack, John C. Panzar & Robert D. Willig (1982). Contestable Markets and the Theory of Industrial Structure. San Diego: Jovanovich.
Bellamy, Christopher, & Graham Child (1993). Common Market Law of Competition (4th ed.). London: Sweet & Maxwell.
Besanko, David & Daniel F. Spulber (1993). Contested Merger and Equilibrium Antitrust Policy. Journal of Law, Economics & Organization, 9, 1-29.
Bishop, Simon (1999). Power and Responsibility: The ECJ’s Kali-Salz Judgement. European Competition Law Review, 20, 37-39.
Bishop, Simon, & Mike Walker (1999). The Economics of EC Competition Policy. London: Sweet & Maxwell.
Bittlingmayer, George (2001). Regulatory Uncertainty and Investment: Evidence from Antitrust Enforcement. Cato Journal, 20, 295-325.
Bork, Robert H. (1978). The Antitrust Paradox – A Policy at War with Itself. New York: Free Press.
Briones, Juan (1995).
Oligopolistic Dominance: Is there a Common Approach in Different Jurisdictions? A Review of the Decisions adopted by the Commission under the Merger Regulation. European Competition Law Review, 335-46.
Briones, Juan & Atilano Jorge Padilla (2001). The Complex Landscape of Oligopolies under EU Competition Policy: Is Collective Dominance Ripe for Guidelines? World Competition, 24, 307-18.
Brunetti, Aymo, Gregory Kisunko & Beatrice Weder (1997). Credibility of Rules and Economic Growth. Policy Research Working Paper, 1760: The World Bank.
Busse, Matthias (2001). Transaktionskosten und Wettbewerbspolitik. HWWA-Diskussionspapier, 116. Hamburg.
Caffarra, Cristina, & Kai-Uwe Kühn (1999). Joint Dominance: The CFI Judgement on Gencor/Lonrho. European Competition Law Review, 20, 355-59.
Camesasca, Peter (1999). The Explicit Efficiency Defence in Merger Control: Does it Make the Difference? European Competition Law Review, 20, 14-28.
Cason, Timothy N. (1994). The Impact of Information Sharing Opportunities on Market Outcomes: An Experimental Study. Southern Economic Journal, 61, 14-34.
Christensen, Peder & Valerie Rabassa (2001). The Airtours Decision: Is there a new Commission Approach to Collective Dominance? European Competition Law Review, 22, 227-37.
Clark, John M. (1940). Toward a concept of workable competition. American Economic Review, 30, 241-56.
Coase, Ronald (1937). The Nature of the Firm. Economica, 4, 386-405.
Coase, Ronald (1960). The Problem of Social Cost. Journal of Law and Economics, 3, 1-44.
Coase, Ronald (1972). Industrial Organization: A Proposal for Research. In V. Fuchs (Ed.), Policy Issues and Research Opportunities in Industrial Organization (pp. 59-73). New York: National Bureau of Economic Research.
Collins, Damian, Paul Glaezner & Laura McRoberts (1993). EEC. In Verloop, P. (Ed.), Merger Control in the EEC: A Survey of European Competition Laws (pp. 201-34). Boston - Deventer: Kluwer.
Commission of the European Communities (1992). XXIst Report on Competition Policy 1991. Brussels: Luxembourg.
Commission of the European Communities (2001). White Paper on European Governance. Brussels: Luxembourg.
Commission of the European Communities (2002a). Commission adopts comprehensive Reform of the EU Merger Control. Press Release IP/02/1856. Brussels.
Commission of the European Communities (2002b). Draft Commission Notice on the Appraisal of Horizontal Mergers under the Council Regulation on the Control of Concentrations between Undertakings. COMP/2002/1926/01. Brussels.
Compte, Olivier, Frédéric Jenny & Patrick Rey (2002). Capacity Constraints, Mergers and Collusion. European Economic Review, 46, 1-29.
Crocker, Keith J. & S. E. Masten (1996). Regulation and Administered Contracts Revisited: Lessons from Transaction-cost Economics for Public Utility Regulation. Journal of Regulatory Economics, 9(1), 5-39.
Darby, Michael R. & Eli Karni (1973). Free Competition and the Optimal Amount of Fraud. Journal of Law and Economics, 16, 67-88.
Dicken, Peter (1998). Global Shift: Transforming the World Economy. London: Chapman.
Dixit, Avinash (1979). A Model of Oligopoly Suggesting a Theory of Entry Barriers. Bell Journal of Economics, 10, 20-32.
Dixit, Avinash (1981). The Role of Investment in Entry Deterrence. Economic Journal, 95, 95-100.
Easterbrook, Frank H. (1984). The Limits of Antitrust. Texas Law Review, 63, 1-40.
Ebenroth, Carsten T. & Knut-Werner Lange (1994).
EG-Fusionskontrolle nach Abschluß der Uruguay-Runde im Rahmen des GATT: Zugleich Besprechung der Entscheidung der EG-Kommission Mannesmann/Vallourec/Ilva. Wirtschaft und Wettbewerb, 44, 601-615.
Elzinga, Kenneth G. (1994). The Relevant Market for Less-Than-Truckload Freight: Deregulation's Consequences. Transportation Journal, 34, 29-36.
Enchelmaier, Stefan (1997). Europäische Wettbewerbspolitik im Oligopol. Baden-Baden: Nomos.
Etter, Boris (2000). The Assessment of Mergers in the EC under the Concept of Collective Dominance: An Analysis of the Recent Decisions and Judgements - by an Economic Approach. Journal of World Competition, 23, 103-139.
Fisher, Franklin M. (1989). Games Economists Play: A Noncooperative View. Rand Journal of Economics, 20, 113-124.
Flint, David (1978). Abuse of a Collective Dominant Position. Legal Issues of European Integration, 1978-II, 21-38.
Freytag, Andreas (1995). Die strategische Handels- und Industriepolitik der EG: Eine politökonomische Analyse. Köln: Institut für Wirtschaftspolitik an der Universität zu Köln.
Froeb, Luke M. & Gregory J. Werden (1991). Correlation, Causality, and all that Jazz: The Inherent Shortcomings of Price Tests for Antitrust Market Delineation. Economic Analysis Group Discussion Paper, US Department of Justice, Antitrust Division. Washington, DC.
Fudenberg, Drew & Jean Tirole (1984). The Fat Cat Effect, the Puppy Dog Ploy and the Lean and Hungry Look. American Economic Review, Papers and Proceedings, 74, 361-368.
Fujita, Masahisa, Paul Krugman & Anthony J. Venables (1999). The Spatial Economy: Cities, Regions and International Trade. Cambridge, MA: MIT Press.
Gabelmann, Anne & Wolfgang Gross (2000). Telekommunikation: Wettbewerb in einem dynamischen Markt. In Knieps, G. & G. Brunnekreeft (Eds.), Zwischen Regulierung und Wettbewerb: Netzsektoren in Deutschland (pp. 83-123). Heidelberg: Springer.
Gellhorn, Ernest & William Kovacic (1994). Antitrust Law and Economics in a Nutshell. St. Paul, Minn.:
West Publishing.
Gilbert, Richard J. (1989). Mobility Barriers and the Value of Incumbency. In Schmalensee, R. & R. Willig (Eds.), Handbook of Industrial Organization (pp. 470-503). Amsterdam: Elsevier.
Güth, Werner (1992). Spieltheorie und Industrieökonomik - Muß Liebe weh tun? IFO-Studien, 38, 271-316.
Hamilton, A., Madison, J. & Jay, J. (1788/1961). The Federalist Papers, with an introduction by C. Rossiter. New York: Mentor.
Harbord, David & Tom Hoehn (1994). Barriers to Entry and Exit in European Competition Policy. International Review of Law and Economics, 14, 411-435.
Hay, George & Daniel Kelly (1974). An Empirical Survey of Price Fixing Conspiracies. Journal of Law and Economics, 17, 13-38.
Hayek, Friedrich August von (1960). The Constitution of Liberty. Chicago: University of Chicago Press.
Hayek, Friedrich August von (1964). Kinds of Order in Society. New Individualist Review, 3(2), 3-12.
Hayek, Friedrich August von (1969). Der Wettbewerb als Entdeckungsverfahren. In Freiburger Studien: Gesammelte Aufsätze (pp. 249-78). Tübingen: Mohr.
Hayek, Friedrich August von (1973). Law, Legislation and Liberty. Vol. 1: Rules and Order. Chicago: University of Chicago Press.
Hayek, Friedrich August von (1978). Competition as a Discovery Procedure. In F. A. Hayek (Ed.), New Studies in Philosophy, Politics, and Economics and the History of Ideas (pp. 179-90). Chicago: University of Chicago Press.
Jones, Christopher & F. Enrique González-Diaz (1992). The EEC Merger Regulation. London: Sweet & Maxwell.
Joskow, Paul L. (2002). Transaction Cost Economics, Antitrust Rules and Remedies. Journal of Law, Economics, and Organization, 18, 95-116.
Joskow, Paul L. (1991). The Role of Transaction Cost Economics in Antitrust and Public Utility Regulatory Policies. Journal of Law, Economics, and Organization, 7, 53-83.
Joskow, Paul L. (1990). The Diffusion of New Technologies: Evidence from the Electric Utility Industry. Rand Journal of Economics, 21, 354-373.
Joskow, Paul L. (1987). Contract Duration and Relationship-Specific Investments: Empirical Evidence from Coal Markets. American Economic Review, 77, 168-185.
Joskow, Paul L. (1985).
Vertical Integration and Long-term Contracts: The Case of Coal-Burning Electric Generating Plants. Journal of Law, Economics & Organization, 1, 33-80.
Kant, Immanuel (1797/1995). The Metaphysics of Morals. Introduction, translation, and notes by Mary Gregor. Cambridge: Cambridge University Press.
Kantzenbach, Erhard & Hermann H. Kallfaß (1981). Das Konzept des funktionsfähigen Wettbewerbs. In Cox, H., U. Jens & K. Markert (Eds.), Handbuch des Wettbewerbs (pp. 103-127). München: Vahlen.
Kantzenbach, Erhard & Jörn Kruse (1989). Kollektive Marktbeherrschung. Göttingen: Vandenhoeck & Ruprecht.
Kantzenbach, Erhard & Reinald Krüger (1990). Zur Frage der richtigen Abgrenzung des sachlich relevanten Marktes bei der wettbewerbspolitischen Beurteilung von Zusammenschlüssen. Wirtschaft und Wettbewerb, 40, 472-481.
Kantzenbach, Erhard, Elke Kottmann & Reinald Krüger (1995). New Industrial Economics and Experiences from European Merger Control: New Lessons about Collective Dominance? Brussels, Luxembourg.
Kerber, Wolfgang (1994). Die Europäische Fusionskontrollpraxis und die Wettbewerbskonzeption der EG. Bayreuth: P.C.O. Verlag.
Kinne, Konstanze (2000). Effizienzvorteile in der Zusammenschlußkontrolle: Eine vergleichende Analyse der deutschen, europäischen und US-amerikanischen Wettbewerbspolitik. Baden-Baden: Nomos.
Kloosterhuis, Erik (2001). Joint Dominance and the Interaction Between Firms. European Competition Law Review, 21, 79-92.
Koch, Eckart (1997). Internationale Wirtschaftsbeziehungen. Bd. 1: Internationaler Handel. München: Vahlen.
Kruse, Jörn (1994). Kollusion. Diskussionsbeitrag aus dem Institut für Volkswirtschaftslehre der Universität Hohenheim 101/1994. Stuttgart.
Kühn, Kai-Uwe (2002). Reforming European Merger Policy: Targeting Problem Areas in Policy Outcomes. Paper 02-012, University of Michigan, John M. Olin Centre for Law and Economics.
Leontiades, James C. (1985). Multinational Corporate Strategy. Lexington, Toronto: Lexington Books.
Leker, Jens (2001).
Reorientation in a Competitive Environment: An Analysis of Strategic Change. Schmalenbach Business Review, 53, 41-55.
Levitt, Theodore (1983). The Globalization of Markets. Harvard Business Review, 61, 92-102.
Lipsey, Richard & Kelvin Lancaster (1956). The General Theory of Second Best. Review of Economic Studies, 24, 11-32.
London Economics (1994). Barriers to Entry and Exit in UK Competition Policy. Office of Fair Trading Research Paper 2.
Lopez, Edward J. (2001). New Anti-Merger Theories: A Critique. Cato Journal, 20, 359-78.
Lyons, Bruce R. (1996). Empirical Relevance of Efficient Contract Theory: Inter-Firm Contracts. Oxford Review of Economic Policy, 12, 27-52.
Majone, Giandomenico (1996). Temporal Consistency and Policy Credibility: Why Democracies Need Non-Majoritarian Institutions. European University Institute, Working Paper RSC, No. 96/57.
Manne, Henry G. (1965). Mergers and the Market for Corporate Control. Journal of Political Economy, 73, 110-20.
Mas-Colell, Andreu (1980). Noncooperative Approaches to the Theory of Perfect Competition: Presentation. Journal of Economic Theory, 22, 121-135.
Moldovanu, Benny & Philippe Jéhiel (2001). The European UMTS/IMT-2000 Licence Auctions. Centre for Economic Policy Research, London.
Montag, Frank & Ralph Kaessner (1997). Neuere Entwicklungen in der Fallpraxis der europäischen Fusionskontrolle. Wirtschaft und Wettbewerb, 47, 781-794.
Morgan, Eleanor J. (1996). The Treatment of Oligopoly under the European Merger Control Regulation. Antitrust Bulletin, 41, 203-246.
Motta, Massimo (2000a). EC Merger Policy, and the Airtours Case. European Competition Law Review, 21, 428-436.
Motta, Massimo (2000b). Economic Analysis and EC Merger Policy. EUI Working Papers, RSC 2000/33. Robert Schuman Centre for Advanced Studies.
Nelson, P. (1970). Information and Consumer Behavior. Journal of Political Economy, 78, 311-329.
Nelson, P. (1974). Advertising as Information. Journal of Political Economy, 82, 729-754.
Neven, Damien, Robin Nuttall & Paul Seabright (1993). Merger in Daylight: The Economics and Politics of European Merger Control. CEPR.
London.
Nikolinakos, Nikos Th. (2001). Promoting Competition in the Local Access Network: Local Loop Unbundling. European Competition Law Review, 21, 266-80.
Noel, Pierre Emanuele (1997). Efficiency Considerations in the Assessment of Mergers. European Competition Law Review, 17, 498-519.
Olsson, Carina (2000). Collective Dominance: Merger Control on Oligopolistic Markets. Discussion Paper, Universitet Göteborg, Juridiska Institutionen. Göteborg.
Phlips, Louis (1995). Competition Policy: A Game-Theoretic Perspective. Cambridge: Cambridge University Press.
Porter, Michael E. (1990). The Competitive Advantage of Nations. London - Basingstoke: Macmillan.
Porter, Michael E. (1998). Competitive Advantage: Creating and Sustaining Superior Performance, with a New Introduction. New York: Free Press.
Porter, Michael E. (2001). Competition and Antitrust: Toward a Productivity-Based Approach to Evaluating Mergers and Joint Ventures. The Antitrust Bulletin, 46, 919-958.
Rabassa, Valérie, Stephan Simon & Thibaut Kleiner (2002). Investigation into Possible Collective Dominance in the Publication Paper Industry. Competition Policy Newsletter, 1, 77-79.
Richardson, Russell & Clive Gordon (2001). Collective Dominance: The Third Way? European Competition Law Review, 21, 416-423.
Ridyard, Derek (1992). Joint Dominance and the Oligopoly Blind Spot under the EC Merger Regulation. European Competition Law Review, 13, 255-262.
Riordan, Michael H. (1998). Anticompetitive Vertical Integration by a Dominant Firm. American Economic Review, 88, 1232-48.
Röller, Lars-Hendrik, Jan Stenneck & Frank Verboven (2000). Efficiency Gains from Mergers. Discussion Paper, WZB-Berlin.
Rotemberg, Julio J. & Garth Saloner (1986). A Supergame-Theoretic Model of Price Wars During Booms. American Economic Review, 76, 390-407.
Salop, Steven C. (1986). Practices that (Credibly) Facilitate Oligopoly Coordination. In Stiglitz, J. E. & G. F. Mathewson (Eds.), New Developments in the Analysis of Market Structure (pp.
95-116). Cambridge, Mass.: MIT Press.
Scherer, Frederic M. & David Ross (1990). Industrial Market Structure and Economic Performance (3rd ed.). Boston: Houghton Mifflin Company.
Schmalensee, Richard (1981). Product Differentiation Advantages of Pioneer Brands. American Economic Review, 72, 349-365.
Schmalensee, Richard (1983). Advertising and Entry Deterrence: An Exploratory Model. Journal of Political Economy, 90, 636-653.
Schmidt, André (1998). Ordnungspolitische Perspektiven der europäischen Integration im Spannungsfeld von Wettbewerbs- und Industriepolitik. Frankfurt am Main: Peter Lang.
Schmidt, André (2000). Wettbewerbspolitik im Zeitalter der Globalisierung. In Walter, H., S. Hegner & J. M. Schechler (Eds.), Wachstum, Strukturwandel und Wettbewerb: Festschrift für Klaus Herdzina (pp. 377-416). Stuttgart: Lucius & Lucius.
Schmidt, Ingo (2001). Wettbewerbspolitik und Kartellrecht: Eine Einführung (7th ed.). Stuttgart: Lucius & Lucius.
Schmidt, Ingo & André Schmidt (1997). Europäische Wettbewerbspolitik: Eine Einführung. München: Vahlen.
Schmidtchen, Dieter (1994). Antitrust zwischen Marktmachtphobie und Effizienzeuphorie: Alte Themen - neue Ansätze. In W. Möschel (Ed.), Marktwirtschaft und Rechtsordnung. Baden-Baden: Nomos.
Schultz, Klaus-Peter & Markus Wagemann (2001). Kartellrechtspraxis und Kartellrechtsprechung 2000/01. Köln: RWS-Verlag.
Schwalbach, Joachim (1993). Stand und Entwicklung der Industrieökonomik. In Neumann, M. (Ed.), Unternehmensstrategie und Wettbewerb auf globalen Märkten und Thünen-Vorlesung (pp. 93-109). Berlin: Duncker & Humblot.
Selten, Reinhard (1975). Convolutions, Inertia Supergames, and Oligopolistic Equilibria. Bielefeld: Universitätsverlag.
Shelanski, Howard A. & Peter G. Klein (1999). Empirical Research in Transaction Cost Economics: A Review and Assessment. In Carroll, G. R. & D. J. Teece (Eds.), Firms, Markets, and Hierarchies: The Transaction Cost Economics Perspective. New York: Oxford University Press.
Shepherd, William G. (1972).
The Elements of Market Structure. Review of Economics and Statistics, 54, 25-38.
Shughart, William F. (1990). The Organization of Industry. Homewood, Ill.: BPI Irwin.
Shughart, William F. & R. Tollison (1991). The Employment Consequences of Antitrust Enforcement. Journal of Institutional and Theoretical Economics, 147, 38-52.
Shy, Oz (1995). Industrial Organization: Theory and Applications. Cambridge, Mass.: MIT Press.
Simon, Herbert A. (1955). Administrative Behavior: A Study of Decision-Making Processes in Administrative Organization. New York: Free Press.
Spence, Michael (1979). Investment Strategy and Growth in a New Market. Bell Journal of Economics, 10, 1-19.
Spiller, Pablo Thomas (1985). On Vertical Mergers. Journal of Law, Economics, and Organization, 1, 285-312.
Staiger, Robert W. & Frank A. Wolak (1992). Collusive Pricing with Capacity Constraints in the Presence of Demand Uncertainty. Rand Journal of Economics, 23, 203-19.
Stigler, George J. (1964). A Theory of Oligopoly. Journal of Political Economy, 72, 44-61.
Stigler, George J. (1968). The Organization of Industry. Homewood, Ill.: Irwin.
Sutton, John (1991). Sunk Costs and Market Structure: Price Competition, Advertising, and the Evolution of Concentration. Cambridge, MA: MIT Press.
Tirole, Jean (1988). The Theory of Industrial Organization. Cambridge, MA: MIT Press.
Tollison, Robert D. (1997). Rent Seeking. In Mueller, D. C. (Ed.), Perspectives on Public Choice (pp. 506-525). Cambridge: Cambridge University Press.
Venables, Anthony J. (2001). Geography and International Inequalities: The Impact of New Technologies. Journal of Industry, Competition and Trade, 1, 135-159.
Venit, James S. (1998). Two Steps Forward and No Steps Back: Economic Analysis and Oligopolistic Dominance after Kali & Salz. Common Market Law Review, 35, 1101-1134.
Viner, Jacob (1950). The Customs Union Issue. New York: Carnegie Endowment for International Peace.
Vinje, Thomas C. & Harri Kalimo (2000).
Does Competition Law Require Unbundling of Local Loop? Journal of World Competition, 23, 49-80.
Voigt, Stefan (1993). „Strategische Allianzen - Modisches Schlagwort oder Antwort auf globale Herausforderungen?“. WiSt, 22, 246-249.
Voigt, Stefan (2002). Institutionenökonomik. Neue Ökonomische Bibliothek. München: Fink.
Voigt, Stefan & Eli M. Salzberger (2002). Choosing Not To Choose: When Politicians Choose To Delegate Powers. Kyklos, 55, 289-310.
Walker, G. & D. Weber (1984). A Transaction Cost Approach to Make-or-Buy Decisions. Administrative Science Quarterly, 29, 373-91.
Walker, G. & D. Weber (1987). Supplier Competition, Uncertainty and Make-or-Buy Decisions. Academy of Management Journal, 30, 589-96.
Weiss, Leonard W. (1974). The Concentration-Profit Relationship and Anti-Trust. In Goldschmidt, H. J., H. Mann & J. F. Weston (Eds.), Industrial Concentration: The New Learning (pp. 184-233). Boston - London - Toronto: Little Brown.
Williamson, Oliver E. (1968). Economies as an Antitrust Defence: The Welfare Trade-offs. American Economic Review, 58, 18-36.
Williamson, Oliver E. (1975). Markets and Hierarchies - Analysis and Antitrust Implications: A Study in the Economics of Internal Organization. New York: Free Press.
Williamson, Oliver E. (1985). The Economic Institutions of Capitalism: Firms, Markets, Relational Contracting. New York: Free Press.
Williamson, Oliver E. (1996). The Mechanisms of Governance. New York: Free Press.
Winckler, Antoine & Marc Hansen (1993). Collective Dominance under the EC Merger Control Regulation. Common Market Law Review, 30, 787-828.
Ysewyn, Johan & Cristina Caffarra (1998). Two's Company, Three's a Crowd: The Future of Collective Dominance After the Kali & Salz Judgement. European Competition Law Review, 19, 468-472.
INDEX
AÉROSPATIALE/DE HAVILLAND 79
AÉROSPATIALE-ALENIA/DE HAVILLAND 63, 65, 69
AIRTOURS/FIRST CHOICE 95, 111, 112, 119, 154, 156
ALCATEL/AEG 110
ALCATEL/TELETTRA 69
asset specificity 30, 31, 84, 163
barriers to entry 34, 40, 63, 77, 87, 89, 90, 91, 93, 94, 96, 97, 99, 100, 103, 109, 112, 117, 120, 122, 123, 124, 125, 126, 128, 129, 132, 134, 136, 137, 143, 144, 146, 151, 153, 165, 174
barriers to exit 91
BASF/BAYER/HOECHST/DYSTAR 96, 119, 136
BERTELSMANN/KIRCH/PREMIERE 67
bounded rationality 30
brand loyalty 124, 126, 132, 135
Business-to-Business 47
Business-to-Consumer 47
buyer power 69
capacity collusion 158, 159
CFI 6
Chicago approach 17, 20, 21, 37
chief economist 168
collective dominance 100, 102, 104, 108, 111, 113, 116, 117, 146, 147, 148, 153, 159, 165
collusion 100, 102, 106, 107, 108, 109, 110, 116, 150, 152, 155, 156
collusive behaviour 102
Comparative Institutional Analysis 33, 34
competition 15
completely globalised markets 62, 87
concentration ratios 75
conglomerate effects 97
constraints to competition 18, 20
consumer goods 127
consumer loyalty 66, 77, 90, 135, 136
consumption patterns 94, 110, 163
contestability 23
coordinated behaviour 151, 155, 158
coordinated effects 100, 113
Cournot Model 148
Court of First Instance 6, 146, 151, 152, 156, 167, 169, 173, 174
credibility 3
CROWN CORK & SEAL/METALBOX 67
customer loyalty 77, 95, 121, 126, 127, 128, 162
dealer networks 126
demand substitutability 48, 50, 55, 57
deregulation 41
DG Comp 169, 170, 171, 173, 174
divestiture 20
dominant position 68, 98
duopoly 104
dynamic efficiency 15
economies of scale 117
economies of scope 99
efficiencies 10, 60, 79, 81, 82, 83, 84
efficiency 69
Elzinga-Hogarty Test 75
employment effects 4
ENSO/STORA 95, 96
European Court of Justice 149, 167
European public interest 170
excess capacity 138
experience goods 78, 135, 162
export diversification strategy 41
EXXON/MOBIL 111, 112, 119, 152
Folk theorem 105
foreign direct investment 39, 40, 43, 44
Form CO 9, 10, 64
frequency 85
fringe firms 93, 108, 111, 135, 150, 151, 154, 156
game theory 25, 26, 27, 28, 104, 105, 106
GATS 45
GATT 41, 42
GENCOR/LONRHO 69, 111, 119, 150, 152
General Agreement on Trade in Services (GATS) 43
globalisation 39, 50
globalised markets 162
goal of competition policy 12, 20
governance structures 30
Guidelines 70, 97, 117
Guidelines on horizontal mergers 113, 169
Harvard approach 13, 14, 16, 17, 38
Harvard school 102
HENKEL/NOBEL 110
Hirschman Herfindahl Index 57, 70, 114
hit-and-run 22, 24
HOLLAND MEDIA GROEP 79
hybrid forms 49
import substitution 41
industrial policy 146, 172, 174
international interaction costs 39, 43, 46, 49, 62
Internet 47, 93, 109
investment goods 127
irreversible investment 91
KABEL 110
KALI&SALZ/MDK/TREUHAND 111, 112, 119, 149
KIMBERLY CLARK/SCOTT 65, 129
liberalisation 40, 43, 44, 45
local loop 142, 143, 144, 145, 146
local loop networks 140
local loop unbundling 141
MAN/AUWÄRTER 65, 119, 120, 124, 125, 171
MANNESMANN/HOESCH 69
MANNESMANN/ORANGE 139
MANNESMANN/VALLOUREC/ILVA 70, 95
market economy 1
market power 55
market process 16
market shares 20, 68, 86, 98
market structure 16, 24, 35
MCI/WORLDCOM 139
MCI-WORLDCOM/SPRINT 139
MERCEDES-BENZ/KÄSSBOHRER 65, 67, 69, 95, 96, 119, 120, 121, 125, 128, 171
METSO/SVEDALA 70
mobility of supply 93, 108
monitoring body 174
NAFTA 42
Nash equilibrium 25, 26, 27
natural monopoly 22
NESTLÉ/PERRIER 65, 73, 96, 111, 119, 146, 148, 149
new industrial organisation 25
New Institutional Economics 28, 32
NORDIC SATELLITE 79
NORDIC SATELLITE DISTRIBUTION 67, 69
NORSKE SKOG/PARENCO/WALSUM 113, 119, 157
oligopolistic interdependence 101, 102, 109, 147
oligopolistic interdependency 26
opportunistic behaviour 30
outcome-oriented competition approach 26
partially globalised markets 62, 87, 162
peer review 168
per se rules 20
potential competition 22, 63, 69, 94, 98, 114, 123, 130, 131, 144, 151, 153, 164, 168
potential entrant 23
precedent 3
predictability 1, 2, 5, 97
preferences 93, 109
price fixing agreements 20
price regulations 145
PRICE WATERHOUSE/COOPERS & LYBRAND 111, 112
price-correlation analysis 73
Prisoners’ Dilemma 105, 106
process-oriented approaches 26
PROCTER&GAMBLE/VP SCHICKEDANZ 65
product homogeneity 103
public procurement 121
regional markets 62, 87, 162
regional trade associations 42
relevant market 55, 56, 63, 64, 66, 97, 161, 162
remediableness 33
remedies 11, 172
RENAULT/VOLVO 65
RTL/VERONICA/ENDEMOL 67, 70
rule of reason 20
SCA HYGIENE PRODUCTS/CARTOINVEST 119, 129, 131
SCA/METSÄ TISSUE 64, 66, 67, 77, 95, 119, 129, 133, 134
search goods 78, 135
Second Best 33
separation of powers 12, 167, 173
service networks 126
shock-analysis 74
SIC test 70
SIEMENS/ITALTEL 70, 95
single dominance 58
SSNIP test 161
ST. GOBAIN/NOM/WACKER CHEMIE 95
ST. GOBAIN/WACKER CHEMIE/NOM 70
Stackelberg game 158, 159
static efficiency 15
strategic alliances 46
strategic uncertainty 25
structure-conduct-performance 13, 14, 26, 34, 88
structure-conduct-performance paradigm 102, 104
substitutability 63
sunk costs 91, 92, 94, 99, 107, 116, 117, 165
supply substitutability 56
supply-side substitutability 72, 161
supply-side substitution 65
surplus capacities 103
survey 8
survivor test 18
technological change 93, 108, 151
TELIA/SONERA 139
TELIA/TELENOR 119, 139, 142, 145
theory of contestable markets 22
theory of limit pricing 91
THORN EMI/VIRGIN MUSIC 110
trade-off model 18
trade-related aspects of intellectual property rights (TRIPS) 43
trade-related investment measures (TRIMS) 43
Transaction Cost Economics 28, 29, 31, 32, 34, 38, 46, 84, 92
transaction costs 40, 102, 135, 163, 164
transport costs 46, 87, 93, 108, 129, 131, 134, 138, 139
TRIMS 45
trust goods 78, 135, 162
type I errors 36, 37
type II errors 36, 37
uncertainty 1, 2, 31, 85
unilateral effects 112
universalisable rules 2
UPM-KYMMENE/HAINDL 113, 119, 157
US merger policy 97
VARTA/BOSCH 64, 72, 110
Veblen-Effect 78
VODAFONE AIRTOUCH/MANNESMANN 139
VOLVO/SCANIA 64, 65, 67, 70, 95, 98, 99, 119, 120, 122, 123, 125, 126, 128, 171
Williamson trade-off 19
workable competition 13, 15
World Trade Organisation (WTO) 42