THE QUARTERLY JOURNAL OF ECONOMICS

Vol. CXXV    November 2010    Issue 4

WHAT COMES TO MIND∗

NICOLA GENNAIOLI AND ANDREI SHLEIFER

We present a model of intuitive inference, called "local thinking," in which an agent combines data received from the external world with information retrieved from memory to evaluate a hypothesis. In this model, selected and limited recall of information follows a version of the representativeness heuristic. The model can account for some of the evidence on judgment biases, including conjunction and disjunction fallacies, but also for several anomalies related to demand for insurance.

I. INTRODUCTION

Beginning in the early 1970s, Daniel Kahneman and Amos Tversky (hereafter KT 1972, 1974, 1983) published a series of remarkable experiments documenting significant deviations from the Bayesian theory of judgment under uncertainty. Although KT's heuristics and biases program has survived substantial experimental scrutiny, models of heuristics have proved elusive.1 In this paper, we present a memory-based model of probabilistic inference that accounts for quite a bit of the experimental evidence.

Heuristics describe how people evaluate hypotheses quickly, based on what first comes to mind. People may be entirely capable of more careful deliberation and analysis, and perhaps of better decisions, but not when they do not think things through. We model such quick and intuitive inference, which we refer to as "local thinking," based on the idea that only some decision-relevant data come to mind initially.

We describe a problem in which a local thinker evaluates a hypothesis in light of some data, but with some residual uncertainty remaining. The combination of the hypothesis and the data primes some thoughts about the missing data. We refer to realizations of the missing data as scenarios. We assume that working memory is limited, so that some scenarios, but not others, come to the local thinker's mind. He makes his judgment in light of what comes to mind, but not of what does not.

Our approach is consistent with KT's insistence that judgment under uncertainty is similar to perception. Just as an individual fills in details from memory when interpreting sensory data (for example, when looking at the duck–rabbit figure or when judging distance from the height of an object), the decision maker recalls missing scenarios when he evaluates a hypothesis. Kahneman and Frederick (2005) describe how psychologists think about this process: "The question of why thoughts become accessible—why particular ideas come to mind at particular times—has a long history in psychology and encompasses notions of stimulus salience, associative activation, selective attention, specific training, and priming" (p. 271).

Our key assumption describes how scenarios become accessible from memory. We model such accessibility by specifying that scenarios come to mind in order of their representativeness, defined as their ability to predict the hypothesis being evaluated relative to other hypotheses. This assumption formalizes aspects of KT's representativeness heuristic, modeling it as the selection of stereotypes through limited and selective recall.

∗ We are deeply grateful to Josh Schwartzstein for considerable input, and to Pedro Bordalo, Shane Frederick, Xavier Gabaix, Matthew Gentzkow, Daniel Hojman, Elizabeth Kensinger, Daniel Kahneman, Lawrence Katz, Scott Kominers, David Laibson, Sendhil Mullainathan, Giacomo Ponzetto, Drazen Prelec, Matthew Rabin, Antonio Rangel, Jesse Shapiro, Jeremy Stein, Richard Thaler, and three anonymous referees for extremely helpful comments. Gennaioli thanks the Spanish Ministerio de Ciencia y Tecnologia (ECO 2008-01666 and Ramon y Cajal grants), the Barcelona GSE Research Network, and the Generalitat de Catalunya for financial support. Shleifer thanks the Kauffman Foundation for research support.

1. Partial exceptions include Griffin and Tversky (1992), Tversky and Koehler (1994), Barberis, Shleifer, and Vishny (1998), Rabin and Schrag (1999), Mullainathan (2000), and Rabin (2002), to which we return in Section III.C.

© 2010 by the President and Fellows of Harvard College and the Massachusetts Institute of Technology. The Quarterly Journal of Economics, November 2010.
The combination of both limited and selected recall drives the main results of the paper and helps account for biases found in psychological experiments. In the next section, we present an example illustrating the two main ideas of our approach. First, the data and the hypothesis being evaluated together prime the recall of scenarios used to represent this hypothesis. Second, the representative scenarios that are recalled need not be the most likely ones, and it is precisely in those instances when a hypothesis is represented by an unlikely scenario that judgment is severely biased. In Section III, we present the formal model and compare it to some earlier theoretical research on heuristics and biases.


In Section IV, we present the main theoretical results of the paper and establish four propositions. The first two deal with the magnitude of judgment errors. Proposition 1 shows how judgment errors depend on the likelihood of the recalled (representative) scenarios. Proposition 2 then shows how a local thinker reacts to data, and in particular overreacts to data that change his representation of the hypothesis he evaluates. The next two propositions deal with perhaps the most fascinating judgment biases, namely failures of extensionality. Proposition 3 describes the circumstances in which a local thinker exhibits the conjunction fallacy, the belief that a specific instance of an event is more likely than the event itself. Proposition 4 then shows how a local thinker exhibits the disjunction fallacy, the belief that the probability of a broadly described group of events (such as "other") is lower than the total probability of events in that group when those events are explicitly mentioned.

In Section V, we show how the propositions shed light on a range of experimental findings on heuristics and biases. In particular, we discuss the experiments on neglect of base rates and on insensitivity to predictability, as well as on the conjunction and disjunction fallacies. Among other things, the model accounts for the famous Linda (KT 1983) and car mechanic (Fischhoff, Slovic, and Lichtenstein 1978) experiments.

In Section VI, we apply the model, and in particular its treatment of the disjunction fallacy, to individual demand for insurance. Cutler and Zeckhauser (2004) and Kunreuther and Pauly (2005) summarize several anomalies in that demand, including overinsurance of specific narrow risks, underinsurance of broad risks, and preference for low deductibles in insurance policies. Our model sheds light on these anomalies. Section VII concludes by discussing some broader conceptual issues.

II. AN EXAMPLE: INTUITIVE REASONING IN AN ELECTORAL CAMPAIGN

We illustrate our model in the context of a voter's reaction to a blunder committed by a political candidate. Popkin (1991) argues that intuitive reasoning plays a key role in this context and helps explain the significance that ethnic voters in America attach to candidates' knowledge of their customs. He further suggests that although in many instances voters' intuitive assessments work fairly well, they occasionally allow even minor blunders such as the one described below to influence their votes.


In 1972, during the New York primaries, Senator George McGovern of South Dakota was courting the Jewish vote, trying to demonstrate his sympathy for Israel. As Richard Reeves wrote for New York magazine in August:

"During one of McGovern's first trips into the city he was walked through Queens by city councilman Matthew Troy and one of their first stops was a hot dog stand. 'Kosher?' said the guy behind the counter, and the prairie politician looked even blanker than he usually does in big cities. 'Kosher!' Troy coached him in a husky whisper. 'Kosher and a glass of milk,' said McGovern." (Popkin 1991, p. 2)

Evidently, McGovern was not aware that milk and meat cannot be combined in a kosher meal. We use this anecdote to introduce our basic formalism and to show how "local thinking" can illuminate the properties of voters' intuitive assessments. We start with a slightly different example in which intuitive assessments work well and then return to hot dogs.

Suppose that a voter only wants to assess the probability that a candidate is qualified. Before the voter hears the candidate say anything, he assesses this probability to be 1/2. Suppose that the candidate declares at a Jewish campaign event that Israel was the aggressor in the 1967 war, an obvious inaccuracy. How does the voter's assessment change? For a Bayesian voter, the crucial question is the extent to which this statement—which surely signals the candidate's lack of familiarity with Jewish concerns—is also informative about the candidate's overall qualification. Suppose that the distribution of candidate types conditional on calling Israel the aggressor is described by Table I.A. Not only is "calling Israel the aggressor in the 1967 war" very informative about a candidate's unfamiliarity with Jewish concerns (82.5% of the candidates who say this are unfamiliar), but unfamiliarity is in turn very informative about qualification, at least to a Jewish voter (relative to a prior of 1/2 before calling Israel the aggressor). The latter property is reflected in the qualification estimate of a Bayesian voter, which is equal to

(1)  Pr(qualified) = Pr(qualified, familiar) + Pr(qualified, unfamiliar) = 0.175,

where we suppress conditioning on "calling Israel aggressor." The Bayesian reduces his assessment of qualification from 50% to 17.5% because the blunder is so informative. Suppose now that Table I.A, rather than being immediately available to the voter, is stored in his associative long-term

TABLE I.A
DISTRIBUTION OF CANDIDATE TYPES: BLUNDER INFORMATIVE ABOUT QUALIFICATION
(entries are joint probabilities, conditional on "Calls Israel aggressor in 1967 war")

                               Familiarity with Jewish concerns
Qualification of candidate     Familiar        Unfamiliar
Qualified                      0.15            0.025
Unqualified                    0.025           0.8

memory and that—due to working memory limits—not all candidate types come to mind to aid the voter's evaluation of the candidate's qualification.2 We call such a decision maker a "local thinker" because, unlike the Bayesian, he does not use all the data in Table I.A but only the information he obtains by sampling specific examples of qualified and unqualified candidates from memory. Crucially, we assume in KT's spirit that the candidates who first come to the voter's mind are representative, or stereotypical, qualified and unqualified candidates. Specifically, the voter's mind automatically fits the most representative familiarity level—or "scenario"—for each level of qualification of the candidate. We formally define the representative scenario as the familiarity level that best predicts, that is, is relatively more associated with, the respective qualification level. The representative scenarios for a qualified and an unqualified candidate are then given by

(2)  s(qualified) = arg max_{s ∈ {familiar, unfamiliar}} Pr(qualified | s),

(3)  s(unqualified) = arg max_{s ∈ {familiar, unfamiliar}} Pr(unqualified | s).

In Table I.A, this means that a stereotypical qualified candidate is familiar with Jewish concerns, whereas a stereotypical unqualified one is unfamiliar with such concerns.3 This process reduces the voter's actively processed information to the diagonal entries of Table I.B.

2. Throughout the paper, we take the long-term associative memory database (in this example, Table I.A) as given. Section III discusses how, depending on the problem faced by the agent, such a database might endogenously change and what could be some of the consequences for judgments.

3. Indeed, Pr(qualified | familiar) = .15/(.15 + .025) ≈ .86 > .03 ≈ .025/(.025 + .8) = Pr(qualified | unfamiliar). The reverse is true for an unqualified candidate.
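The stereotype selection in equations (2)–(3) can be sketched in a few lines of Python (an illustration of ours, not code from the paper; all names are hypothetical):

```python
# Joint probabilities from Table I.A, conditional on the blunder
# (first coordinate: qualification; second: familiarity).
joint = {
    ("qualified", "familiar"): 0.15,
    ("qualified", "unfamiliar"): 0.025,
    ("unqualified", "familiar"): 0.025,
    ("unqualified", "unfamiliar"): 0.8,
}

def representative_scenario(hypothesis):
    # arg max over s of Pr(hypothesis | s) = Pr(hypothesis, s) / Pr(s),
    # as in equations (2)-(3).
    def posterior(s):
        pr_s = sum(p for (h, s2), p in joint.items() if s2 == s)
        return joint[(hypothesis, s)] / pr_s
    return max(["familiar", "unfamiliar"], key=posterior)

print(representative_scenario("qualified"))    # familiar
print(representative_scenario("unqualified"))  # unfamiliar
```

As in the text, the stereotypical qualified candidate is familiar (Pr(qualified | familiar) ≈ .86 beats ≈ .03), and the stereotypical unqualified candidate is unfamiliar.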


TABLE I.B
LOCAL THINKER'S REPRESENTATION: BLUNDER INFORMATIVE ABOUT QUALIFICATION
(entries are joint probabilities, conditional on "Calls Israel aggressor in 1967 war"; the local thinker retains only the diagonal entries, shown here in brackets)

                               Familiarity with Jewish concerns
Qualification of candidate     Familiar        Unfamiliar
Qualified                      [0.15]          0.025
Unqualified                    0.025           [0.8]

TABLE I.C
LOCAL THINKER'S REPRESENTATION: BLUNDER UNINFORMATIVE ABOUT QUALIFICATION
(entries are joint probabilities, conditional on "Drinks milk with a hot dog")

                               Familiarity with Jewish concerns
Qualification of candidate     Familiar        Unfamiliar
Qualified                      0.024           0.43
Unqualified                    0.026           0.52

Because a local thinker considers only the stereotypical qualified and unqualified candidates, his assessment (indicated by the superscript L) is equal to

(4)  Pr^L(qualified) = Pr(qualified, familiar) / [Pr(qualified, familiar) + Pr(unqualified, unfamiliar)] ≈ .158.

Comparing (4) with (1), we see that a local thinker does almost as well as a Bayesian. The reason is that in Table I.A stereotypes capture a big chunk of the respective hypotheses’ probabilities. Although the local thinker does not recall that some unfamiliar candidates are nonetheless qualified, this is not a big problem for assessment because in reality, and not only in stereotypes, familiarity and qualification largely go together. The same idea suggests, however, that local thinkers sometimes make very biased assessments. Return to the candidate unaware that drinking milk with hot dogs is not kosher. Suppose that, after this blunder, the distribution of candidate types is as shown in Table I.C. As in the previous case, in Table I.C the candidate’s drinking milk with hot dogs is very informative about his unfamiliarity with Jewish concerns, but now such unfamiliarity is extremely


uninformative about the candidate's qualifications. Indeed, 95% of the candidates do not know the rules of kashrut, including the vast majority of both the qualified and the unqualified ones. In this example a Bayesian assesses Pr(qualified) = .454; he realizes that drinking milk with a hot dog is nearly irrelevant to qualification. The local thinker, in contrast, still views the stereotypical qualified candidate as one familiar with his concerns and the stereotypical unqualified candidate as unfamiliar. Formally, the scenario "familiar" yields a higher probability of the candidate being qualified (.024/(.024 + .026) = .48) than the scenario "unfamiliar" (.43/(.43 + .52) ≈ .45). Likewise, the scenario "unfamiliar" yields a higher probability of the candidate being unqualified (≈.55) than the scenario "familiar" (.52). The local thinker then estimates that

(5)  Pr^L(qualified) = Pr(qualified, familiar) / [Pr(qualified, familiar) + Pr(unqualified, unfamiliar)] ≈ .044,

which differs from the Bayesian's assessment by a factor of nearly 10! In contrast to the previous case, the local thinker grossly overreacts to the blunder and misestimates probabilities. Now local thinking generates massive information loss and bias.

Why this difference in the examples? After all, in both examples the stereotypical qualified candidate is familiar with the voter's concerns, whereas the stereotypical unqualified candidate is unfamiliar because, in both cases, familiarity and qualification are positively associated in reality. The key difference lies in how much of the probability of each hypothesis is accounted for by the stereotype. In the initial, more standard, example, almost all qualified candidates are familiar and unqualified ones are unfamiliar, so stereotypical qualified and unqualified candidates are both extremely common. When stereotypes are not only representative but also likely, the local thinker's bias is kept down. In the second example, in contrast, the bulk of both qualified and unqualified candidates are unfamiliar with the voter's concerns, which implies that the stereotypical qualified candidate (familiar with these concerns) is very uncommon, whereas the stereotypical unqualified candidate is very common. By focusing on the stereotypical candidates, the local thinker drastically underestimates qualification because he forgets that many qualified candidates


are also unfamiliar with the rules of kashrut! When the stereotype for one hypothesis is much less likely than that for the other hypothesis, the local thinker's bias is large.

Put differently, in our examples, after seeing a blunder the local thinker always downgrades qualification by a large amount because the stereotypical qualified candidate is very unlikely to commit any blunder. This process leads to good judgments in situations where the blunder is informative not only about the dimension defining the stereotype (familiarity) but also about qualification (Table I.A), but it leads to large biases when the blunder is informative about the dimension defining the stereotype but not about the target assessment of qualification (Table I.C). We capture this dichotomy with the distinction between the representativeness and the likelihood of scenarios. This distinction plays a key role in accounting for the biases generated by the use of heuristics.

A further connection of our work to research in psychology is the idea of attribute substitution. According to Kahneman and Frederick (2005, p. 269), "When confronted with a difficult question, people may answer an easier one instead and are often unaware of the substitution." Instead of answering a hard question, "Is the candidate qualified?," the voter answers an easier one, "Is the candidate familiar with my concerns?" We show that such attribute substitution might occur because, rather than thinking about all possibilities, people think in terms of stereotypical candidates, which associate qualification and familiarity. In many situations, such substitution works, as in our initial example, where familiarity is a good stand-in for qualification. But in some situations, the answer to the substitute question is not the same as the answer to the original one, as when many candidates unfamiliar with the rules of kashrut are nonetheless qualified.
It is in those situations that intuitive reasoning leads to biased judgment, as our analysis seeks to show.
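The contrast between the two examples can be checked numerically. The sketch below (ours, with illustrative names, not code from the paper) computes the Bayesian estimate of equation (1) and the b = 1 local-thinker estimate of equations (4)–(5) for both tables:

```python
# Bayesian vs. local-thinker assessments on Tables I.A and I.C.
# The Bayesian sums the "qualified" row (equation (1)); the local
# thinker keeps only the two stereotypes (equations (4)-(5)).

def bayesian(joint):
    return sum(p for (h, s), p in joint.items() if h == "qualified")

def local_thinker(joint):
    def stereotype(h):
        # Scenario maximizing Pr(h | s), as in equations (2)-(3).
        def posterior(s):
            pr_s = sum(p for (h2, s2), p in joint.items() if s2 == s)
            return joint[(h, s)] / pr_s
        return max(["familiar", "unfamiliar"], key=posterior)
    q = joint[("qualified", stereotype("qualified"))]
    u = joint[("unqualified", stereotype("unqualified"))]
    return q / (q + u)

table_IA = {("qualified", "familiar"): 0.15, ("qualified", "unfamiliar"): 0.025,
            ("unqualified", "familiar"): 0.025, ("unqualified", "unfamiliar"): 0.8}
table_IC = {("qualified", "familiar"): 0.024, ("qualified", "unfamiliar"): 0.43,
            ("unqualified", "familiar"): 0.026, ("unqualified", "unfamiliar"): 0.52}

print(round(bayesian(table_IA), 3), round(local_thinker(table_IA), 3))  # 0.175 0.158
print(round(bayesian(table_IC), 3), round(local_thinker(table_IC), 3))  # 0.454 0.044
```

In Table I.A the two estimates nearly coincide, while in Table I.C the local thinker's .044 is roughly a tenth of the Bayesian .454: the same recall rule produces a small bias in one table and a massive one in the other.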

III. THE MODEL

The world is described by a probability space (X, π), where X ≡ X1 × · · · × XK is a finite state space generated by the product of K ≥ 1 dimensions and the function π : X → [0, 1] maps each element x ∈ X into a probability π(x) ≥ 0 such that Σ_{x ∈ X} π(x) = 1. In the tables in Section II, the dimensions of X are the candidate's qualification and his familiarity with voter concerns; that is, K = 2 (conditional on the candidate's blunder, which is a dimension kept


implicit), the elements x ∈ X are candidate types, and the entries in the tables represent the probability measure π. An agent evaluates the probability of N > 1 hypotheses h1, . . . , hN in light of data d. Hypotheses and data are events of X; that is, hr (r = 1, . . . , N) and d are subsets of X. If the agent receives no data, d = X: nothing is ruled out. Hypotheses are exhaustive but may be nonexclusive. Exhaustiveness is not crucial but avoids trivial cases where a hypothesis is overestimated simply because the agent cannot conceive of any alternative to it.

In (X, π), the probability of hr given d is determined by Bayes' rule as

(6)  Pr(hr | d) = Pr(hr ∩ d) / Pr(d) = Σ_{x ∈ hr ∩ d} π(x) / Σ_{x ∈ d} π(x).

In our example, (1) follows from (6) because in Table I.A the probabilities are normalized by Pr(calls Israel aggressor). As we saw in Section II, a local thinker may fail to produce the correct assessment (6) because he considers only a subset of elements x, those belonging to what we henceforth call his "represented state space."

III.A. The Represented State Space

The represented state space is shaped by the recall of elements in X prompted by the hypotheses hr, r = 1, . . . , N. Recall is governed by two assumptions. First, working memory limits the number of elements recalled by the agent to represent each hypothesis. Second, the agent recalls for each hypothesis the most "representative" elements. We formalize the first assumption as follows:

A1 (Local Thinking). Given d, let Mr denote the number of elements in hr ∩ d, r = 1, . . . , N. The agent represents each hr ∩ d using a number min(Mr, b) of elements x in hr ∩ d, where b ≥ 1 is the maximum number of elements the agent can recall per hypothesis.

The set hr ∩ d includes all the elements consistent with hypothesis hr and with the data d. When b ≥ Mr, the local thinker recalls all of these elements, and his representation of hr ∩ d is perfect. The more interesting case occurs when at least some hypotheses are broad, consisting of Mr > b elements.4 In this case, the agent's representations are imperfect.

4. A1 is one way to capture limited recall. Our substantive results would not change if we alternatively assumed that the agent discounts the probability of certain elements.


In particular, a fully local thinker, with b = 1, must collapse the entire set hr ∩ d into a single element. To do so, he automatically selects what we call a "scenario." To give an intuitive but still formal definition of a scenario, consider the class of problems in which hr and d specify exact values (rather than ranges) for some dimensions of X. In this case, hr ∩ d takes the form

(7)  hr ∩ d ≡ {x ∈ X | xi = x̂i}  for a given set of values x̂i ∈ Xi, i ∈ {1, . . . , K},

where x̂i is the exact value taken by the ith dimension in the hypothesis or data. The remaining dimensions are unrestricted. This is consistent with the example in Section II, where each hypothesis specifies one qualification level (e.g., unqualified), but the remaining familiarity dimension is left free (once more leaving the blunder implicit). In this context, a scenario for a hypothesis is a specification of its free familiarity dimension (e.g., unfamiliar). More generally, when a hypothesis hr ∩ d belongs to class (7), its possible scenarios are defined as follows:

DEFINITION 1. Denote by Fr the set of dimensions in X left free by hr ∩ d. If Fr is nonempty, a scenario s for hr ∩ d is any event s ≡ {x ∈ X | xt = x̄t for all t ∈ Fr}, for given values x̄t ∈ Xt. If Fr is empty, the scenario for hr ∩ d is s ≡ X. Sr is the set of possible scenarios for hr ∩ d.

A scenario fills in the details missing from the hypothesis and data, identifying a single element in hr ∩ d, which we denote by s ∩ hr ∩ d ∈ X. How do scenarios come to mind? We assume that hypotheses belonging to class (7) are represented as follows:

A2 (Recall by Representativeness). Fix d and hr. Then the representativeness of scenario sr ∈ Sr for hr given d is defined as

(8)  Pr(hr | sr ∩ d) = Pr(hr ∩ sr ∩ d) / [Pr(hr ∩ sr ∩ d) + Pr(h̄r ∩ sr ∩ d)],

where h̄r is the complement X \ hr in X of hypothesis hr. The agent represents hr with the b most "representative" scenarios sr^k ∈ Sr, k = 1, . . . , b, where the index k is decreasing in representativeness and where we set sr^k = ∅ for k > Mr.

A2 introduces two key notions. First, A2 defines the representativeness of a scenario for a hypothesis hr as the degree to which that scenario is associated with hr relative to its complement h̄r. Second, A2 posits that the local thinker represents hr


by recalling only the b most "representative" scenarios for it. The most interesting case arises when b = 1, as the agent represents hr with the single most representative scenario sr^1. It is useful to call the intersection of the data, the hypothesis, and that scenario (i.e., sr^1 ∩ hr ∩ d ∈ X) the "stereotype" that immediately comes to the local thinker's mind. Expression (8) then captures the idea that an element of a hypothesis or class is stereotypical not only if it is common in that class, but also—and perhaps especially—if it is uncommon in other classes. In our model, the stereotype for one hypothesis is independent of the other hypotheses being explicitly evaluated by the agent: expression (8) refers only to the relationship between a hypothesis hr and its complement h̄r in X.

From A2, the represented state space is immediately defined as follows:

DEFINITION 2. Given data d and hypotheses hr, r = 1, . . . , N, the agent's representation of any hypothesis hr is defined as h̃r(d) ≡ ∪_{k=1,...,b} (sr^k ∩ hr ∩ d), and the agent's represented state space X̃ is defined as X̃ ≡ ∪_{r=1,...,N} h̃r(d).

The represented state space is simply the union of all elements recalled by the agent for each of the assessed hypotheses. Definition 2 applies to hypotheses belonging to the class in (7), but it is easy to extend it to general hypotheses which, rather than attributing exact values, restrict the range of some dimensions of X. The Appendix shows how to do this and how to apply our model to the evaluation of these hypotheses as well. The only result in what follows that relies on restricting the analysis to the class of hypotheses in (7) is Proposition 1. As we show in the Appendix, all other results can be easily extended to fully general classes of hypotheses.

III.B. Probabilistic Assessments by a Local Thinker

In the represented state space, the local thinker computes the probability of ht as

(9)  Pr^L(ht | d) = Pr(h̃t(d)) / Pr(X̃),

which is the probability of the representation of ht divided by that of the represented state space X̃. Evaluated at b = 1, (9) is the counterpart of expression (4) in Section II.
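For exclusive and exhaustive hypotheses on a two-dimensional state space, Definition 2 and expression (9) admit a simple computational sketch (our own illustration with hypothetical names; `pi` maps a (hypothesis, scenario) pair to its joint probability):

```python
# Sketch of recall by representativeness (A2) and expression (9):
# each hypothesis h is represented by its b most representative
# scenarios, and Pr^L(h) is the represented probability mass of h
# divided by the total represented mass.

def local_probability(pi, hypotheses, scenarios, target, b=1):
    def representativeness(h, s):
        # Expression (8): Pr(h, s) / [Pr(h, s) + Pr(not-h, s)].
        in_h = pi.get((h, s), 0.0)
        out_h = sum(pi.get((h2, s), 0.0) for h2 in hypotheses if h2 != h)
        return in_h / (in_h + out_h) if in_h + out_h > 0 else 0.0

    def represented_mass(h):
        # Keep the b most representative scenarios for h.
        top = sorted(scenarios, key=lambda s: representativeness(h, s),
                     reverse=True)[:b]
        return sum(pi.get((h, s), 0.0) for s in top)

    # Expression (9): probability within the represented state space.
    return represented_mass(target) / sum(represented_mass(h) for h in hypotheses)

# Table I.A joint probabilities.
pi = {("qualified", "familiar"): 0.15, ("qualified", "unfamiliar"): 0.025,
      ("unqualified", "familiar"): 0.025, ("unqualified", "unfamiliar"): 0.8}
hs, ss = ["qualified", "unqualified"], ["familiar", "unfamiliar"]
print(round(local_probability(pi, hs, ss, "qualified", b=1), 3))  # 0.158
print(round(local_probability(pi, hs, ss, "qualified", b=2), 3))  # 0.175
```

With b = 2 every scenario is recalled, so the estimate collapses to the Bayesian posterior, matching the text's observation that biases can arise only when b < Mr for some hypothesis.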


Expression (9) highlights the role of local thinking. If b ≥ Mr for all r = 1, . . . , N, then X̃ = X ∩ d, h̃t(d) ≡ ht ∩ d, and (9) boils down to Pr(ht ∩ d)/Pr(d), which is the Bayesian's estimate of Pr(ht | d). Biases can arise only when the agent's representations are limited, that is, when b < Mr for some r. When the hypotheses are exclusive (ht ∩ hr = ∅ for all t ≠ r), (9) can be written as

(9')  Pr^L(ht | d) = [Σ_{k=1,...,b} Pr(st^k | ht ∩ d)] Pr(ht ∩ d) / Σ_{r=1,...,N} [Σ_{k=1,...,b} Pr(sr^k | hr ∩ d)] Pr(hr ∩ d),

where Pr(s | hr ∩ d) is the likelihood of scenario s for hr, or the probability of s when hr is true. The bracketed terms in (9') measure the share of a hypothesis's total probability captured by its representation. Equation (9') says that if the representations of all hypotheses are equally likely (all bracketed terms are equal), the estimate is perfect, even if memory limitations are severe. Otherwise, biases may arise.

III.C. Discussion of the Setup and the Assumptions

In our model, the assessed probability of a hypothesis depends on (i) how the hypothesis itself affects its own representation and (ii) which hypotheses are examined in conjunction with it. The former feature follows from assumption A2, which posits that representativeness shapes the ease with which information about a hypothesis is retrieved from memory. KT (1972, p. 431) define representativeness as "a subjective judgment of the extent to which the event in question is similar in essential properties to its parent population or reflects the salient features of the process by which it is generated." Indeed, KT (2002, p. 23) discuss representativeness in terms related to our model's definition: "Representativeness tends to covary with frequency: common instances and frequent events are generally more representative than unusual instances and rare events," but they add that "an attribute is representative of a class if it is very diagnostic; that is, the relative frequency of this attribute is much higher in that class than in a relevant reference class." In other words, sometimes what is representative is not likely. As we show below, the use of representative


but unlikely scenarios for a hypothesis is what drives several of the KT biases.5

In our model, representative scenarios, or stereotypes, quickly pop to the mind of a decision maker, consistent with the idea—supported in cognitive psychology and neurobiology—that background information is a key input in the interpretation of external (e.g., sensory) stimuli.6 What prevents the local thinker from integrating all other scenarios consistent with the hypothesis, as a Bayesian would do, is assumption A1 of incomplete recall. This crucially implies that the assessment of a hypothesis depends on the hypotheses examined in conjunction with it, as the latter affect recall and thus the denominator in (9). In this respect, our model is related to Tversky and Koehler's (1994) support theory, which postulates that different descriptions of the same event may trigger different assessments. Tversky and Koehler characterize such nonextensional probability axiomatically, without deriving it from limited recall and representativeness.

The central role of hypotheses in priming which information is recalled is shared neither by existing models of imperfect memory (e.g., Mullainathan [2002]; Wilson [2002]) nor by models of analogical thinking (Jehiel 2005) or categorization (e.g., Mullainathan [2000]; Mullainathan, Schwartzstein, and Shleifer [2008]). In the latter models, it is data provision that prompts the choice of a category, inside which all hypotheses are evaluated.7 This formulation implies that categorical thinking cannot explain the conjunction and disjunction fallacies because, inside the chosen category, the agent uses a standard probability measure, so that events with larger (equal) extension will be judged more (equally) likely. Although in many situations categorical and local

5. This notion is in the spirit of Griffin and Tversky’s (1992) intuition that agents assess a hypothesis more in light of the strength of the evidence in its favour, a concept akin to our “representativeness,” than in light of such evidence’s weight, a concept akin to our “likelihood.” 6. In the model, background knowledge is summarized by the objective probability distribution π (x). This clearly need not be the case. Consistent with memory research, some elements x in X may get precedence in recall not because they are more frequent but because the agent has experienced them more intensely or because they are easier to recall. Considering these possibilities is an interesting extension of our model. 7. To give a concrete example, in the context of Section II a categorical Jewish voter observing a candidate drinking milk with a hot dog immediately categorizes him as unfamiliar with the voter’s concerns, and within that category the voter estimates the relative likelihood of qualified and unqualified candidates. The voter would make a mistake in assessing qualification, but only a small one when virtually all candidates were unfamiliar.


thinking lead to similar assessments, in situations related to KT anomalies they diverge.

To focus on the impact of hypotheses on the recall of stereotypes, we have taken the probability space (X, π) on which representations are created as given. However, the dimensions of X, and thus the space of potential stereotypes, may depend on the nature of the problem faced and the data received by the agent.8 We leave the analysis of this additional, potentially interesting source of framing effects in our setup to future research.

Our model is related to research on particular heuristics, including Barberis, Shleifer, and Vishny (1998), Rabin and Schrag (1999), Rabin (2002), and Schwartzstein (2009). In these papers, the agent has an incorrect model in mind and interprets the data in light of that model. Here, in contrast, the agent has the correct model, but not all parts of it come to mind.

Our approach also shares some similarities with models of sampling. Stewart, Chater, and Brown (2006) study how agents form preferences over choices by sampling their past experiences; Osborne and Rubinstein (1998) study equilibrium determination in games where players sample the performance of different actions. These papers do not focus on judgment under uncertainty. More generally, our key innovation is to consider a model in which agents sample not randomly but based on representativeness, leading them to systematically oversample certain specific memories and undersample others.

IV. BIASES IN PROBABILISTIC ASSESSMENTS

IV.A. Magnitude of Biases

We measure a local thinker's bias in assessing a generic hypothesis h1 against an alternative hypothesis h2 by deriving from expression (9′) the odds ratio

$$\frac{\Pr^{L}(h_1 \mid d)}{\Pr^{L}(h_2 \mid d)} = \left[\frac{\sum_{k=1}^{b} \Pr\left(s_1^k \mid h_1 \cap d\right)}{\sum_{k=1}^{b} \Pr\left(s_2^k \mid h_2 \cap d\right)}\right] \frac{\Pr(h_1 \mid d)}{\Pr(h_2 \mid d)}, \tag{10}$$

8. As an example, Table I.A could be generated by the following thought process. In a first stage, the campaign renders the "qualification" dimension (the table's rows) salient to the voter. Then the candidate's statement about Jewish issues renders the familiarity dimension (the table's columns) salient, perhaps because the statement is so informative about the candidate's familiarity with Jewish concerns.


TABLE II
DISTRIBUTION OF CANDIDATE TYPES

Data d        Familiar   Unfamiliar
Qualified       π1          π2
Unqualified     π3          π4

where Pr(h1 | d)/Pr(h2 | d) is a Bayesian's estimate of the odds of h1 relative to h2. The bracketed term captures the likelihood of the representation of h1 relative to h2. The odds of h1 are overestimated if and only if the representation of h1 is more likely than that of h2 (the bracketed term is greater than one). In a sense, a more likely representation induces the agent to oversample instances of the corresponding hypothesis, so that biases arise when one hypothesis is represented with relatively unlikely scenarios. When b = 1, expression (10) becomes

$$\frac{\Pr^{L}(h_1 \mid d)}{\Pr^{L}(h_2 \mid d)} = \frac{\Pr\left(s_1^1 \cap h_1 \cap d\right)}{\Pr\left(s_2^1 \cap h_2 \cap d\right)} = \left[\frac{\Pr\left(s_1^1 \mid h_1 \cap d\right)}{\Pr\left(s_2^1 \mid h_2 \cap d\right)}\right] \frac{\Pr(h_1 \mid d)}{\Pr(h_2 \mid d)}, \tag{11}$$

which highlights how representativeness and likelihood of scenarios shape biases. Overestimation of h1 is strongest when the representative scenario s_1^1 for h1 is also the most likely one for h1, whereas the representative scenario s_2^1 for h2 is the least likely one for h2. In this case, Pr(s_1^1 | h1 ∩ d) is maximal and Pr(s_2^1 | h2 ∩ d) is minimal, maximizing the bracketed term in (11). Conversely, underestimation of h1 is strongest when the representative scenario for h1 is the least likely but that for h2 is the most likely. This analysis illuminates the electoral campaign example of Section II. Consider the general distribution of candidate types after the local thinker receives data d (see Table II). We assume that, irrespective of the data provided, π1/(π1 + π3) > π2/(π2 + π4): being qualified is more likely among familiar than unfamiliar types, so familiarity with Jewish concerns is at least slightly informative about qualification. As in the examples of Section II, then, the representative scenario for h1 = unqualified is always s_1^1 = unfamiliar, whereas that for h2 = qualified is always s_2^1 = familiar. The voter represents h1 with (unqualified, unfamiliar) and h2 with (qualified, familiar), estimating that Pr^L(unqualified) = π4/(π4 + π1). The assessed odds ratio is thus equal



to π4/π1, which can be rewritten as

$$\frac{\Pr^{L}(\text{unqualified})}{\Pr^{L}(\text{qualified})} = \left[\frac{\pi_4/(\pi_3 + \pi_4)}{\pi_1/(\pi_1 + \pi_2)}\right] \frac{\pi_3 + \pi_4}{\pi_1 + \pi_2}, \tag{12}$$

which is the counterpart of (11). The bracketed term is the ratio of the likelihoods of scenarios for low and high qualifications (Pr(unfamiliar | unqualified)/Pr(familiar | qualified)). In Table I.A, where d = calling Israel the aggressor, judgments are good because π2 and π3 are small, which means that representative scenarios are extremely likely. In the extreme case when π2 = π3 = 0, all probability mass is concentrated on stereotypical candidates, local thinking entails no informational loss, and there is no bias. In this case, stereotypes are not only representative but also perfectly informative for both hypotheses. In contrast, in Table I.C (where d = drinks milk with a hot dog), judgments are bad because π1 and π3 are small, whereas π2 and π4 are large. If, at the extreme, π1 is arbitrarily small, the overestimation factor in (12) becomes infinite! Now h2 = qualified is hugely underestimated precisely because its representative "familiar" scenario is very unlikely relative to the "unfamiliar" scenario for h1 = unqualified. The point is that in thinking about stereotypical candidates, for whom qualification is positively associated with familiarity, the local thinker views evidence against "familiarity" as strong evidence against qualification, even if Table I.C tells us that this inference is unwarranted.

To see more generally how representativeness and likelihood determine the direction and strength of biases in our model, consider the following proposition, which, along with other results, is proved in Online Appendix 2 and is restricted to the class of hypotheses described in (7):

PROPOSITION 1. Suppose that the agent evaluates two hypotheses, h1 and h2, where the set of feasible scenarios for them is the same, namely S1 = S2 = S. We then have:

1. Representation. Scenarios rank in opposite order of representativeness for the two hypotheses, formally s_1^k = s_2^{M−k+1} for k = 1, . . . , M, where M is the number of scenarios in S.

2. Assessment bias.

i. If π(x) is such that Pr(s_1^k | h1 ∩ d) and Pr(s_1^k | h2 ∩ d) strictly decrease in k (at least for some k), the representativeness and likelihood of scenarios are positively related for h1, and negatively related for h2. The agent



thus overestimates the odds of h1 relative to h2 for every b < M. One can find a π(x) for which such overestimation is arbitrarily large. The opposite is true if Pr(s_1^k | h1 ∩ d) and Pr(s_1^k | h2 ∩ d) strictly increase in k.

ii. If π(x) is such that Pr(s_1^k | h1 ∩ d) decreases and Pr(s_1^k | h2 ∩ d) increases in k, the representativeness and likelihood of scenarios are positively related for both hypotheses. The agent over- or underestimates the odds of h1 relative to h2 at most by a factor of M/b.

Proposition 1 breaks down the roles of assumption A2 and of the probability distribution π(x) in generating biases.9 With respect to representations, A2 implies that when considering two exhaustive hypotheses, the most representative scenarios for h2 are the least representative ones for h1 and vice versa. This property (which does not automatically hold in the case of three or more hypotheses) formally highlights a key aspect of representativeness in A2, namely that stereotypes are selected to maximize the contrast between the representation of different hypotheses. Intuitively, the stereotype of a qualified candidate is very different from that of an unqualified one even when most qualified and unqualified candidates share a key characteristic (e.g., unfamiliarity).

What does this property of representations imply for biases? Part 2.i says that this reliance on different stereotypes causes pervasive biases when the most likely scenario is the same under both hypotheses. In this case, the use of a highly likely scenario for one hypothesis precludes its use for the competing hypothesis, leading to overestimation of the former. The resulting bias can be huge, as in Table I.C, and even infinite in the extreme. In contrast, part 2.ii captures the case where the representativeness and likelihood of scenarios go hand in hand for both hypotheses. Biases are now limited (but possibly still large), and the largest estimation bias occurs when the likelihood of one hypothesis is fully concentrated on one scenario, whereas the likelihood of the competing hypothesis is spread equally among its M scenarios. This implies that hypotheses whose distribution is spread out over a larger number of scenarios are more likely to be underestimated, the more so the more local is the agent's thinking (i.e., the smaller is b).

9. The proof of Proposition 1 provides detailed conditions on classes of problems where S1 = S2 = S holds.
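The decomposition in (12) is easy to verify numerically. The sketch below uses illustrative values of π1, . . . , π4 (not taken from the paper) satisfying the assumption that familiarity is informative about qualification:

```python
# Numerical check of the odds-ratio decomposition in (12) for Table II.
# The values of pi1..pi4 are illustrative, not taken from the paper.
pi1, pi2, pi3, pi4 = 0.30, 0.20, 0.10, 0.40

# Maintained assumption: qualification is more likely among familiar types.
assert pi1 / (pi1 + pi3) > pi2 / (pi2 + pi4)

# A local thinker (b = 1) represents h1 = unqualified by (unqualified, unfamiliar)
# and h2 = qualified by (qualified, familiar), so his assessed odds are pi4/pi1.
local_odds = pi4 / pi1

# Right-hand side of (12): [likelihood ratio of the representations] x Bayesian odds.
likelihood_ratio = (pi4 / (pi3 + pi4)) / (pi1 / (pi1 + pi2))
bayesian_odds = (pi3 + pi4) / (pi1 + pi2)

assert abs(local_odds - likelihood_ratio * bayesian_odds) < 1e-12
```

With these numbers the bracketed term is 4/3 > 1, so the local thinker overestimates the odds of "unqualified" relative to a Bayesian, as the proposition predicts.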



IV.B. Data Provision

Local thinkers' biases described in Proposition 1 do not rely in any fundamental way on data provision. However, looking more closely at the role of data in our model is useful for at least two reasons. First, as we show in Section V, the role of data helps illuminate some of the psychological experiments. Second, interesting real-world implications of our setup naturally concern agents' reaction to new information. To fix ideas, note that for a Bayesian, provision of data d is informative about h1 versus h2 if and only if it affects the odds ratio between them (i.e., if Pr(h1 ∩ d)/Pr(h2 ∩ d) ≠ Pr(h1)/Pr(h2)). To see how a local thinker reacts to data, denote by s_i^1 the representative scenario for hypothesis hi (i = 1, 2) if no data are provided, and by s_{i,d}^1 the representative scenario for hi (i = 1, 2) when d ⊂ X is provided. This notation is useful because the role of data in expression (11) depends on whether d affects the agent's representation of the hypotheses. We cannot say a priori whether data provision enhances or dampens bias, but the inspection of how expression (11) changes with data provision reveals that the overall effect of data provision combines two basic effects:

PROPOSITION 2. Suppose that b = 1 and the agent is given data d ⊂ X. If d is such that s_i^1 ∩ d ≠ ∅ for all i, then stereotypes and assessments do not change. In this case, the agent underreacts to d when d is informative. If, in contrast, d is such that s_i^1 ∩ d = ∅ for some i, then the stereotype for the corresponding hypothesis must change. In this case, the agent may overreact to uninformative d.

In the first case, stereotypes do not change with d (i.e., s_i^1 ∩ hi = s_{i,d}^1 ∩ hi ∩ d for all i), and so data provision affects neither the representation of hypotheses nor—according to (11)—probabilistic assessments.
If the data are informative, this effect captures the local thinker’s underreaction because—unlike the Bayesian—the local thinker does not revise his assessment after observing d. In the second case, the representations of one or both hypotheses must change with d. This change can generate overreaction by inducing the agent to revise his assessment even when a Bayesian would not do so. This effect increases overestimation of h1 if the new representation of h1 triggered by d is relatively more likely than that of h2 (if the bracketed term in (11)



rises). We refer to this effect as data "destroying" the stereotype of the hypothesis whose representation becomes relatively less likely.

IV.C. Conjunction Fallacy

The conjunction fallacy refers to the failure of experimental subjects to follow the rule that the probability of a conjoined event C&D cannot exceed the probability of event C or event D by itself. For simplicity, we only study the conjunction fallacy when b = 1 and when the agent is provided no data, but the fundamental logic of the conjunction fallacy does not rely on these assumptions. We consider the class of problems in (7), but in Online Appendix 2 we prove that Proposition 3 holds also for general classes of hypotheses. We focus on the so-called "direct tests," namely when the agent is asked to assess the probability of a conjoined event h1 ∩ h2 and of one of its constituent events such as h1 simultaneously. Denote by s_{1,2}^1 the scenario used to represent the conjunction h1 ∩ h2 and by s_1^1 the scenario used to represent the constituent event h1. In this case, the conjunction fallacy obtains in our model if and only if

$$\Pr\left(s_{1,2}^1 \cap h_1 \cap h_2\right) \geq \Pr\left(s_1^1 \cap h_1\right), \tag{13}$$

that is, when the probability of the represented conjunction is higher than the probability of the represented constituent event h1. Expression (13) is a direct consequence of (9), as in this direct test the denominators are identical and cancel out. The conjunction fallacy then arises only under the following necessary condition:

PROPOSITION 3. When b = 1, in a direct test of hypotheses h1 and h1 ∩ h2, Pr^L(h1 ∩ h2) ≥ Pr^L(h1) only if scenario s_1^1 is not the most likely for h1.

The conjunction fallacy arises only if the constituent event h1 prompts the use of an unlikely scenario and thus of an unlikely stereotype. To see why, rewrite (13) as

$$\Pr\left(s_{1,2}^1 \cap h_2 \mid h_1\right) \geq \Pr\left(s_1^1 \mid h_1\right). \tag{14}$$

The conjunction rule is violated when scenario s_1^1 is less likely than s_{1,2}^1 ∩ h2 for hypothesis h1. Note, though, that s_{1,2}^1 ∩ h2 is itself a scenario for h1 because s_{1,2}^1 ∩ h2 ∩ h1 identifies an element of X. Condition (14) therefore only holds if the representative



scenario s_1^1 is not the most likely scenario for h1, which proves Proposition 3.10

IV.D. Disjunction Fallacy

According to the disjunction rule, the probability attached to an event A should be equal to the total probability of all events whose union is equal to A. As we discuss in Section V.C, however, experimental evidence shows that subjects often underestimate the probability of residual hypotheses such as "other" relative to their unpacked version. To see under what conditions local thinking can account for this fallacy, compare the assessment of hypothesis h1 with the assessment of hypothesis "h1,1 or h1,2," where h1,1 ∪ h1,2 = h1 (and obviously h1,1 ∩ h1,2 = ∅), by an agent with b = 1. It is easy to extend the result to the case where b > 1. Formally, we compare Pr^L(h1) when h1 is tested against its complement ¬h1 with Pr^L(h1,1) + Pr^L(h1,2) obtained when the hypothesis "h1,1 or h1,2" is tested against the same complement ¬h1. The agent then attributes a higher probability to the unpacked version of the hypothesis, thus violating the disjunction rule, provided that Pr^L(h1,1) + Pr^L(h1,2) > Pr^L(h1).

Define s_1^1, s_{1,1}^1, s_{1,2}^1, and s_0^1 to be the representative scenarios for hypotheses h1, h1,1, h1,2, and ¬h1, respectively. Equation (9) then implies that h1 is underestimated when

$$\frac{\Pr\left(s_{1,1}^1 \cap h_{1,1}\right) + \Pr\left(s_{1,2}^1 \cap h_{1,2}\right)}{\Pr\left(s_{1,1}^1 \cap h_{1,1}\right) + \Pr\left(s_{1,2}^1 \cap h_{1,2}\right) + \Pr\left(s_0^1 \cap \neg h_1\right)} > \frac{\Pr\left(s_1^1 \cap h_1\right)}{\Pr\left(s_1^1 \cap h_1\right) + \Pr\left(s_0^1 \cap \neg h_1\right)}. \tag{15}$$

Expression (15) immediately boils down to

$$\Pr\left(s_{1,1}^1 \cap h_{1,1}\right) + \Pr\left(s_{1,2}^1 \cap h_{1,2}\right) > \Pr\left(s_1^1 \cap h_1\right), \tag{15′}$$

meaning that the probability of the representation s_1^1 ∩ h1 of h1 is smaller than the sum of the probabilities of the representations

10. Proposition 3 implies that, if a hypothesis h1 is not represented with the most likely scenario, one can induce the conjunction fallacy by testing h1 against the conjoined hypothesis h_1^* = s_1^* ∩ h1, where s_1^* is the most likely scenario for h1 and h_1^* ⊂ h1 is the element obtained by fitting such most likely scenario in hypothesis h1 itself. By construction, in this case Pr^L(h_1^*) ≥ Pr^L(h1), so that the conjunction rule is violated.



s_{1,1}^1 ∩ h1,1 and s_{1,2}^1 ∩ h1,2 of h1,1 and h1,2, respectively. Online Appendix 2 proves that this occurs if the following condition holds:

PROPOSITION 4. Suppose that b = 1. In one test, hypothesis h1 is tested against a set of alternatives. In another test, the hypothesis "h1,1 or h1,2" is tested against the same set of alternatives as h1. Then, if s_1^1 is a feasible scenario for h1,1, h1,2, or both, it follows that Pr^L(h1,1) + Pr^L(h1,2) > Pr^L(h1).

Local thinking leads to underestimation of implicit disjunctions. Intuitively, unpacking a hypothesis h1 into its constituent events reminds the local thinker of elements of h1 that he would otherwise fail to integrate into his representation. The sufficient condition for this to occur (that s_1^1 must be a feasible scenario in the explicit disjunction) is very weak. For example, it is always fulfilled when the representation of the implicit disjunction s_1^1 ∩ h1 is contained in a subresidual category of the explicit disjunction, such as "other."
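Before turning to applications, the necessary condition in Proposition 3 can be illustrated on a toy two-dimensional space. In the sketch below all probabilities are made up for illustration; the scenario that is most representative of the constituent hypothesis is not its most likely one, so condition (13) holds and the conjunction fallacy obtains in a direct test:

```python
# Toy illustration of Proposition 3 (b = 1, no data).  The joint
# distribution pi over X = Y x Z is illustrative, not from the paper.
Z = ("c", "e")
pi = {("a", "c"): 0.30, ("a", "e"): 0.20,
      ("b", "c"): 0.45, ("b", "e"): 0.05}

def pr(event):
    # Probability of a set of elements of X.
    return sum(p for x, p in pi.items() if x in event)

# A2: the representative scenario for h1 = {y = "a"} maximizes Pr(h1 | scenario).
z_star = max(Z, key=lambda z: pr({("a", z)}) / pr({("a", z), ("b", z)}))
assert z_star == "e"            # Pr(a | e) = 0.8 > Pr(a | c) = 0.4

rep_h1 = pr({("a", z_star)})    # represented constituent event h1: mass 0.20
rep_conj = pr({("a", "c")})     # conjunction "y = a and z = c" is fully specified: 0.30

# z* = "e" is NOT the most likely scenario for h1 (Pr(c | a) = 0.6 > Pr(e | a) = 0.4),
# so condition (13) holds: the represented conjunction outweighs the constituent.
assert rep_conj > rep_h1
```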

V. LOCAL THINKING AND HEURISTICS AND BIASES, WITH SPECIAL REFERENCE TO LINDA

We now show how our model can rationalize some of the biases in probabilistic assessments. We cannot rationalize all of the experimental evidence, but rather we show that our model provides a unified account of several findings. At the end of the section, we discuss the experimental evidence that our model cannot directly explain. We perform our analysis in a flexible setup based on KT's (1983) famous Linda experiment. Subjects are given a description of a young woman, called Linda, who is a stereotypical leftist and in particular was a college activist. They are then asked to check off in order of likelihood the various possibilities of what Linda is today. Subjects estimate that Linda is more likely to be "a bank teller and a feminist" than merely "a bank teller," exhibiting the conjunction fallacy. We take advantage of this setup to show how a local thinker displays a variety of biases, including base-rate neglect and the conjunction fallacy.11 In Section V.D, we examine the disjunction fallacy experiments.

11. In perhaps the most famous base-rate neglect experiment, KT (1974) gave subjects a personality description of a stereotypical engineer, and told them that he came from a group of 100 engineers and lawyers, and the share of engineers in the


TABLE III
PROBABILITY DISTRIBUTION OF ALL POSSIBLE TYPES

A. College activists (A)

         F               NF
BT    (2/3)(τ/4)      (1/3)(τ/4)
SW    (9/10)(2σ/3)    (1/10)(2σ/3)

B. College nonactivists (NA)

         F               NF
BT    (1/5)(3τ/4)     (4/5)(3τ/4)
SW    (1/2)(σ/3)      (1/2)(σ/3)

Suppose that individuals can have one of two possible backgrounds, college activist (A) or nonactivist (NA), be in one of two occupations, bank teller (BT) or social worker (SW), and hold one of two current beliefs, feminist (F) or nonfeminist (NF). The probability distribution of all possible types is described in Table III, Panels A and B. Table III, Panel A, reports the frequency of activist (A) types, and Table III, Panel B, the frequency of nonactivist (NA) types. (This full distribution of types is useful to study the effects of providing data d = A.) τ and σ are the base probabilities of a bank teller and a social worker in the population, respectively: Pr(BT) = τ, Pr(SW) = σ. Table III builds in two main features. First, the majority of college activists are feminists, whereas the majority of nonactivists are nonfeminists, irrespective of their occupations (Pr(X, F, A) ≥ Pr(X, NF, A) and Pr(X, F, NA) ≤ Pr(X, NF, NA) for X = BT, SW). Second, social workers are relatively more feminist than bank tellers, irrespective of their college background (e.g., among activists, nine out of ten social workers are feminists, whereas only two out of three bank tellers are; among nonactivists, half of social workers are feminists, whereas only one out of five bank tellers is).

group. In assessing the odds that this person was an engineer or a lawyer, subjects mainly focused on the personality description, barely taking the base rates of the engineers in the group into account. The parallel between this experiment and the original Linda experiment is sufficiently clear to allow us to analyze base-rate neglect and the conjunction fallacy in the same setting.



is a bank teller (BT), a social worker (SW), or a feminist bank teller (BT, F). What comes to his mind? Because social workers are relatively more feminist than bank tellers, the agent represents a bank teller with a "nonfeminist" scenario and a social worker with a "feminist" scenario. Indeed, Pr(BT | A, NF) = (τ/12)/[(τ/12) + (2σ/30)] > Pr(BT | A, F) = (2τ/12)/[(2τ/12) + (9σ/15)], and Pr(SW | A, NF) < Pr(SW | A, F). Thus, after the data that Linda was an activist are provided, "bank teller" is represented by (BT, A, NF), and "social worker" by (SW, A, F). The hypothesis of "bank teller and feminist" is correctly represented by (BT, A, F) because it leaves no gaps to be filled. Using equation (11), we can then compute the local thinker's odds ratios for various hypotheses, which provide a parsimonious way to study judgment biases.

V.A. Neglect of Base Rates

Consider the odds ratio between the local thinker's assessment of "bank teller" and "social worker." In the represented state space, this is equal to

$$\frac{\Pr^{L}(\mathrm{BT} \mid A)}{\Pr^{L}(\mathrm{SW} \mid A)} = \frac{\Pr(\mathrm{BT}, A, \mathrm{NF})}{\Pr(\mathrm{SW}, A, F)} = \left[\frac{1/3}{9/10}\right] \frac{3}{8} \frac{\tau}{\sigma}. \tag{16}$$

As in (11), the rightmost term in (16) is the Bayesian odds ratio, whereas the bracketed term is the ratio of the two representations' likelihoods. The bracketed term is smaller than one, implying not only that the local thinker underestimates the odds of Linda being a bank teller, but also that he neglects some of the information contained in the population odds of a bank teller, τ/σ. The local thinker underweights the base rate by a factor of (1/3)/(9/10) = 10/27 relative to a Bayesian. Neglect of base rates arises here because the local thinker represents the bank teller as a nonfeminist, a low-probability scenario given the data d = A. With this representation, he forgets that many formerly activist bank tellers are also feminists, which is base-rate neglect. The use of an unlikely scenario for "bank teller" renders biases more severe, but it is not necessary for base-rate neglect, which is rather a natural consequence of the local thinker's use of limited, stereotypical information and can also arise when both hypotheses are represented by the most likely scenario.
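These numbers can be checked mechanically from Table III, Panel A. The sketch below uses exact fractions; the values of τ and σ are illustrative, and the 10/27 underweighting factor does not depend on them:

```python
from fractions import Fraction as F

# Check of (16) and the 10/27 underweighting factor, from Table III, Panel A.
tau, sigma = F(1, 2), F(1, 2)            # illustrative base rates

p_bt_a_f  = F(2, 3) * tau / 4            # Pr(BT, A, F)
p_bt_a_nf = F(1, 3) * tau / 4            # Pr(BT, A, NF)
p_sw_a_f  = F(9, 10) * 2 * sigma / 3     # Pr(SW, A, F)
p_sw_a_nf = F(1, 10) * 2 * sigma / 3     # Pr(SW, A, NF)

# "Nonfeminist" is the representative scenario for "bank teller" given d = A:
assert p_bt_a_nf / (p_bt_a_nf + p_sw_a_nf) > p_bt_a_f / (p_bt_a_f + p_sw_a_f)

# (16): local odds of BT vs SW = [(1/3)/(9/10)] x Bayesian odds = (10/27) x Bayesian odds.
local_odds = p_bt_a_nf / p_sw_a_f
bayes_odds = (tau / 4) / (2 * sigma / 3)
assert local_odds == F(10, 27) * bayes_odds
```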



V.B. Conjunction Fallacy

Consider now the local thinker's odds ratio between "bank teller" and "bank teller and feminist." Using parameter values in Table III, Panel A, this is equal to

$$\frac{\Pr^{L}(\mathrm{BT} \mid A)}{\Pr^{L}(\mathrm{BT}, F \mid A)} = \frac{\Pr(\mathrm{BT}, A, \mathrm{NF})}{\Pr(\mathrm{BT}, A, F)} = \left[\frac{1/3}{1}\right] \frac{3}{2} = \frac{1}{2} < 1. \tag{17}$$

The conjunction rule is violated because the local thinker represents the constituent event "bank teller" with a scenario, "nonfeminist," which is unlikely given that Linda is a former activist. Why does the agent fail to realize that among former activists many bank tellers are feminists? Our answer is that the term "bank teller" brings to mind a representation that excludes feminist bank tellers, because "feminist" is a characteristic disproportionately associated with social workers, which does not match the image of a stereotypical bank teller. One alternative explanation of the conjunction fallacy discussed in KT (1983) holds that the subjects substitute the target assessment of Pr(d | h) for that of Pr(h | d).12 In our Linda example, this error can indeed yield the conjunction fallacy because Pr(A | BT) = 1/4 < Pr(A | F, BT) = 10/19. Intuitively, being feminist (on top of being a bank teller) can increase the chance of being Linda. KT (1983) addressed this possibility in some experiments. In one of them, subjects were told that the tennis player Bjorn Borg had reached the Wimbledon final and then asked to assess whether it was more likely that in the final Borg would lose the first set or whether he would lose the first set but win the match. Most subjects violated the conjunction rule by stating that the second outcome was more likely than the first. As we show in Online Appendix 3 using a model calibrated with actual data, our approach can explain this evidence, but a mechanical assessment of Pr(d | h) cannot. The reason, as KT point out, is that Pr(Borg has reached the final | Borg's score in the final) is always equal to one, regardless of the final score. Most important, the conjunction fallacy explanation based on the substitution of Pr(d | h) for Pr(h | d) relies on the provision

12. In a personal communication, Xavier Gabaix proposed a "local prime" model complementary to our local thinking model. Such a model exploits the above intuition about the conjunction fallacy. Specifically, in the local prime model, an agent assessing h1, . . . , hn evaluates Pr^L(hi | d) = Pr(d | hi)/[Pr(h1 | d) + · · · + Pr(hn | d)].



of data d. This story thus cannot explain the conjunction rule violations that occur in the absence of data provision. To see how our model can account for those, consider another experiment from KT (1983). Subjects are asked to compare the likelihood of "A massive flood somewhere in North America in which more than 1,000 people drown" with that of "An earthquake in California causing a flood in which more than 1,000 people drown." Most subjects find the latter event, which is a special case of the former, to be nonetheless more likely. We discuss this example formally in Online Appendix 3, but the intuition is straightforward. When earthquakes are not mentioned, massive floods are represented by an unlikely scenario of disastrous storms, as storms are a stereotypical cause of floods. In contrast, when earthquakes in California are explicitly mentioned, the local thinker realizes that these can cause much more disastrous floods, changes his representation, and attaches a higher probability to the outcome because earthquakes in California are quite common. This example vividly illustrates the key point that it is the hypothesis itself, rather than the data, that frames both the representation and the assessment.

The general idea behind these types of conjunction fallacy is that either the data (Linda is a former activist) or the question itself (floods in North America) brings to mind a representative but unlikely scenario. This general principle can help explain other conjunction rule violations. For example, Kahneman and Frederick (2005) report that subjects estimate the annual number of murders in the state of Michigan to be smaller than that in the city of Detroit, which is in Michigan. Our model suggests that this might be explained by the fact that the stereotypical location in Michigan is rural and nonviolent, so subjects forget that the more violent city of Detroit is in the state of Michigan as well.

V.C. The Role of Data and Insensitivity to Predictability

Although base-rate neglect and the conjunction fallacy do not rely on data provision, the previous results illustrate the effects of data in our model. Suppose that a local thinker assesses the probabilities of bank teller, social worker, and feminist bank teller before being given any data. From Table III, Panel B, "social worker" is still represented by (SW, A, F) and "bank teller and feminist" by (BT, A, F). Crucially, however, "bank teller" is now represented by (BT, NA, NF). This is the only representation that changes after



d = A is provided. Before data are provided, then, we have

$$\frac{\Pr^{L}(\mathrm{BT})}{\Pr^{L}(\mathrm{SW})} = \frac{\Pr(\mathrm{BT}, \mathrm{NA}, \mathrm{NF})}{\Pr(\mathrm{SW}, A, F)} = \frac{(2/3)(6\tau/8)}{(9/10)(2\sigma/3)} = \frac{5\tau}{6\sigma}, \tag{18}$$

$$\frac{\Pr^{L}(\mathrm{BT})}{\Pr^{L}(\mathrm{BT}, F)} = \frac{\Pr(\mathrm{BT}, \mathrm{NA}, \mathrm{NF})}{\Pr(\mathrm{BT}, A, F)} = \left[\frac{3/5}{10/19}\right] \frac{60}{19} = \frac{18}{5} > 1. \tag{19}$$
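The before-and-after comparison implied by the conjunction odds in (19) and in (17) can be traced directly through Table III. The sketch below hard-codes the representations stated in the text; τ and σ are illustrative, and neither ratio depends on them:

```python
from fractions import Fraction as F

# How providing d = A "destroys" the bank-teller stereotype (Table III, b = 1).
tau, sigma = F(1, 4), F(1, 4)   # illustrative base rates

p = {  # (occupation, background, belief) -> probability, from Table III
    ("BT", "A", "F"): F(2, 3) * tau / 4,        ("BT", "A", "NF"): F(1, 3) * tau / 4,
    ("SW", "A", "F"): F(9, 10) * 2 * sigma / 3, ("SW", "A", "NF"): F(1, 10) * 2 * sigma / 3,
    ("BT", "NA", "F"): F(1, 5) * 3 * tau / 4,   ("BT", "NA", "NF"): F(4, 5) * 3 * tau / 4,
    ("SW", "NA", "F"): F(1, 2) * sigma / 3,     ("SW", "NA", "NF"): F(1, 2) * sigma / 3,
}

# Without data, "bank teller" is represented by (BT, NA, NF) and the conjunction
# "bank teller and feminist" by (BT, A, F); the odds are those of (19).
odds_no_data = p[("BT", "NA", "NF")] / p[("BT", "A", "F")]
assert odds_no_data == F(18, 5)      # > 1: no conjunction fallacy

# After d = A, "bank teller" is represented by (BT, A, NF); odds as in (17).
odds_with_data = p[("BT", "A", "NF")] / p[("BT", "A", "F")]
assert odds_with_data == F(1, 2)     # < 1: conjunction fallacy
```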

Biases now are either small or outright absent. Expression (18) gives an almost correct unconditional probability assessment for the population odds ratio of τ/σ. In expression (19), not only does the conjunction rule hold, but also the odds of "bank teller" are overestimated. So what happens when data are provided? As in Proposition 2, this is a case where data provision "destroys" the stereotype of only one of the hypotheses, "bank teller." Before Linda's college career is described, a bank teller is "nonactivist, nonfeminist." This stereotype is very likely. However, after d = A is provided, the representation of "bank teller" becomes an unlikely one, because even for bank tellers it is extremely unlikely to have become "nonfeminist" after having been "activist." The probability of Linda being a bank teller is thus underestimated, generating both severe base-rate neglect and the conjunction fallacy. This analysis illustrates the role of data not only in the Linda setup but also in the electoral campaign example. In both cases, the agent is given data (d = A or d = drinks milk with a hot dog) that are very informative about an attribute defining stereotypes (political orientation or familiarity). By changing the likelihood of the stereotype, such data induce drastic updating, even when the data themselves are scarcely informative about the target assessment (occupation or qualification). This overreaction to scarcely informative data provides a rationalization for the "insensitivity to predictability" displayed by experimental subjects. We formally show this point in Online Appendix 3 based on a famous KT experiment on practice talks.

In sum, a local thinker's use of stereotypes provides a unified explanation for several KT biases. To account for other biases, we need to move beyond the logic of representativeness as defined here.
For instance, our model cannot directly reproduce the Casscells, Schoenberger, and Graboys (1978) evidence on physicians' interpretations of clinical tests or the blue versus green cab experiment (KT 1982). KT themselves (1982, p. 154) explain why these biases cannot be directly attributed to representativeness.



We do not exclude the possibility that these biases are a product of local thinking, but progress in understanding different recall processes is needed to establish the connection.

V.D. Disjunction and Car Mechanics Revisited

Fischhoff, Slovic, and Lichtenstein (1978) document the violation of the disjunction rule experimentally. They asked car mechanics, as well as lay people, to estimate the probabilities of different causes of a car's failure to start. They document that on average the probability assigned to the residual hypothesis—"The cause of failure is something other than the battery, fuel system, or the engine"—went up from .22 to .44 when that hypothesis was broken up into more specific causes (e.g., the starting system, the ignition system). Respondents, including experienced car mechanics, discounted hypotheses that were not explicitly mentioned. The underestimation of implicit disjunctions has been documented in many other experiments and is the key assumption behind Tversky and Koehler's (1994) support theory.

Proposition 4 allows us to consider the following model of the car mechanic experiment. There is only one dimension, the cause of a car's failure to start (i.e., K = 1), so that X ≡ {battery, fuel, ignition}, where fuel stands for "fuel system" and ignition stands for "ignition system." Assume without loss of generality that Pr(battery) > Pr(fuel) > Pr(ignition) > 0. This case meets the conditions of Proposition 4 because now no dimension is left free, so all hypotheses share the same scenario s = X. The agent is initially asked to assess the likelihood that the car's failure to start is not due to battery troubles. That is, he is asked to assess the hypotheses h1 = {fuel, ignition}, h2 = {battery}. Because K = 1, there are no scenarios to fit. Yet, because the implicit disjunction h1 = {fuel, ignition} does not pin down an exact value for the car's failure to start, by criterion (8′) in the Appendix the agent represents it by selecting its most likely element, which is fuel. When hypotheses share no scenarios, the local thinker picks the most likely element within each hypothesis. He then attaches the probability (20)

$$\Pr^{L}(h_1) = \frac{\Pr(\mathrm{fuel})}{\Pr(\mathrm{fuel}) + \Pr(\mathrm{battery})}$$

to the cause of the car’s failure to start being other than battery when this hypothesis is formulated as an implicit disjunction.



Now suppose that the implicit disjunction h1 is broken up into its constituent elements, h1,1 = fuel and h1,2 = ignition (e.g., the individual is asked to assess the likelihoods that the car's failure to start is due to ignition troubles or to fuel system troubles separately). Clearly, the local thinker represents h1,1 by fuel and h1,2 by ignition. As before, he represents the other hypothesis h2 by battery. The local thinker now attaches greater probability to the car's failure to start being other than the battery because

$$\Pr^{L}(\mathrm{ignition}) + \Pr^{L}(\mathrm{fuel}) = \frac{\Pr(\mathrm{ignition}) + \Pr(\mathrm{fuel})}{\Pr(\mathrm{ignition}) + \Pr(\mathrm{fuel}) + \Pr(\mathrm{battery})} > \Pr^{L}(h_1) = \frac{\Pr(\mathrm{fuel})}{\Pr(\mathrm{fuel}) + \Pr(\mathrm{battery})}. \tag{21}$$
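With illustrative cause probabilities (the experiment reports only the .22 and .44 averages, so the numbers below are made up), the comparison between (20) and (21) is immediate:

```python
# Illustrative cause probabilities (not the experiment's data).
p = {"battery": 0.5, "fuel": 0.3, "ignition": 0.2}
assert p["battery"] > p["fuel"] > p["ignition"] > 0

# (20): the implicit disjunction "not the battery" is represented by its
# most likely element, fuel.
pL_implicit = p["fuel"] / (p["fuel"] + p["battery"])         # 0.375

# (21): unpacking into "fuel" and "ignition" brings both elements to mind.
pL_explicit = (p["fuel"] + p["ignition"]) / sum(p.values())  # 0.5

assert pL_explicit > pL_implicit    # the disjunction fallacy
```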

In other words, we can account for the observed disjunction fallacy. The logic is the same as that of Proposition 4: the representation of the explicit disjunction adds to the representation of the implicit disjunction (x = fuel) an additional element (x = ignition), which boosts the assessed probability of the explicit disjunction.

VI. AN APPLICATION TO DEMAND FOR INSURANCE

Buying insurance is supposed to be one of the most compelling manifestations of economic rationality, in which risk-averse individuals hedge their risks. Yet both experimental and field evidence, summarized by Cutler and Zeckhauser (2004) and Kunreuther and Pauly (2005), reveal some striking anomalies in individual demand for insurance. Most famously, individuals vastly overpay for insurance against narrow low-probability risks, such as those of airplanes crashing or appliances breaking. They do so especially after the risk is brought to their attention, but not when risks remain unmentioned. In a similar vein, people prefer insurance policies with low deductibles, even when the incremental cost of insuring small losses is very high (Johnson et al. 1993; Sydnor 2009). Meanwhile, Johnson et al. (1993) present experimental evidence that individuals are willing to pay more for insurance policies that specify in detail the events being insured against than they do for policies insuring "all causes." Our model, particularly the analysis of the disjunction fallacy, may shed light on this evidence. Suppose that an agent with a concave utility function u(.) faces a random wealth stream due to


probabilistic realizations of various accidents. For simplicity, we assume that at most one accident occurs. There are three contingencies, s = 0, 1, 2, each occurring with an ex ante probability πs. Contingency s = 0 is the status quo or no-loss contingency. In this state, the agent’s wealth is at its baseline level w0. Contingencies 1 and 2 correspond to the realizations of distinct accidents, which entail wealth levels ws < w0 for s = 1, 2. A contingency s = 1, 2 then represents the income loss caused by a car accident, a specific reason for hospitalization, or a plane crash from a terrorist attack. We assume that π0 > max(π1, π2), so that the status quo is the most likely event. We first show that a local thinker in this framework exhibits behavior consistent with Johnson et al.’s (1993) experiments. The authors find, for example, that, in plane crash insurance, subjects are willing to pay more in total for insurance against a crash caused by “any act of terrorism” plus insurance against a crash caused by “any non–terrorism related mechanical failure” than for insurance against a crash for “any reason” (p. 39). Likewise, subjects are willing to pay more in total for insurance policies paying for hospitalization costs in the events of “any disease” and “any accident” than for a policy that pays those costs in the event of hospitalization for “any reason” (p. 40). As a starting point, note that the maximum amount P that a rational thinker is willing to pay to insure his status quo income against “any risks” is given by

(22)  u[w0 − P] = E[u(w)].

The rational thinker would pay the same amount P for insurance against any risk as for insurance against either s = 1 or s = 2 occurring, because he keeps all the outcomes in mind. Suppose now that a local thinker of order one (b = 1) considers the maximum price he is willing to pay to insure against any risk (i.e., against the event s ≠ 0). For a local thinker, only one (representative) risk comes to mind. Suppose without loss of generality that π1 > π2. Then, just as in the car mechanic example, only the more likely event s = 1 comes to the local thinker’s mind. As a consequence, he is willing to pay up to P^L for coverage against any risk, defined by

(23)  u[w0 − P^L] = E[u(w) | s = 0, 1].

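A hedged numerical sketch makes the reservation-price comparisons concrete. The parameters below (log utility, identical losses w1 = w2) are invented for illustration; the sketch computes the rational any-risk price P from (22), the local thinker’s price P^L from (23), and the rational price P1 for narrow coverage against s = 1 alone, characterized later in this section in (25), by bisection:

```python
import math

# Hypothetical parameters (not from the paper): log utility, baseline
# wealth w0, identical losses in the two accident states.
w0, w1, w2 = 10.0, 5.0, 5.0
p0, p1, p2 = 0.90, 0.06, 0.04       # pi_0 > max(pi_1, pi_2), pi_1 > pi_2
u = math.log

Eu = p0*u(w0) + p1*u(w1) + p2*u(w2)  # E[u(w)]

def bisect_root(f, lo, hi, tol=1e-12):
    # Root of a function f that is positive at lo and decreasing in P.
    while hi - lo > tol:
        mid = 0.5*(lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return 0.5*(lo + hi)

# (22): rational price P for coverage against any risk.
P = bisect_root(lambda q: u(w0 - q) - Eu, 0.0, w0 - 1e-9)

# (23): local thinker's price P_L; only s = 1 comes to mind.
Eu01 = (p0*u(w0) + p1*u(w1)) / (p0 + p1)
P_L = bisect_root(lambda q: u(w0 - q) - Eu01, 0.0, w0 - 1e-9)

# (25): rational price P_1 for narrow coverage against s = 1 only.
P_1 = bisect_root(lambda q: (p0 + p1)*u(w0 - q) + p2*u(w2 - q) - Eu,
                  0.0, w2 - 1e-9)

print(P_L < P)    # underinsurance of the broad, vaguely defined risk
print(P_L > P_1)  # overinsurance of the narrow, well-defined risk
```

Under these assumed parameters the ordering P1 < P^L < P reproduces both anomalies at once: the local thinker underpays for “any risk” coverage and overpays for the narrow policy.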

The local thinker’s maximum willingness to pay directly derives from his certainty-equivalent wealth conditional on the state belonging to the event “s = 0 or 1.” If, in contrast, the local thinker is explicitly asked to state his maximum willingness to pay for insuring against either s = 1 or s = 2 occurring, then both events come to mind and his maximum price is identical to the rational thinker’s price of P. Putting these observations together, it is easy to show that the local thinker is willing to pay more for the unpacked coverage whenever

(24)  u(w2) ≤ E[u(w) | s = 0, 1].

That is, condition (24) and thus the Johnson et al. (1993) experimental findings would be confirmed when, as in the experiments, the two events entailed identical losses so that w1 = w2 (plane crash due to one of two possible causes). In this case, insurance against s = 2 is valuable, and therefore the local thinker is willing to pay less for coverage against “any accident” than when all the accidents are listed because, in the former case, he does not recall s = 2. This partial representation of accidents leads the agent to underestimate his demand for insurance relative to the case in which all accidents are spelled out. The same logic illuminates overinsurance against specific risks, such as a broken appliance or small property damage, as documented by Cutler and Zeckhauser (2004) and Sydnor (2009). A local thinker would in fact pay more for insurance against a specific risk than a rational thinker. Consider again insurance against the wealth loss in state s = 1. A rational thinker’s reservation price P1 to insure against s = 1 is given by

(25)  (π0 + π1) u[w0 − P1] + π2 u[w2 − P1] = E[u(w)].

Consider now a local thinker. When prompted to insure against s = 1, the local thinker perfectly represents this state; at the same time, he represents the complementary event, in which s = 1 does not occur, by the status quo s = 0, because π0 > π2. A useful (but not important) consequence in this example is that a local thinker’s reservation price turns out to be given by the same condition (23) as his price for insurance against any risk. It follows immediately that P^L > P1; the local thinker is willing to pay more for insurance against a specific risk than the rational thinker. Intuitively, with narrow accidents, the no-accident event becomes the residual


category. The disjunction fallacy implies that the local thinker underestimates the total probability of the residual category, which covers states in which such narrow insurance is not valuable. As a consequence, the local thinker pays more for narrow insurance than a rational agent would. This logic also illustrates the observation of Cutler and Zeckhauser (2004) and Kunreuther and Pauly (2005) that individuals do not insure low-probability risks, such as terrorism or earthquakes, under ordinary circumstances but buy excessive amounts of such insurance immediately after an accident (or some other reminder) that brings the risks to their attention. In our model, low-probability or otherwise nonsalient events are the least likely to be insured against because they are not representative, and hence do not come to mind. Unless explicitly prompted, a local thinker considers either the status quo or high-probability accidents that come to mind. Once an unlikely event occurs, however, or is explicitly brought to the local thinker’s attention, it becomes part of the representation of risky outcomes and is overinsured. Local thinking can thus provide a unified explanation of two anomalous aspects of demand for insurance: overinsurance against narrow and well-defined risks and underinsurance against broad or vaguely defined risks. The model might also help explain other insurance anomalies, such as the demand for life insurance rather than for annuities by the elderly parents of well-off children (Cutler and Zeckhauser 2004). We leave a discussion of these issues to future work.

VII. CONCLUSIONS

We have presented a simple model of intuitive judgment in which the agent receives some data and combines them with information retrieved from memory to evaluate a hypothesis. The central assumption of the model is that, in the first instance, information retrieval from memory is both limited and selective. Some, but not all, of the missing scenarios come to mind.
Moreover, what primes the selective retrieval of scenarios from memory is the hypothesis itself, with scenarios most predictive of that hypothesis—the representative scenarios—being retrieved first. In many situations, such intuitive judgment works well and does not lead to large biases in probability assessments. But in situations where the representativeness and likelihood of scenarios


diverge, intuitive judgment becomes faulty. We showed that this simple model accounts for a significant number of experimental results, most of which are related to the representativeness heuristic. In particular, the model can explain the conjunction and disjunction fallacies exhibited by experimental subjects. The model also sheds light on some puzzling evidence concerning demand for insurance. To explain the evidence, we took a narrow view of how recall of various scenarios takes place. In reality, many other factors affect recall. Both availability and anchoring heuristics described by Kahneman and Tversky (1974) bear on how scenarios come to mind but through mechanisms other than those we elaborated. At a more general level, our model relates to the distinction, emphasized by Kahneman (2003), between System 1 (quick and intuitive) and System 2 (reasoned and deliberate) thinking. Local thinking can be thought of as a formal model of System 1. However, from our perspective, intuition and reasoning are not so radically different. Rather, they differ in what is retrieved from memory to make an evaluation. In the case of intuition, the retrieval is not only quick but also partial and selective. In the case of reasoning of the sort studied by economists, retrieval is complete. Indeed, in economic models, we typically think of people receiving limited information from the outside world, but then combining it with everything they know to make evaluations and decisions. The point of our model is that, at least in making quick decisions, people do not bring everything they know to bear on their thinking. Only some information is automatically recalled from passive memory, and—crucially to understanding the world—the things that are recalled might not even be the most useful. Heuristics, then, are not limited decisions. They are decisions like all others but based on limited and selected inputs from memory. 
System 1 and System 2 are examples of the same mode of thought; they differ in what comes to mind.

APPENDIX: LOCAL THINKING WITH GENERAL HYPOTHESES AND DATA

Hypotheses and data may constrain some dimensions of the state space X without restricting them to particular values, as we assumed in (7). Generally,

(7′)  h ∩ d ≡ {x ∈ X | xi ∈ Hi for all i ∈ I},


where I ⊆ {1, . . . , K} is the set of dimensions constrained by h ∩ d, and Hi ⊂ Xi are the sets they specify for each i ∈ I. Dimensions i ∉ I are left free. The class of hypotheses in (7) is a special case of that in (7′) when the sets Hi are singletons. To generalize the definition of representation of a hypothesis, we assume that agents follow a three-stage procedure. First, each hypothesis h ∩ d is decomposed into all of its constituent “elementary hypotheses,” defined as those that fix one exact value for each dimension in I. Second, for each elementary hypothesis, agents consider all possible scenarios, according to Definition 1. Finally, agents order the set of elementary hypotheses together with the respective feasible scenarios according to their conditional probabilities.13 An agent with b = 1 would simply solve

(8′)  max_{x_I, s} Pr[x_I | s ∩ d],

where x_I ≡ {x ∈ X : x_i = x̂_i}, with x̂_i ∈ Hi for all i ∈ I. Thus, conditional on fixing x_I, scenario s is the exact equivalent of the scenario in Definition 1. A solution to problem (8′) always exists due to the finiteness of the problem. This procedure generates a representation s_r^1 ∩ x_{I,r}^1 ∩ d for hypothesis h_r, which is the general counterpart of the representation s_r^1 ∩ h_r ∩ d used in the class of problems in (7). Accordingly, (8′) yields a ranking of all possible representations s_r^k ∩ x_{I,r}^k of h_r, which in turn ranks all elements in h_r ∩ d in terms of their order of recall. Formula (9) can now be directly applied to calculate the local thinker’s probabilistic assessment. In the case of exhaustive hypotheses in the general class (7′), that assessment can be written as

(9′)  Pr^L(h_t | d) = [ Σ_{k=1}^{b} Pr(s_t^k ∩ x_{I,t}^k | h_t ∩ d) Pr(h_t ∩ d) ] / [ Σ_{r=1}^{N} Σ_{k=1}^{b} Pr(s_r^k ∩ x_{I,r}^k | h_r ∩ d) Pr(h_r ∩ d) ].

Expression (9′) is an immediate generalization of (9).

13. This assumption captures the idea that dimensions explicitly mentioned in the hypothesis are selected to maximize the probability of the latter. We could assume that filling gaps in hypotheses taking form (7′) is equivalent to selecting scenarios, in the sense that the agent maximizes (8) subject to scenarios s ∈ h ∩ d. Our main results would still hold in this case, but all scenarios s ∈ h_r ∩ d would be equally representative, as expression (8) would always be equal to 1. Assumption (8′) captures the intuitive idea that the agent also orders the representativeness of elements belonging to ranges explicitly mentioned in the hypothesis itself.

Except for Proposition 1, which is proved only for problems in (7),


all the results in the paper generalize to hypotheses of type (7′). The only caveat is that in this case the element s_r^k ∩ h_r ∩ d should be read as the intersection of the set of specific values chosen by the agent for representing h_r with the data and the chosen scenario, that is, as s_r^k ∩ x_{I,r}^k ∩ d, which is the kth ranked term according to objective (8′).

UNIVERSITAT POMPEU FABRA, CREI, UNIVERSITAT POMPEU FABRA, CEPR
HARVARD UNIVERSITY

REFERENCES

Barberis, Nicholas, Andrei Shleifer, and Robert Vishny, “A Model of Investor Sentiment,” Journal of Financial Economics, 49 (1998), 307–343.
Cascells, Ward, Arno Schoenberger, and Thomas Graboys, “Interpretation by Physicians of Clinical Laboratory Results,” New England Journal of Medicine, 299 (1978), 999–1001.
Cutler, David, and Richard Zeckhauser, “Extending the Theory to Meet the Practice of Insurance,” in Brookings–Wharton Papers on Financial Services, Robert Litan and Richard Herring, eds. (Washington, DC: Brookings Institution, 2004).
Fischhoff, Baruch, Paul Slovic, and Sarah Lichtenstein, “Fault Trees: Sensitivity of Assessed Failure Probabilities to Problem Representation,” Journal of Experimental Psychology: Human Perception and Performance, 4 (1978), 330–344.
Griffin, Dale, and Amos Tversky, “The Weighing of Evidence and the Determinants of Confidence,” Cognitive Psychology, 24 (1992), 411–435.
Jehiel, Philippe, “Analogy-Based Expectation Equilibrium,” Journal of Economic Theory, 123 (2005), 81–104.
Johnson, Eric, John Hershey, Jacqueline Meszaros, and Howard Kunreuther, “Framing, Probability Distortions, and Insurance Decisions,” Journal of Risk and Uncertainty, 7 (1993), 35–51.
Kahneman, Daniel, “Maps of Bounded Rationality: Psychology for Behavioral Economics,” American Economic Review, 93 (2003), 1449–1476.
Kahneman, Daniel, and Shane Frederick, “A Model of Heuristic Judgment,” in The Cambridge Handbook of Thinking and Reasoning, Keith J. Holyoak and Robert G. Morrison, eds. (Cambridge, UK: Cambridge University Press, 2005).
Kahneman, Daniel, and Amos Tversky, “Subjective Probability: A Judgment of Representativeness,” Cognitive Psychology, 3 (1972), 430–454.
——, “Judgment under Uncertainty: Heuristics and Biases,” Science, 185 (1974), 1124–1131.
——, “Evidential Impact of Base-Rates,” in Judgment under Uncertainty: Heuristics and Biases, Daniel Kahneman, Paul Slovic, and Amos Tversky, eds. (Cambridge, UK: Cambridge University Press, 1982).
——, “Extensional vs. Intuitive Reasoning: The Conjunction Fallacy in Probability Judgment,” Psychological Review, 91 (1983), 293–315.
——, “Extensional versus Intuitive Reasoning,” in Heuristics and Biases: The Psychology of Intuitive Judgments, Thomas Gilovich, Dale W. Griffin, and Daniel Kahneman, eds. (Cambridge, UK: Cambridge University Press, 2002).
Kunreuther, Howard, and Mark Pauly, “Insurance Decision-Making and Market Behavior,” Foundations and Trends in Microeconomics, 1 (2005), 63–127.
Mullainathan, Sendhil, “Thinking through Categories,” Mimeo, Massachusetts Institute of Technology, 2000.
——, “A Memory-Based Model of Bounded Rationality,” Quarterly Journal of Economics, 117 (2002), 735–774.
Mullainathan, Sendhil, Joshua Schwartzstein, and Andrei Shleifer, “Coarse Thinking and Persuasion,” Quarterly Journal of Economics, 123 (2008), 577–620.


Osborne, Martin, and Ariel Rubinstein, “Games with Procedurally Rational Players,” American Economic Review, 88 (1998), 834–847.
Popkin, Samuel, The Reasoning Voter (Chicago: University of Chicago Press, 1991).
Rabin, Matthew, “Inference by Believers in the Law of Small Numbers,” Quarterly Journal of Economics, 117 (2002), 775–816.
Rabin, Matthew, and Joel L. Schrag, “First Impressions Matter: A Model of Confirmatory Bias,” Quarterly Journal of Economics, 114 (1999), 37–82.
Schwartzstein, Joshua, “Selective Attention and Learning,” Unpublished Manuscript, Harvard University, 2009.
Stewart, Neil, Nick Chater, and Gordon Brown, “Decision by Sampling,” Cognitive Psychology, 53 (2006), 1–26.
Sydnor, Justin, “(Over)insuring Modest Risks” (available at http://wsomfaculty.case.edu/sydnor/deductibles.pdf, 2009).
Tversky, Amos, and Derek Koehler, “Support Theory: A Nonextensional Representation of Subjective Probability,” Psychological Review, 101 (1994), 547–567.
Wilson, Andrea, “Bounded Memory and Biases in Information Processing,” Unpublished Manuscript, Princeton University, 2002.

GAMING PERFORMANCE FEES BY PORTFOLIO MANAGERS∗

DEAN P. FOSTER AND H. PEYTON YOUNG

We show that it is very difficult to devise performance-based compensation contracts that reward portfolio managers who generate excess returns while screening out managers who cannot generate such returns. Theoretical bounds are derived on the amount of fee manipulation that is possible under various performance contracts. We show that recent proposals to reform compensation practices, such as postponing bonuses and instituting clawback provisions, will not eliminate opportunities to game the system unless accompanied by transparency in managers’ positions and strategies. Indeed, there exists no compensation mechanism that separates skilled from unskilled managers solely on the basis of their returns histories.

I. BACKGROUND

Incentives for financial managers are coming under increased scrutiny because of their tendency to encourage excessive risk taking. In particular, the asymmetric treatment of gains and losses gives managers an incentive to increase leverage and take on other forms of risk without necessarily increasing expected returns for investors. Various changes in the incentive structure have been proposed to deal with this problem, including postponing bonus payments, clawing back bonus payments if later performance is poor, and requiring managers to hold an equity stake in the funds that they manage. These apply both to managers of financial institutions, such as banks, and to managers of private investment pools, such as hedge funds.1 The purpose of this paper is to show that, although these and related reforms may moderate the incentives to game the system, gaming cannot be eliminated. The problem is especially acute when there is no transparency, so that investors cannot see the trading strategies that are producing the returns for which

∗ This paper is a contribution to the Santa Fe Institute’s research program on incentives and the functioning of economic institutions. We are indebted to Gabriel Kreindler, Pete Kyle, Andrew Lo, Andrew Patton, Tarun Ramadorai, Krishna Ramaswamy, Neil Shephard, Robert Stine, and two anonymous referees for helpful suggestions. An earlier version was entitled “The Hedge Fund Game: Incentives, Excess Returns, and Performance Mimics,” Working Paper 07-041, Wharton Financial Institutions Center, University of Pennsylvania, 2007.
1. For a general discussion of managerial incentives in the financial sector, see Bebchuk and Fried (2004) and Bebchuk and Spamann (2009). The literature on incentives and risk taking by portfolio managers will be discussed in greater detail in Section II.
© 2010 by the President and Fellows of Harvard College and the Massachusetts Institute of Technology. The Quarterly Journal of Economics, November 2010



managers are being rewarded. In this setting, where managerial compensation is based solely on historical performance, we establish two main results. First, if a performance-based compensation contract does not levy out-of-pocket penalties for underperformance, then managers with no superior investment skill can capture a sizable amount of the fees that are intended to reward superior managers by mimicking the latter’s performance. The potential magnitude of fee capture has a concise analytical expression. Second, if a compensation contract imposes penalties that are sufficiently harsh to deter risk-neutral mimics, then it will also deter managers of arbitrarily high skill levels. In other words, there exist no performance-based compensation schemes that screen out risk-neutral mimics while rewarding managers who generate excess returns. This contrasts with statistical measures of performance, some of which can discriminate in the long run between “expert” and “nonexpert” managers.2 Our results are proved using a combination of game theory, probability theory, and elementary principles of mechanism design. One of the novel theoretical elements is the concept of performance mimicry. This is analogous to a common biological strategy known as “mimicry” in which one species sends a signal, such as a simulated mating call, in order to lure potential mates, which are then devoured. An example is the firefly Photuris versicolor, whose predaceous females imitate the mating signals of females from other species in order to attract passing males, some of which respond and are promptly eaten (Lloyd 1974). Of course, the imitation may be imperfect and the targets are not fooled all of the time, but they are fooled often enough for the strategy to confer a benefit on the mimic.3 In this paper we apply a variant of this idea to modeling the competition for customers in financial markets. We show that

2. There is a substantial literature on statistical tests that discriminate between true experts and those who merely pretend to be experts. In finance, Goetzmann et al. (2007) propose a class of measures of investment performance that we discuss in greater detail in Section VI. Somewhat more distantly related is the literature on how to distinguish between experts who can predict the probability of future events and imposters who manipulate their predictions in order to look good (Lehrer 2001; Sandroni 2003; Sandroni, Smorodinsky, and Vohra 2003; Olszewski and Sandroni 2008). Another paper that is thematically related is Spiegler (2006), who shows how “quacks” can survive in a market because of the difficulty that customers have in distinguishing them from the real thing.
3. Biologists have documented a wide range of mimicking repertoires, including males mimicking females and harmless species mimicking harmful ones in order to deter predators (Alcock 2005).


portfolio managers with no private information or special investment skills can generate returns over an extended period of time that look just like the returns that would be generated by highly skilled managers; moreover, they can do so without any knowledge of how the skilled managers actually produce such returns.4 Of course, a mimic cannot reproduce a skilled manager’s record forever; instead he or she reproduces it with a certain probability and pays for it by taking on a small probability of a large loss. In practice, however, this probability is sufficiently small so that the mimic can get away with the imitation for many years (in expectation) without being discovered. Our framework allows us to derive precise analytical expressions for (i) the probability with which an unskilled manager can mimic a skilled one over any specified length of time and (ii) the minimum amount the mimic can expect to earn in fees as a function of the compensation structure. The paper is organized as follows. In the next section we review the prior theoretical and empirical literature on performance manipulation. In Section III we introduce the model, which allows us to evaluate a very wide range of compensation contracts and different ways of manipulating them. Section IV shows how much fee capture is possible under any compensation arrangement that does not assess personal financial penalties on the manager. In Section V we explore the implications of this result through a series of concrete examples. Section VI discusses manipulation-proof performance measures and why they do not solve the problem of designing manipulation-proof compensation schemes.
In Section VII we derive an impossibility theorem, which shows that there is essentially no compensation scheme that is able to reward skilled managers and screen out unskilled managers based solely on their “track records.” Section VIII shows how to extend these results to allow for the inflow and outflow of money based on prior performance. Section IX concludes.

4. It should be emphasized that mimicry is not the same as cloning or replication (Kat and Palaro 2005; Hasanhodzic and Lo 2007). These strategies seek to reproduce the statistical properties of a given fund or class of funds, whereas mimicry seeks to fool investors into thinking that returns are being generated by one type of distribution when in fact they are being generated by a different (and less desirable) distribution. Mimicry is also distinct from strategy stealing, which is a game-theoretic concept that involves one player copying the entire strategy of another (Gale 1974). In our setting the performance mimic cannot steal the skilled manager’s investment strategy because if he or she knew the strategy then he or she too would be skilled.


II. RELATED LITERATURE

The fact that standard compensation contracts give managers an incentive to manipulate returns is not a new observation; indeed, there is a substantial prior literature on this issue. In particular, the two-part fee structure that is common in the hedge fund industry has two perverse features: the fees are convex in the level of performance, and gains and losses are treated asymmetrically. These features create incentives to take on increased risk, a point that has been discussed in both the empirical and theoretical finance literature (Starks 1987; Carpenter 2000; Lo 2001; Hodder and Jackwerth 2007). The approach taken here builds on this work by considering a much more general class of compensation contracts and by deriving theoretical bounds on how much manipulation is possible. Of the prior work on this topic, that of Lo (2001) is the closest to ours because he focuses explicitly on the question of how much money a strategic actor can make by deliberately manipulating the returns distribution using options trading strategies. Lo examines a hypothetical situation in which a manager takes short positions in S&P 500 put options that mature in one to three months and shows that such an approach would have generated very sizable excess returns relative to the market in the 1990s. (Of course this strategy could have lost a large amount of money if the market had gone down sufficiently.) The present paper builds on Lo’s approach by examining how far this type of manipulation can be taken and how much fee capture is theoretically possible. We do this by explicitly defining the strategy space that is available to potential entrants, and how they can use it to mimic high-performance managers. A related strand of the literature is concerned with the potential manipulation of standard performance measures, such as the Sharpe ratio, the appraisal ratio, and Jensen’s alpha.
It is well known that these and other measures can be “gamed” by manipulating the returns distribution without generating excess returns in expectation (Lhabitant 2000; Ferson and Siegel 2001). It is also known, however, that one can design performance measures that are immune to many forms of manipulation. These take the form of constant relative risk aversion utility functions averaged over the returns history (Goetzmann et al. 2007). We shall discuss these connections further in Section VI. Our main conclusion, however, is that a similar possibility theorem does not hold for compensation mechanisms. At first this may seem surprising:


for example, why would it not suffice to pay fund managers according to a linear increasing function of one of the manipulation-proof measures mentioned above? The difficulty is that a compensation mechanism not only must reward managers according to their actual ability, but also must screen out managers who have no ability. In other words, the mechanism must create incentives for skilled managers to participate and for unskilled managers not to participate. This turns out to be considerably more demanding, because managers with different skill levels have different opportunity costs and therefore different incentive-compatibility constraints.

III. THE MODEL

Performance-based compensation contracts rely on two types of inputs: the returns generated by the fund manager and the returns generated by a benchmark portfolio that serves as a comparator. Consider first a benchmark portfolio that generates a sequence of returns in each of T periods. Throughout we shall assume that returns are reported at discrete intervals, say at the end of each month or each quarter (though the value of the asset may evolve in continuous time). Let rft be the risk-free rate in period t and let Xt be the total return of the benchmark portfolio in period t, where Xt is a nonnegative random variable whose distribution may depend on the prior realizations x1, x2, . . . , xt−1. A fund that has initial value s0 > 0 and is passively invested in the benchmark will therefore have value s0 ∏_{1≤t≤T} Xt by the end of the Tth period. If the benchmark asset is risk-free, then Xt = 1 + rft. Alternatively, Xt may represent the return on a broad market index such as the S&P 500, in which case it is stochastic, though we do not assume stationarity. Let the random variables Yt ≥ 0 denote the period-by-period returns generated by a particular managed portfolio, 1 ≤ t ≤ T.
A compensation contract is typically based on a comparison between the returns Yt and the returns Xt generated by a suitably chosen benchmark. It will be mathematically convenient to express the returns of the managed portfolio as a multiple of the returns generated by the benchmark asset. Specifically, let us assume that Xt > 0 in each period t, and consider the random variable Mt ≥ 0 such that

(1)  Yt = Mt Xt.


A compensation contract over T periods is a vector-valued function φ: R_+^{2T} → R^{T+1} that specifies the payment to the manager in each period t = 0, 1, 2, . . . , T as a function of the amount of money invested and the realized sequences x = (x1, x2, . . . , xT) and m = (m1, m2, . . . , mT). We shall assume that the payment in period t depends only on the realizations x1, . . . , xt and m1, . . . , mt. We shall also assume that the payment is made at the end of the period, and cannot exceed the funds available at that point in time. (Payments due at the start of a period can always be taken out at the end of the preceding period, so this involves no real loss of generality. The payment in period zero, if any, corresponds to an upfront management fee.) This formulation is very general, and includes standard incentive schemes as well as commonly proposed reforms, such as “postponement” and “clawback” arrangements, in which bonuses earned in prior periods can be offset by penalties in later periods. These and a host of other variations are embedded in the assumption that the payment in period t, φt(m, x), can depend on the entire sequence of returns through period t. Let us consider a concrete example. Suppose that the contract calls for a 2% management fee that is taken out at the end of each year plus a 20% performance bonus on the return generated during the year in excess of the risk-free rate. Let the initial size of the fund be s0. Given a pair of realizations (m, x), let st(m, x) be the size of the fund at the start of year t after any upfront fees have been deducted. Then the management fee at the end of the first year will be 0.02 m1x1s1 and the bonus will be 0.2 (m1x1 − 1 − rf1)^+ s1. Hence

(2)  φ1 = [0.02 m1x1 + 0.2 (m1x1 − 1 − rf1)^+] s1.

Letting s2 = m1x1s1 − φ1 and continuing recursively, we find that in each year t,

(3)  φt = [0.02 mt xt + 0.2 (mt xt − 1 − rft)^+] st.

GAMING PERFORMANCE FEES BY PORTFOLIO MANAGERS

Alternatively, suppose that the contract specifies a 2% management fee at the end of each year plus a one-time 20% performance bonus that is paid only at the end of T years. In this case the size of the fund at the start of year t is st(m, x) = s0 (0.98)^{t−1} ∏_{1≤s≤t−1} ms xs. The management fee in the tth year equals

(4)   φt(m, x) = 0.02 st(m, x) mt xt = [0.02 (0.98)^{t−1} ∏_{1≤s≤t} ms xs] s0.

The final performance bonus equals 20% of the cumulative excess return relative to the risk-free rate, which comes to 0.2 [∏_{1≤t≤T} mt xt − ∏_{1≤t≤T} (1 + rft)]+ s0.

IV. PERFORMANCE MIMICRY

We say that a manager has superior skill if, in expectation, he or she delivers excess returns relative to a benchmark portfolio (such as a broad-based market index), through either private information, superior predictive powers, or access to payoffs outside the benchmark payoff space. A manager has no skill if he or she cannot deliver excess returns relative to the benchmark portfolio. Investors should not be willing to pay managers with no skill, because the investors can obtain the same expected returns by investing passively in the benchmark. We claim, however, that under any performance-based compensation contract, either the unskilled managers can capture some of the fees intended for the skilled managers, or else the contract is sufficiently unattractive that both the skilled and unskilled managers will not wish to participate.

We begin by examining the case where the contract calls only for nonnegative payments, that is, φt(m, x) ≥ 0 for all t, m, x. (In Section VII we shall consider the situation where φt(m, x) < 0 for some realizations m and x.) Note that nonnegative payments are perfectly consistent with clawback provisions, which reduce prior bonuses but do not normally lead to net assessments against the manager’s personal assets.

Given realized sequences m and x, define the manager’s cut in period t to be the fraction of the available funds at the end of the period that the manager takes in fees, namely,

(5)

ct(m, x) = φt(m, x)/[mt xt st(m, x)].

By assumption the fees are nonnegative and cannot exceed the funds available; hence 0 ≤ ct(m, x) ≤ 1 for all m, x. (If mt xt st(m, x) = 0, we let ct(m, x) = 1 and assume that the fund closes down.) The cut function is the vector-valued function c: R+^{2T} → [0, 1]^{T+1} such that c(m, x) = (c0(m, x), c1(m, x), . . . , cT(m, x)) for each pair (m, x). In our earlier example with a 2% end-of-period management fee and a 20% annual bonus, the cut function is

(6)   c0(m, x) = 0  and  ct(m, x) = 0.02 + 0.2 [1 − (1 + rft)/(mt xt)]+  for 1 ≤ t ≤ T.
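A minimal sketch of the cut function (6), with hypothetical input sequences:

```python
def cut_two_and_twenty(m, x, rf):
    """Cut function of equation (6): the fraction of end-of-period funds
    taken in fees under the 2% + 20% contract; c_0 = 0 (no upfront fee)."""
    cuts = [0.0]
    for mt, xt, rft in zip(m, x, rf):
        cuts.append(0.02 + 0.2 * max(1 - (1 + rft) / (mt * xt), 0.0))
    return cuts
```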

PROPOSITION 1. Let φ be a nonnegative compensation contract over T periods that is benchmarked against a portfolio generating returns X = (X1, X2, . . . , XT) > 0, and let c be the associated cut function. Given any target sequence of excess returns m ≥ 1, there exists a mimicking strategy M0(m) that delivers zero expected excess returns in every period (E[M0t] = 1), such that for every realization X = x of the benchmark asset, the mimic’s expected fees in period t (conditional on x) are at least

(7)

ct (m, x)[(1 − c0 (m, x)) · · · (1 − ct−1 (m, x))][x1 · · · xt ]s0 .

Note that, in this expression, the factor [(1 − c0(m, x)) · · · (1 − ct−1(m, x))] is the fraction left over after the manager has taken out his or her cut in previous periods. Hence the proposition says that in expectation, the mimic’s cut in period t is at least as large as the cut of a skilled manager who generates the excess returns sequence m with certainty. The difference is that the mimic’s cut is assessed on a fund that is compounding at the rate of the benchmark asset (∏_{1≤s≤t} xs), whereas the skilled manager’s cut is based on a portfolio compounding at the higher rate ∏_{1≤s≤t} ms xs. It follows that the skilled manager will earn more than the mimic in expectation. The key point, however, is not that skilled managers earn more than mimics, but that mimics can earn a great deal compared to the alternative, which is not to enter the market at all.

To understand the implications of this result, let us work through a simple example. Suppose that the benchmark asset consists of risk-free government bonds growing at a fixed rate of 4% per year. Consider a skilled manager who can deliver 10% over and above this every year, and is paid according to the standard two-and-twenty contract: a bonus equal to 20% of the excess return plus a management fee of 2%.5 In this case the excess annual return is (1.10)(1.04) − 1.04 = 0.104, so the performance bonus is 0.20(0.104) = 0.0208 per dollar in the fund at the start of the period. This comes to about 0.0208/[(1.10)(1.04)] = 0.0182 per dollar at the end of the period. By assumption the management fee is 0.02 per dollar at the end of the period. Therefore the cut, which is the total fee per dollar at the end of the period, is 0.0382 or 3.82%.

5. Of course it is unlikely that anyone would generate the same return year after year, but this assumption keeps the computations simple.

Proposition 1 says that a manager with no skill has a mimicking strategy that in expectation earns at least 3.82% per year of a fund that is compounding at 4% per year before fees, and at 0.027% per year after fees (1.04(1 − 0.0382) = 1.00027). As t becomes large, the probability that the fund goes bankrupt before year t approaches one. However, the mimic’s expected earnings in any given year t are actually increasing with t, because in expectation the fund is compounding at a higher rate (4%) than the manager is taking out in fees (3.82%).

The key to proving Proposition 1 is the following result.

LEMMA 1. Consider any target sequence of excess returns m = (m1, . . . , mT) ≥ (1, 1, . . . , 1). A mimic has a strategy M0(m) that, for every realized sequence of returns x of the benchmark portfolio, generates the returns sequence (m1x1, . . . , mTxT) with probability at least 1/∏_{1≤t≤T} mt.

We shall sketch the idea of the proof here; in the Appendix we show how to execute the strategy using puts and calls on standard market indices with Black–Scholes pricing.

Proof Sketch. Choose a target excess-returns sequence m1, m2, . . . , mT ≥ 1. At the start of period 1 the mimic has capital equal to s0. Assume that he or she invests it entirely in the benchmark asset. He or she then uses the capital as collateral to take a position in the options market. The options position amounts to placing a fair bet that bankrupts the fund with low probability (1 − 1/m1) and inflates it by the factor m1 with high probability (1/m1) by the end of the period.
If the high-probability outcome occurs, the mimic has end-of-period capital equal to m1x1s0, whereas if the low-probability outcome occurs, the fund goes bankrupt. The mimic repeats this construction in each successive period t using the corresponding value mt as the target. By the end of the Tth period the strategy will have generated the returns sequence (m1x1, . . . , mTxT) with probability 1/∏_{1≤t≤T} mt, and this holds for every realization x of the benchmark portfolio. This concludes the outline of the proof of Lemma 1.
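The proof sketch lends itself to a Monte Carlo illustration. In the following sketch a simple coin flip stands in for the fair options bet, and the manager’s cut is held fixed at 3.82% (the two-and-twenty figure computed above); both simplifications are ours, for illustration only.

```python
import random

def run_mimic(m, x, cut, s0=1.0):
    """One run of the mimicking strategy: each period a fair coin-flip bet
    (standing in for the options position) multiplies the fund by m_t with
    probability 1/m_t and bankrupts it otherwise; the manager then takes a
    fixed cut of the end-of-period funds. Returns (total fees, survived)."""
    s, fees = s0, 0.0
    for mt, xt in zip(m, x):
        if random.random() >= 1.0 / mt:  # the fair bet fails: fund wiped out
            return fees, False
        s *= mt * xt                     # target return m_t x_t delivered
        fee = cut * s
        fees += fee
        s -= fee
    return fees, True

random.seed(0)
T, n, cut = 5, 200_000, 0.0382
m, x = [1.10] * T, [1.04] * T
runs = [run_mimic(m, x, cut) for _ in range(n)]
mean_fees = sum(f for f, _ in runs) / n
survival_rate = sum(1 for _, ok in runs if ok) / n
# theory: survival probability prod(1/m_t); expected fees equal the
# Proposition 1 bound  sum_t cut * (1 - cut)^(t-1) * (x_1 ... x_t) * s_0
theory_surv = 1.10 ** (-T)
theory_fees = sum(cut * (1 - cut) ** t * 1.04 ** (t + 1) for t in range(T))
```

The simulated survival frequency approaches ∏ 1/mt and the simulated mean fees approach the bound in Proposition 1 (here the bound holds with equality because the cut is constant).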


Proposition 1 is now proved as follows. Choose a particular sequence of excess returns m ≥ 1. Under the mimicking strategy defined in the Lemma, for every realization x and every period t, the mimic generates excess returns m1, m2, . . . , mt ≥ 1 with probability at least 1/(m1m2 · · · mt). With this same probability he or she earns

ct(m, x)[(1 − c0(m, x)) · · · (1 − ct−1(m, x))][x1 · · · xt][m1 · · · mt]s0.

Thus, because his or her earnings are always nonnegative, expected earnings in period t must be at least ct(m, x)[(1 − c0(m, x)) · · · (1 − ct−1(m, x))][x1 · · · xt]s0. This concludes the proof of Proposition 1.6

6. This construction is somewhat reminiscent of the doubling-up strategy, in which one keeps doubling one’s stake until a win occurs (Harrison and Kreps 1979). Our setup differs in several crucial respects, however: the manager only enters into a finite number of gambles and he or she cannot borrow to finance them. More generally, the mimicking strategy is not a method for beating the odds in the options markets; it is a method for manipulating the distribution of returns in order to earn large fees from investors.

V. DISCUSSION

Mimicking strategies are straightforward to implement using standard derivatives, and they generate returns that look good for extended periods while providing no value-added to investors. (Recall that the investors can earn the same expected returns with possibly much lower variance by investing passively in the benchmark asset.) Similar strategies can be used to mimic distributions of returns as well as particular sequences of returns.7

7. Indeed, let Mt be a nonnegative random variable with expectation E[Mt] = m̄t > 1. Suppose that a mimic wishes to produce the distribution Mt Xt in period t, where Xt is the return from the benchmark. The random variable Mt′ = (1/m̄t)Mt represents a fair bet. The mimic can therefore implement Mt Xt with probability at least 1/m̄t by first placing the fair bet Mt′ and then inflating the fund by the factor m̄t using the strategy described in the Lemma.

In fact, however, there is no need to mimic a distribution of returns. Managers are paid on the basis of realized returns, not distributions. Hence all a mimic needs to do is target some particular sequence of excess returns that might have arisen from a distribution (and that generates high fees). Proposition 1 shows that he or she will earn at least as high a cut in expectation as a skilled manager would have earned had he or she generated the same sequence.

Of course, the fund’s investors would not necessarily approve if they could see what the mimic was doing. The point of the analysis, however, is to show what can happen when investors cannot observe the managers’ underlying strategies—a situation that is quite common in the hedge fund industry. Performance contracts that are based purely on reported returns, and that place no restrictions on managers’ strategies, are highly vulnerable to manipulation.

Expression (7) in Proposition 1 shows how much fee capture is possible, and why it is very difficult to eliminate this problem by restructuring the compensation contract. One common proposal, for example, is to delay paying a performance bonus for a substantial period of time. To be concrete, let us suppose that a manager can only be paid a performance bonus after five years, at which point he or she will earn 20% of the total return from the fund in excess of the risk-free rate compounded over five years. For example, with a risk-free rate of 4%, he or she will earn a performance bonus equal to 0.20[s5 − (1.04)^5 s0]+, where s5 is the value of the fund at the end of year 5.

Consider a hypothetical manager who earns multiplicative excess returns equal to 1.10 each year. Under the above contract, his or her bonus in year 5 would be 0.20[(1.10)^5(1.04)^5 s0 − (1.04)^5 s0] ≈ 0.149s0, that is, about 15% of the amount initially invested. Let us compare this to the expected earnings of someone who generates apparent 10% excess returns using the mimicking strategy. The mimic’s strategy runs for five years with probability (1.10)^{−5} = 0.621; hence his or her expected bonus is about (0.621)(0.149)s0 = 0.0925s0. Thus, with a five-year postponement, the mimic earns an expected bonus equal to more than 9% of the amount initially invested.

Now consider a longer postponement, say ten years. The probability that the mimic’s strategy will run this long is (1.10)^{−10} ≈ 0.386. However, the bonus will be calculated on a larger base. Namely, if the mimic’s fund does keep running for ten years, the bonus will be 0.20[(1.10)^{10}(1.04)^{10} − (1.04)^{10}]s0 ≈ 0.472s0.
Therefore the expected bonus will be approximately (0.386)(0.472s0) = 0.182s0, or about 18% of the amount initially invested. Indeed, it is straightforward to show that under this particular bonus scheme, the expected payment to the mimic increases the longer the postponement is.8

8. Per dollar of initial investment, the bonus in the final period T is 0.20[(1.10)^T(1.04)^T − (1.04)^T] and the probability of earning it is (1.10)^{−T}. Hence the expected bonus is 0.20[(1.10)^T(1.04)^T − (1.04)^T]/(1.10)^T = 0.20[1 − (1.1)^{−T}][1.04]^T, which is increasing in T.
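The computation in footnote 8 can be sketched as a small function; the 10% target excess return and 4% risk-free rate follow the example above.

```python
def expected_postponed_bonus(T, g=0.10, rf=0.04):
    """Expected bonus per dollar initially invested when the one-time 20%
    performance bonus is postponed T years and the mimic targets a g = 10%
    annual excess return (footnote 8)."""
    prob_survive = (1 + g) ** (-T)
    bonus_if_alive = 0.20 * (((1 + g) * (1 + rf)) ** T - (1 + rf) ** T)
    return prob_survive * bonus_if_alive
```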


It is, of course, true that the longer the postponement, the greater the risk that the fund will go bankrupt before the mimic can collect his bonus. Thus postponement may act as a deterrent for mimics who are sufficiently risk-averse. However, this does not offer much comfort for several reasons. First, as we have just seen, the postponement must be quite long to have much of an impact. Second, not all mimics need be risk-neutral; it suffices that some of them are. Third, there is a simple way for a risk-averse mimic to diversify away his or her risk: run several funds in parallel (under different names) using independent mimicking strategies. Suppose, for example, that a mimic runs n independent funds of the type described above, each yielding 10% annual excess returns with probability 1/1.1 ≈ 0.91 in each year. The probability that at least one of the funds survives for T years or more is 1 − (1 − 1/1.1^T)^n. This can be made as close to one as we like by choosing n to be sufficiently large.9

VI. PERFORMANCE MEASURES VERSUS PERFORMANCE PAYMENTS

The preceding analysis leaves open the possibility that performance contracts with negative payments might solve the problem. Before turning to this case, however, it will be useful to consider the relationship between statistical measures of performance and performance-based compensation contracts. Some standard measures of performance, such as Jensen’s alpha or the Sharpe ratio, are easily gamed by manipulating the returns distribution. Other measures avoid some forms of manipulation, but (as we shall see) they do not solve the problem of how to pay for performance.

Consider, for example, the following class of measures proposed by Goetzmann et al. (2007). Let u(x) = (1 − ρ)^{−1} x^{1−ρ} be a constant relative risk aversion utility function with ρ > 1. If a fund delivers the sequence of returns Mt(1 + rft), 1 ≤ t ≤ T, one can define the performance measure

(8)   G(m) = (1 − ρ)^{−1} ln[(1/T) ∑_{1≤t≤T} mt^{1−ρ}],   ρ > 1.

9. A related point is that, in any large population of funds run by mimics, the probability is high that at least one of them will look extremely good, perhaps better than many funds run by skilled managers (though not necessarily better than the best of the funds run by skilled managers). Correcting for multiplicity poses quite a challenge in testing for excess returns in financial markets; for a further discussion of this issue, see Foster, Stine, and Young (2008).


A variant of this approach that is used by the rating firm Morningstar (2006) is

(9)   G∗(m) = [(1/T) ∑_{1≤t≤T} 1/mt²]^{−1/2} − 1.
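Both measures can be sketched directly; the choice ρ = 3 and the return sequences below are hypothetical illustrations.

```python
import math

def goetzmann_measure(m, rho=3.0):
    """Performance measure (8): (1 - rho)^(-1) * ln((1/T) * sum_t m_t^(1-rho)),
    for a CRRA coefficient rho > 1 (rho = 3 is a hypothetical choice)."""
    T = len(m)
    return (1 / (1 - rho)) * math.log(sum(mt ** (1 - rho) for mt in m) / T)

def morningstar_measure(m):
    """Variant (9): ((1/T) * sum_t m_t^(-2))^(-1/2) - 1."""
    T = len(m)
    return (sum(mt ** (-2) for mt in m) / T) ** (-0.5) - 1
```

For a constant sequence mt ≡ c, the measures reduce to G(m) = ln c and G∗(m) = c − 1, so both rank managers by their expected excess-return multiple in this degenerate case.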

These and related measures rank managers according to their ability to generate excess returns in expectation. But to translate these (and other) statistical measures into monetary payments for performance leads to trouble. First, payments must be made on realized returns; one cannot wait forever to see whether the returns are positive in expectation. Second, if the payments are always nonnegative, then the mimic can capture some of them, as Proposition 1 shows. On the other hand, if the payments are allowed to be negative, then they are constrained by the managers’ ability to pay them.10 In the next section we will show that this leads to an impossibility theorem: if the penalties are sufficient to screen out the mimics, then they also screen out skilled managers of arbitrarily high ability.

10. Note that if payments are linear and increasing in the performance measure (8), then arbitrarily large penalties will be imposed when mt is close to zero.

VII. PENALTIES

Consider a general compensation mechanism φ that sometimes imposes penalties; that is, φt(m, x) < 0 for some values of m, x, and t. To simplify the exposition, we assume throughout this section that the benchmark asset is risk-free; that is, xt = 1 + rft for all t. Suppose that a fund starts with an initial amount s0, which we can assume without loss of generality is s0 = 1.

To illustrate the issues that arise when penalties are imposed, let us begin by considering the one-period case. Let (1 + rf1)m ≥ 0 be the fund’s total return in period 1, and let φ(m) be the manager’s fee as a function of m. The worst-case scenario (for the investors) is that m = 0. Assume that in this case the manager suffers a penalty φ(0) < 0. There are two cases to consider: (i) the penalty arises because the manager holds an equity stake of size |φ(0)| in the fund, which he or she loses when the fund goes bankrupt; or (ii) the penalty is held in escrow in a safe asset earning the risk-free rate, and is paid out to the investors if the fund goes bankrupt.

The first case—the equity stake—would be an effective deterrent provided the mimic was sufficiently risk-averse and was prevented from diversifying his or her risk across different funds. But an equity stake will not deter a risk-neutral mimic, because the expected return from the mimic’s strategy is precisely the risk-free rate, so his or her stake actually earns a positive amount in expectation, namely (1 + rf1)|φ(0)|, and in addition he or she earns positive fees from managing the portion of the fund that he or she does not own.

Now consider the second case, in which future penalties are held in an escrow account earning the risk-free rate of return. For our purposes it suffices to consider the penalty when the fund goes bankrupt. To cover this event, the amount placed in escrow must be at least b = −φ(0)/(1 + rf1) > 0. Fix some m∗ ≥ 1 and consider a risk-neutral mimic who generates the return m∗(1 + rf1) with probability 1/m∗ and goes bankrupt with probability 1 − 1/m∗. To deter such a mimic, the fees earned during the period must be nonpositive in expectation; that is,

(10)

φ(m∗ )/m∗ + φ(0)(1 − 1/m∗ ) ≤ 0.

Because a mimic can target any such m∗, (10) must hold for all m∗ ≥ 1. Now consider a skilled manager who can generate the excess return m∗ with certainty. This manager must also put the amount b in escrow, because ex ante all managers are treated alike and the investors cannot distinguish between them. However, this involves an opportunity cost for the skilled manager, because by investing b in his or her own private fund he or she could have generated the return m∗(1 + rf1)b. The resulting opportunity cost for the skilled manager is m∗(1 + rf1)b − (1 + rf1)b = −(m∗ − 1)φ(0). Assuming that utility is linear in money (i.e., the manager is risk-neutral), he or she will not participate if the opportunity cost exceeds the fee, that is, if

(10′)

φ(m∗ ) + (m∗ − 1)φ(0) ≤ 0.
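Conditions (10) and (10′) can be checked numerically. In the following sketch the fee schedule φ is a hypothetical example (20% of the excess return, with a bankruptcy penalty of −0.5), chosen only to illustrate that whenever (10) holds for all m∗ ≥ 1, so does (10′).

```python
def phi(m_star):
    """Hypothetical one-period fee schedule: 20% of the excess return for
    m* >= 1, and a penalty phi(0) = -0.5 paid at bankruptcy."""
    return -0.5 if m_star == 0 else 0.20 * (m_star - 1)

def deters_mimic(m_star):
    """Condition (10): the expected fee of a mimic targeting m* is nonpositive."""
    return phi(m_star) / m_star + phi(0) * (1 - 1 / m_star) <= 0

def deters_skilled(m_star):
    """Condition (10'): the skilled manager's fee is outweighed by the
    opportunity cost of the escrowed penalty."""
    return phi(m_star) + (m_star - 1) * phi(0) <= 0

grid = [1 + k / 10 for k in range(51)]  # m* values in [1, 6]
```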

Dividing (10′) by m∗, we see that it follows immediately from (10), which holds for all m∗ ≥ 1. We have therefore shown that, if a one-period contract deters all risk-neutral mimics, it also deters any risk-neutral manager who generates excess returns. The following generalizes this result to the case of multiple periods and randomly generated return sequences.

PROPOSITION 2. There is no compensation mechanism that separates skilled from unskilled managers solely on the basis of


their returns histories. In particular, any compensation mechanism that deters unskilled risk-neutral mimics also deters all skilled risk-neutral managers who consistently generate returns in excess of the risk-free rate.

Proof. Let xt = 1 + rft be the risk-free rate of return in period t. To simplify the notation we shall drop the xt’s and let φt(m) denote the payment (possibly negative) in period t when the manager delivers the excess return sequence m. The previous argument shows why holding an equity stake in the fund itself does not act as a deterrent for a risk-neutral mimic. We shall therefore restrict ourselves to the situation where future penalties must be held in escrow.

Consider an arbitrary excess returns sequence m ≥ 1. Let the mimic’s strategy M0(m) be constructed so that it stays in business through period t with probability exactly 1/(m1 · · · mt). Consider some period t ≤ T. The probability that the fund survives to the start of period t without going bankrupt is 1/(m1 · · · mt−1). At the end of period t, the mimic earns φt(m) with probability 1/mt and φt(m1, . . . , mt−1, 0, . . . , 0) with probability (mt − 1)/mt. Hence the net present value of the period-t payments is

(11)

φt(m)/[(m1 · · · mt)(1 + rf1) · · · (1 + rft)] + (mt − 1)φt(m1, . . . , mt−1, 0, . . . , 0)/[(m1 · · · mt)(1 + rf1) · · · (1 + rft)].

To deter a risk-neutral mimic, the net present value V0(m) of all payments must be nonpositive:

(12)   V0(m) = ∑_{1≤t≤T} { φt(m)/[(m1 · · · mt)(1 + rf1) · · · (1 + rft)] + (mt − 1)φt(m1, . . . , mt−1, 0, . . . , 0)/[(m1 · · · mt)(1 + rf1) · · · (1 + rft)] } ≤ 0.

(Although some of these payments may have to be held in escrow, this does not affect their net present value to the mimic, because they earn the risk-free rate until they are paid out.)

Now consider a skilled manager who can deliver the sequence m ≥ 1 with certainty. (We shall consider distributions over such sequences in a moment.) Let B(m) be the set of periods t in which a penalty must be paid if the fund goes bankrupt during that period


and not before: (13)

B(m) = {t: φt (m1 , . . . , mt−1 , 0, . . . , 0) < 0}.

For each t ∈ B(m), let (14)

bt(m) = −φt(m1, . . . , mt−1, 0, . . . , 0)/(1 + rft) > 0.

This is the amount that must be escrowed during the tth period to ensure that the investors can be paid if the fund goes bankrupt by the end of the period.

The skilled manager evaluates the present value of all future fees, penalties, and escrow payments using his or her personal discount factor, which for period-t payments is δt = δt(m) = 1/∏_{1≤s≤t} ms(1 + rfs).

Consider any period t ∈ B(m). To earn the fee φt(m) at the end of period t, the manager must put bt(m) in escrow at the start of the period (if not before).11 Conditional on delivering the sequence m, he or she will get this back with interest at the end of period t, that is, will get back the amount (1 + rft)bt(m). For the skilled manager, the present value of this period-t scenario is

(15)   δt φt(m) + δt(1 + rft)bt(m) − δt−1 bt(m) = δt φt(m) + δt[(mt − 1)φt(m1, . . . , mt−1, 0, . . . , 0)]
       = φt(m)/[(m1 · · · mt)(1 + rf1) · · · (1 + rft)] + (mt − 1)φt(m1, . . . , mt−1, 0, . . . , 0)/[(m1 · · · mt)(1 + rf1) · · · (1 + rft)].

Now consider a period t ∉ B(m). This is a period in which the manager earns a nonnegative fee even if the fund goes bankrupt; hence nothing must be held in escrow. The net present value of the fees in any such period is φt(m)/[(m1 · · · mt)(1 + rf1) · · · (1 + rft)]. Thus, summed over all periods, the net present value of the fees for the skilled manager comes to

(16)   V(m) = ∑_{t∈B(m)} { φt(m)/[(m1 · · · mt)(1 + rf1) · · · (1 + rft)] + (mt − 1)φt(m1, . . . , mt−1, 0, . . . , 0)/[(m1 · · · mt)(1 + rf1) · · · (1 + rft)] }
       + ∑_{t∉B(m)} φt(m)/[(m1 · · · mt)(1 + rf1) · · · (1 + rft)].

11. If penalties must be escrowed more than one period in advance, the opportunity cost to the skilled manager will be even greater and the contract even more unattractive; hence our conclusions still hold.

Because mt ≥ 1 and φt(m1, . . . , mt−1, 0, . . . , 0) ≥ 0 for all t ∉ B(m), we know that

(17)   ∑_{t∉B(m)} (mt − 1)φt(m1, . . . , mt−1, 0, . . . , 0)/[(m1 · · · mt)(1 + rf1) · · · (1 + rft)] ≥ 0.

From (16) and (17) it follows that

(18)   V(m) ≤ ∑_{1≤t≤T} { φt(m)/[(m1 · · · mt)(1 + rf1) · · · (1 + rft)] + (mt − 1)φt(m1, . . . , mt−1, 0, . . . , 0)/[(m1 · · · mt)(1 + rf1) · · · (1 + rft)] }.

But the right-hand side of this expression must be nonpositive to deter the risk-neutral mimics (see expression (12)). It follows that any contract that is unattractive for the risk-neutral mimics is also unattractive for any risk-neutral skilled manager, no matter what excess returns sequence m ≥ 1 he or she generates. Because this statement holds for every excess returns sequence, it also holds for any distribution over excess return sequences. This concludes the proof of Proposition 2.

VIII. ATTRACTING NEW MONEY

The preceding analysis shows that any compensation mechanism that rewards highly skilled portfolio managers can be gamed by mimics without delivering any value-added to investors. To achieve this, however, the mimic takes a calculated risk in each period that his fund will suffer a total loss. A manager who is concerned about building a long-term reputation may not want to take such risks; indeed he or she may make more money in the long run if his or her returns are lower and he or she stays in business longer, because this strategy will attract a steady inflow of new money. However, although there is empirical evidence that past performance does affect the inflow of new money to some extent, the precise relationship between performance and flow is a matter of debate.12

Fortunately we can incorporate flow–performance relationships into our framework without committing ourselves to a specific model of how they work, and the previous results remain essentially unchanged. To see why, consider a benchmark asset generating a returns series X and a manager who delivers excess returns M relative to X. Let Zt = Zt(m1, . . . , mt−1; x1, . . . , xt−1) be a random variable that describes how much net new money flows into the fund at the start of period t as a function of the returns in prior periods. In keeping with our general setup, we assume that Zt is a multiplicative random variable; that is, its realization zt represents the proportion by which the fund grows (or shrinks) at the start of period t compared to the amount that was in the fund at the end of period t − 1. Thus, if a fund starts at size 1, its total value at the start of period t is

(19)   Zt ∏_{1≤s≤t−1} Ms Xs Zs.
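Equation (19) admits a direct sketch; the flow multipliers zt used below are hypothetical.

```python
def fund_value_with_flows(m, x, z):
    """Fund value at the start of each period under equation (19):
    z_t times the product of m_s * x_s * z_s over 1 <= s <= t-1,
    starting from a fund of size 1."""
    values, prod = [], 1.0
    for mt, xt, zt in zip(m, x, z):
        values.append(zt * prod)
        prod *= mt * xt * zt
    return values
```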

Given any excess returns sequence m ≥ 1 over T years, a mimic can reproduce it with probability 1/∏_{1≤t≤T} mt for all realizations of the benchmark returns. Because by hypothesis the flow of new money depends only on m and x, it follows that the probability is at least 1/∏_{1≤t≤T} mt that the mimic will attract the same amount of new money into the fund as the skilled manager.

What patterns of returns attract the largest inflow of new money is an open question that we shall not attempt to address here. However, there is some evidence to suggest that investors are attracted to returns that are steady even though they are not spectacular. Consider, for example, a fund that grows at 1% per month year in and year out. (The recent Ponzi scheme of Bernard Madoff grew to some $50 billion by offering returns of about this magnitude.) This can be generated by a mimic who delivers a monthly return of 0.66% on top of a risk-free rate of 0.33%. The probability that such a fund will go under in any given year is 1 − (1.0066)^{−12} = 0.076, or about 7.6%. In expectation, such a fund will stay in business and continue to attract new money for about thirteen years.

12. See, for example, Gruber (1996); Massa, Goetzman, and Rouwenhorst (1999); Chevalier and Ellison (1997); Sirri and Tufano (1998); Berk and Green (2004).

One could of course argue that portfolio managers might not want to take the risk involved in such schemes if they cared sufficiently about their reputations. Some managers might want to stay in business much longer than thirteen years; others might be averse to the damage that bankruptcy would do to their personal reputations or self-esteem. We do not deny that these considerations may serve as a deterrent for many people. But our argument only requires the existence of some people for whom the prospect of high expected earnings outweighs such concerns. The preceding results show that it is impossible to keep these types of managers out of the market without keeping everyone out.

IX. CONCLUSIONS

In this paper we have shown how mimicry can be used by portfolio managers to game performance fees. The framework allows us to estimate how much a mimic can earn under different incentive structures; it also shows that commonly advocated reforms of the incentive structure cannot be relied upon to screen out unskilled risk-neutral managers who do not deliver excess returns to investors.

The analysis is somewhat unconventional from a game-theoretic standpoint, because we did not identify the set of players, their utility functions, or their strategy spaces. The reason is that we do not know how to specify any of these components with precision. To write down the players’ utility functions, for example, we would need to know their discount factors and degrees of risk aversion, and we would also need to know how their track records generate inflows of new money. Although it might be possible to characterize the equilibria of a fully specified game among investors and managers of different skill levels, this poses quite a challenge that would take us well beyond the framework of the present paper.
The advantage of the mimicry argument is that we can draw inferences about the relationship between different players’ earnings without knowing the details of their payoff functions or how their track records attract new money. The argument is that, if someone is producing returns that earn large fees in expectation, then someone else (with no skill) can mimic the first type and also earn large fees in expectation without knowing anything about how the first type is actually doing it. In this paper we have shown how to apply this idea to financial markets.


We conjecture that it may prove useful in other situations where there are many players, the game is complex, and the equilibria are difficult to pin down precisely.

APPENDIX

Here we shall show explicitly how to implement the mimicking strategy that was described informally in the text, using puts and calls. We shall consider two situations: (i) the benchmark asset is risk-free, such as U.S. Treasury bills; (ii) the benchmark asset is a market index, such as the S&P 500. We shall call the first case the risk-free model and the second case the market index model. As is customary in the finance literature, we shall assume that the price of the market index evolves in continuous time τ according to a stochastic process of the form

(20)

dPτ = μPτ dτ + σ Pτ dWτ ;

that is, Pτ is a geometric Brownian motion with mean μ and variance σ². The reporting of results is done at discrete time periods, such as the end of a month or a quarter. Let t = 1, 2, 3, . . . denote these periods, and let rft denote the risk-free rate during period t. Similarly, let r̃ft denote the continuous-time risk-free rate during period t, which we shall assume is constant during the period and satisfies r̃ft ≤ μ. Without loss of generality, we may assume that each period is of length one, in which case e^{r̃ft} = 1 + rft. Mimicking strategies will be implemented using puts and calls on the market index, whose prices are determined by the Black–Scholes formula (see, for example, Hull [2009]).

LEMMA 1. Consider any target sequence of excess returns m = (m1, . . . , mT) ≥ (1, 1, . . . , 1) relative to a benchmark asset, which can be either risk-free or a market index. A mimic has a strategy M0(m) that, for every realized sequence of returns x of the benchmark asset, generates the returns sequence (m1x1, . . . , mTxT) with probability at least 1/∏_{1≤t≤T} mt.

Proof. The options with which one implements the strategy depend on whether the benchmark asset is risk-free or the market index. We shall treat the risk-free case first.

Fix a target sequence of excess returns m = (m1, . . . , mT) ≥ (1, 1, . . . , 1). We need to show that the mimic has a strategy that in each period t delivers the return mt(1 + rft) with probability at least 1/mt. At the start of period t, the mimic invests everything in the risk-free asset (e.g., U.S. Treasury bills). He then writes (or shorts) a certain quantity q of cash-or-nothing puts that expire before the end of the period. Each such option pays one dollar if the market index is below the strike price at the time of expiration. Let Δ be the length of time to expiration and let s be the strike price divided by the index’s current price; without loss of generality, we may assume that the current price is 1. Let Φ denote the cumulative normal distribution function. Then (see Hull [2009, Section 24.7]) the option’s present value is e^{−r̃ft Δ}v, where

(21)

√ v = [(ln s − r˜ft + σ 2 /2)/σ ],

and the probability that the put will be exercised is (22)

√ p = [(ln s − μ + σ 2 /2)/σ ].
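To make formulas (21) and (22) concrete, here is a small Python sketch (standard library only; all parameter values are hypothetical, not taken from the paper) that computes v and p, inverts (21) to find the strike yielding the target value v = 1 − 1/m_t used below in the proof, and checks both that p ≤ v whenever r̃_ft ≤ μ and that writing the maximum number of covered puts grows the fund by the factor m_t e^{r̃_ft Δ} when the puts expire worthless:

```python
from math import exp, log, sqrt
from statistics import NormalDist

N = NormalDist()  # standard normal distribution

def put_value(s, r, sigma, delta):
    # Eq. (21): the factor v in the cash-or-nothing put's present value
    # e^{-r*delta} * v, for relative strike s (current index price = 1).
    return N.cdf((log(s) - r * delta + 0.5 * sigma**2 * delta) / (sigma * sqrt(delta)))

def exercise_prob(s, mu, sigma, delta):
    # Eq. (22): real-world probability p that the index ends below the strike.
    return N.cdf((log(s) - mu * delta + 0.5 * sigma**2 * delta) / (sigma * sqrt(delta)))

def strike_for_target(m_t, r, sigma, delta):
    # Invert eq. (21): pick s so that v = 1 - 1/m_t, as the proof requires.
    v = 1.0 - 1.0 / m_t
    return exp(N.inv_cdf(v) * sigma * sqrt(delta) + r * delta - 0.5 * sigma**2 * delta)

# Hypothetical parameters: continuous risk-free rate 2%, index drift 8%,
# volatility 20%, one-month options, per-period target m_t = 1.25,
# and a fund worth w = 100 at the start of the period.
r, mu, sigma, delta, m_t, w = 0.02, 0.08, 0.20, 1.0 / 12.0, 1.25, 100.0

s = strike_for_target(m_t, r, sigma, delta)
v = put_value(s, r, sigma, delta)        # = 1 - 1/m_t by construction
p = exercise_prob(s, mu, sigma, delta)
assert p <= v                            # guaranteed whenever r <= mu

# Writing the maximum number of covered puts, q = w*e^{r*delta}/(1 - v),
# grows the fund by the factor m_t * e^{r*delta} if the puts expire worthless:
q = w * exp(r * delta) / (1.0 - v)
fund = w * exp(r * delta) + v * q        # proceeds reinvested at the risk-free rate
assert abs(fund - m_t * w * exp(r * delta)) < 1e-9
```

Because p uses the drift μ while v uses the smaller rate r̃_ft, the mimic is paid more for the put than its true exercise probability warrants, which is exactly the wedge the proof exploits.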

Assume that the value of the fund at the start of the period is w dollars. By selling q options, the mimic collects an additional e^{−r̃_ft Δ}vq dollars. By investing everything (including the proceeds from the options) in the risk-free asset, he or she can cover up to q options when they are exercised, provided that e^{r̃_ft Δ}w + vq ≥ q. Thus the maximum number of covered options the mimic can write is q = we^{r̃_ft Δ}/(1 − v). The mimic chooses the time to expiration Δ and the strike price s so that v satisfies v = 1 − 1/m_t. With probability p the options are exercised and the fund is entirely cleaned out (i.e., paid to the option-holders). With probability 1 − p the options expire without being exercised, in which case the fund has grown by the factor m_t e^{r̃_ft Δ} over the time interval Δ. The mimic enters into this gamble only once per period, and the funds are invested in the risk-free asset during the remaining time. Hence the total return during the period is m_t(1 + r_ft) with probability 1 − p and zero with probability p. We claim that p ≤ v; indeed, this follows immediately from (21) and (22) and the assumption that r̃_ft ≤ μ. Therefore, if the mimic had w_{t−1} > 0 dollars in the fund at the start of period t, then by the end of the period he would have m_t(1 + r_ft)w_{t−1} dollars with probability at least 1/m_t = 1 − v and zero dollars with probability at most 1 − 1/m_t. Therefore after T periods he would


have generated the target sequence of excess returns (m_1, . . . , m_T) with probability at least 1/∏_{1≤t≤T} m_t, as asserted in the Lemma.

Next we consider the case where the benchmark asset is the market index. The basic idea is the same as before, except that in this case the mimic invests everything in the market index (rather than in Treasury bills), and he or she shorts asset-or-nothing options rather than cash-or-nothing options. (An asset-or-nothing option pays the holder one share if the market index closes above the strike price in the case of a call, or below it in the case of a put; otherwise the payout is zero.) As before, the mimic shorts the maximum number of options that he or she can cover, where the strike price and time to expiration are chosen so that the probability that they are exercised is at most 1 − 1/m_t. With probability at least 1/m_t, this strategy increases the number of shares of the market index held in the fund by a factor m_t. Hence, with probability at least 1/m_t, it delivers a total return equal to m_t(P_t/P_{t−1}) = m_t x_t for every realization of the market index.

It remains to be shown that the strike price s and time to expiration Δ can be chosen so that the preceding conditions are satisfied. There are two cases to consider: μ − r̃_ft ≥ σ² and μ − r̃_ft < σ². In the first case the mimic shorts an asset-or-nothing put, whose present value is

(23)    v = Φ[(ln s − r̃_ft Δ − σ²Δ/2)/(σ√Δ)]

and whose probability of being exercised is

(24)    p = Φ[(ln s − μΔ + σ²Δ/2)/(σ√Δ)].

(See Hull [2009, Section 24.7].) From our assumption that μ − r̃_ft ≥ σ², it follows that p ≤ v, which is the desired conclusion. If, on the other hand, μ − r̃_ft < σ², then the mimic shorts asset-or-nothing calls instead of asset-or-nothing puts. In this case the analog of formulas (23) and (24) ensures that p ≤ v (Hull [2009, Section 24.7]). Given any target sequence (m_1, m_2, . . . , m_T) ≥ (1, 1, . . . , 1), this strategy produces returns (m_1 x_1, . . . , m_T x_T) with probability at least 1/∏_{1≤t≤T} m_t for every realization x of the benchmark asset. This concludes the proof of the Lemma.

We remark that the probability bound 1/∏_{1≤t≤T} m_t is conservative. Indeed, the proof shows that the probability that the options are exercised may be strictly less than is required for the


conclusion to hold. Furthermore, in practice, the pricing formulas are not completely accurate for out-of-the-money options, which tend to be overvalued (the so-called "volatility smile"). This implies that the seller can realize an even larger premium for a given level of risk than is implied by the Black–Scholes formula. Of course, there will be some transaction costs in executing these strategies, and these will work in the opposite direction. Although it is beyond the scope of this paper to try to estimate such costs, the fact that the mimicking strategy requires only one trade per period in a standard market instrument suggests that these costs will be very low. In any event, it is easy to modify the argument to take such costs into account. Suppose that the cost of taking an options position is some fraction ε of the option's payoff. In order to inflate the fund's return in a given period t by the factor m_t ≥ 1 after transaction costs, one would have to inflate the return by the factor m′_t = m_t/(1 − ε) before transaction costs. To illustrate: the transaction cost for an out-of-the-money option on the S&P 500 is typically less than 2% of the option price. Assuming the exercise probability is around 10%, the payout, if it is exercised, will be about ten times as large as the price, so ε will be about 0.2% of the option's payoff. Thus, in this case, the mimicking strategy would achieve a given target m_t net of costs with probability 0.998/m_t instead of with probability 1/m_t.

WHARTON SCHOOL, UNIVERSITY OF PENNSYLVANIA
UNIVERSITY OF OXFORD AND THE BROOKINGS INSTITUTION
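The arithmetic in the transaction-cost illustration above can be checked directly. A minimal sketch using the figures quoted in the text (the per-period target m_t is an arbitrary example value):

```python
# Figures quoted in the text: cost ≈ 2% of the option price, exercise
# probability ≈ 10% (so the payoff is about 10x the price).
cost_share_of_price = 0.02
p_exercise = 0.10
eps = cost_share_of_price * p_exercise    # cost as a share of the payoff ≈ 0.002

m_t = 1.25                                # an arbitrary per-period target
m_gross = m_t / (1.0 - eps)               # target before transaction costs
prob_net = 1.0 / m_gross                  # success probability net of costs
assert abs(prob_net - 0.998 / m_t) < 1e-12
```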

REFERENCES

Alcock, John, Animal Behavior: An Evolutionary Approach, 8th ed. (Sunderland, MA: Sinauer Associates, 2005).
Bebchuk, Lucian A., and Jesse Fried, Pay without Performance (Cambridge, MA: Harvard University Press, 2004).
Bebchuk, Lucian A., and Holger Spamann, "Regulating Bankers' Pay," Harvard Law and Economics Discussion Paper No. 641, 2009.
Berk, Jonathan B., and Richard C. Green, "Mutual Fund Flows and Performance in Rational Markets," Journal of Political Economy, 112 (2004), 1269–1295.
Carpenter, Jennifer N., "Does Option Compensation Increase Managerial Risk Appetite?" Journal of Finance, 55 (2000), 2311–2331.
Chevalier, Judith, and Glenn Ellison, "Risk Taking by Mutual Funds as a Response to Incentives," Journal of Political Economy, 105 (1997), 1167–1200.
Ferson, Wayne E., and Andrew F. Siegel, "The Efficient Use of Conditioning Information in Portfolios," Journal of Finance, 3 (2001), 967–982.
Foster, Dean P., Robert Stine, and H. Peyton Young, "A Martingale Test for Alpha," Wharton Financial Institutions Center Working Paper 08-041, 2008.
Gale, David, "A Curious Nim-Type Game," American Mathematical Monthly, 81 (1974), 876–879.


Goetzmann, William, Jonathan Ingersoll, Matthew Spiegel, and Ivo Welch, "Portfolio Performance Manipulation and Manipulation-Proof Performance Measures," Review of Financial Studies, 20 (2007), 1503–1546.
Gruber, Martin J., "Another Puzzle: The Growth in Actively Managed Mutual Funds," Journal of Finance, 51 (1996), 783–810.
Harrison, J. Michael, and David M. Kreps, "Martingales and Arbitrage in Multiperiod Securities Markets," Journal of Economic Theory, 20 (1979), 381–408.
Hasanhodzic, Jasmina, and Andrew W. Lo, "Can Hedge-Fund Returns Be Replicated? The Linear Case," Journal of Investment Management, 5 (2007), 5–45.
Hodder, James E., and Jens C. Jackwerth, "Incentive Contracts and Hedge Fund Management," Journal of Financial and Quantitative Analysis, 42 (2007), 811–826.
Hull, John C., Options, Futures and Other Derivatives, 7th ed. (New York: Prentice-Hall, 2009).
Kat, Harry M., and Helder P. Palaro, "Who Needs Hedge Funds: A Copula-Based Approach to Hedge Fund Return Replication," Cass Business School, City University London, 2005.
Lehrer, Ehud, "Any Inspection Rule Is Manipulable," Econometrica, 69 (2001), 1333–1347.
Lhabitant, Francois-Serge, "Derivatives in Portfolio Management: Why Beating the Market Is Easy," Derivatives Quarterly, 6 (2000), 39–45.
Lloyd, James E., "Aggressive Mimicry in Photuris Fireflies: Signal Repertoires by Femmes Fatales," Science, 187 (1974), 452–453.
Lo, Andrew W., "Risk Management for Hedge Funds: Introduction and Overview," Financial Analysts Journal, 2001, 16–33.
Massa, Massimo, William N. Goetzmann, and K. Geert Rouwenhorst, "Behavioral Factors in Mutual Fund Inflows," Yale ICF Working Paper 00-14, 1999.
Morningstar (http://corporate.morningstar.com, 2006).
Olszewski, Wojciech, and Alvaro Sandroni, "Manipulability and Future-Independent Tests," Econometrica, 76 (2008), 1437–1480.
Sandroni, Alvaro, "The Reproducible Properties of Correct Forecasts," International Journal of Game Theory, 32 (2003), 151–159.
Sandroni, Alvaro, Rann Smorodinsky, and Rakesh Vohra, "Calibration with Many Checking Rules," Mathematics of Operations Research, 28 (2003), 141–153.
Sirri, Erik R., and Peter Tufano, "Costly Search and Mutual Fund Inflows," Journal of Finance, 53 (1998), 1589–1622.
Spiegler, Ran, "The Market for Quacks," Review of Economic Studies, 73 (2006), 1113–1131.
Starks, Laura T., "Performance Incentive Fees: An Agency Theoretic Approach," Journal of Financial and Quantitative Analysis, 22 (1987), 17–32.

DOES TERRORISM WORK?∗

ERIC D. GOULD AND ESTEBAN F. KLOR

This paper examines whether terrorism is an effective tool for achieving political goals. By exploiting geographic variation in terror attacks in Israel from 1988 to 2006, we show that local terror attacks cause Israelis to be more willing to grant territorial concessions to the Palestinians. These effects are stronger for demographic groups that are traditionally right-wing in their political views. However, terror attacks beyond a certain threshold cause Israelis to adopt a less accommodating position. In addition, terror induces Israelis to vote increasingly for right-wing parties, as the right-wing parties move to the left in response to terror. Hence, terrorism appears to be an effective strategy in terms of shifting the entire political landscape to the left, although we do not assess whether it is more effective than non-violent means.

I. INTRODUCTION

Terrorism is one of the most important, and yet complex, issues facing a large number of countries throughout the world. In recent years, several papers have analyzed the underlying causes and consequences of terrorism, as well as the strategies used by terror organizations in the pursuit of their goals.1 However, very little attention has been given to the question of whether terrorism works, in the sense of coercing the targeted country into granting political and/or territorial concessions. The lack of research on this subject is surprising, given that the answer to this question is critical to understanding why terror exists at all, and why it appears to be increasing over time in many parts of the world. This paper is the first to analyze whether terrorism is an effective strategy using a large sample of micro data and paying

∗ We thank Robert Barro, Larry Katz, Omer Moav, Daniele Paserman, and the anonymous referees for helpful comments and suggestions. We also benefited from the comments of audiences at Boston University, Tel Aviv University, the University of Chicago, the NBER 2009 Summer Institute (Economics of National Security Group), the conference on "The Political Economy of Terrorism and Insurgency" at UC San Diego, and the 2010 meetings of the American Economic Association. Noam Michelson provided expert research assistance. Esteban Klor thanks the NBER and Boston University for their warm hospitality while he was working on this project. The authors thank the Maurice Falk Institute for Economic Research and the Israeli Science Foundation for financial support.
1. For the causes of terrorism, see Krueger and Maleckova (2003), Li (2005), Abadie (2006), Berrebi (2007), Krueger and Laitin (2008), and Piazza (2008). For the consequences of terrorism, see the recent surveys by Enders and Sandler (2006) and Krueger (2007), as well as Becker and Rubinstein (2008) and Gould and Stecklov (2009), among many others.
For the strategies of terrorist groups, see Kydd and Walter (2002, 2006), Berman and Laitin (2005, 2008), Bloom (2005), Bueno de Mesquita (2005b), Berrebi and Klor (2006), Benmelech and Berrebi (2007), Bueno de Mesquita and Dickson (2007), Rohner and Frey (2007), Baliga and Sjöström (2009), and Benmelech, Berrebi, and Klor (2009).

© 2010 by the President and Fellows of Harvard College and the Massachusetts Institute of Technology. The Quarterly Journal of Economics, November 2010


particular attention to establishing causality.2 To do this, we exploit variation in a large number of terror attacks over time and across locations in Israel from 1988 to 2006, and examine whether local terror attacks cause Israeli citizens to become more willing to grant territorial concessions to the Palestinians. In addition, we examine whether terror attacks cause Israelis to change their preferences over political parties, their attitudes toward establishing a Palestinian state, and whether or not they define themselves as being "right-wing."3 Our results indicate that terror attacks have pushed Israelis leftward in their political opinions by making them more likely to support granting concessions to the Palestinians. As a result, this paper presents the first comprehensive analysis showing that terrorism can be an effective strategy. However, our findings also indicate that terrorism beyond a certain threshold can backfire on the political goals of terrorist factions, by reducing the targeted population's willingness to make political and/or territorial concessions. As stated above, the existing evidence on the effectiveness of terrorism is sparse. In the political science literature, there are currently two opposing views on this issue. The first one claims that terrorism is rising around the world simply because it works. Most notably, Pape (2003, 2005) claims that terrorists achieved "significant policy changes" in six of the eleven terrorist campaigns that he analyzed. In addition, Pape (2003, 2005) argues that terrorism is particularly effective against democracies because the electorate typically is highly sensitive to civilian casualties from terrorist acts, which induces its leaders to grant concessions to terrorist factions. Authoritarian regimes, in contrast, are responsive only to the preferences of the ruling elite, and therefore are less likely to accede to terrorist demands in response to civilian casualties.4

2. As Abrahms (2007) points out, the effectiveness of terrorism can be measured in terms of its "combat effectiveness" and its "strategic effectiveness." The former refers to the amount of physical damage and the number of human casualties resulting from terror activity, whereas the latter refers to whether terror is able to achieve political goals. The focus of our research is to assess the "strategic effectiveness" of terror.
3. The main policy difference between the "left-wing" and "right-wing" in the Israeli context is related to the conflict with the Palestinians, and their attitudes toward the Arab world, rather than to social and economic issues.
4. To provide empirical support for this theory, Pape (2003, 2005) shows that democracies are disproportionately more likely to be the victim of an international suicide terror attack. Karol and Miguel (2007) provide empirical support for voters' sensitivity to casualties by showing that American casualties in Iraq caused George Bush to receive significantly fewer votes in several key states in the 2004


The opposing view argues not only that there is very little evidence that terrorism is effective (Abrahms 2006), but that in fact terror is not an effective tool.5 Abrahms (2006) examined 28 terrorist groups, and argues that terrorists achieved their political goals only 7% of the time (in contrast to the more than 50% success rate reported in Pape [2003] with a different sample). Moreover, he argues that terrorism against democracies is ineffective because democracies are more effective at counterterrorism. In support of this claim, Abrahms (2007) presents evidence that democracies are less likely to be the target of terror activities than autocratic regimes, and that democracies are less likely to make territorial or ideological concessions. Using the Worldwide Incidents Tracking System database of international and domestic terror incidents from 2004 to midway through 2005, Abrahms (2007) shows that terror incidents decline with the level of a country's "freedom index," and that the freedom index is uncorrelated with the level of casualties from terror. In particular, Abrahms (2007) shows that, among the ten countries with the highest numbers of terror casualties, only two are "free countries" (India and the Philippines), whereas the rest are "not free" (Iraq, Afghanistan, Russia, and Pakistan) or "partially free" (Nigeria, Nepal, Colombia, and Uganda).6 This evidence leads Abrahms (2007) to conclude that terrorism is not an effective strategy against democratic countries. Thus, a summary of the literature reveals that very few studies have even attempted to analyze the strategic effectiveness of terrorism, and there is little agreement among those that have. Whether terror is effective is therefore not only important for understanding why terrorism exists; it also remains very much an open question empirically.
Furthermore, as described above, the existing evidence is based on analyzing a small sample of countries and making assessments about the success of

elections. Weinberg and Eubank (1994) and Eubank and Weinberg (2001) also show that democracies are more likely to host terror groups and be the target of terror attacks. They claim that terrorism is particularly effective against democracies due to constitutional constraints that limit policies of retaliation against terrorism in these types of regimes.
5. Abrahms (2007) criticizes the analysis in Pape (2003) for being based on very few countries. Out of the eleven terrorist campaigns that Pape (2003) analyzed, six were based in Israel, whereas four of the remaining five were in Turkey or Sri Lanka.
6. The evidence in Abrahms (2007) notwithstanding, the findings of Abadie (2006), Blomberg and Hess (2008), and Krueger and Laitin (2008) suggest that political reforms toward more democratic forms of government are associated with an increase in terrorist activity. Jackson Wade and Reiter (2007) dispute this claim.


terror campaigns against them (Pape 2003, 2005; Abrahms 2006, 2007). However, comparisons across countries are problematic for a number of reasons. First, it is difficult to control for all the factors that may be correlated with the level of terrorism, political stability, level of freedom, etc. All of these factors are most likely to be endogenously determined, and jointly influenced by geography, colonial history, ethnic composition, and religious affiliation. Second, terrorist groups may be emerging endogenously in certain countries according to the success rate of other strategies, and according to the expected success rate of terrorist strategies (Iyengar and Monten 2008). In addition, one cannot ignore the fact that most of the countries (listed above) that suffer high levels of terror are near each other geographically and share similar characteristics in terms of long-standing border conflicts intermixed with ethnic and religious tensions. Controlling for these factors is difficult to do in a cross section of countries, making it problematic to infer causality from the existing evidence. Finally, it is often difficult to assess whether terror is effective when the goals of the terrorists are not even clear. For example, it is not easy to define the political goals of the September 11 attacks (Byman 2003). Therefore, it is nearly impossible to apply a standard definition of “success” in comparing terrorist groups across different countries. In this paper, we overcome the empirical obstacles mentioned above by focusing on the Israeli–Palestinian conflict and using individual-level data on the political attitudes of Israelis toward making concessions to the Palestinians. Our focus on one conflict allows us to abstract from the empirical difficulties associated with controlling for all the factors that could be influencing the presence and effectiveness of terror across countries. 
In addition, restricting our analysis to one conflict enables us to avoid the difficult task of trying to create objective and consistent measures of whether terror is effective across different conflicts, which are often not comparable to one another.7

7. Terror factions are intricate and multifaceted organizations. There is a growing consensus that terror organizations strategically choose their targets and their operatives (Berman and Laitin 2005, 2008; Bueno de Mesquita 2005b; Benmelech and Berrebi 2007; Benmelech, Berrebi, and Klor 2009). The main goals behind terror campaigns, however, are not always clear or well defined, and seem to differ not only across conflicts, but even over time for any given terror organization. Alternative goals notwithstanding, the main objective of the majority of terror campaigns is to impose costs on the targeted population to pressure a government into granting political and/or territorial concessions. A large number of articles can be cited in support of this claim. For formal theoretical models, see Lapan and Sandler (1993), Bueno de Mesquita (2005a), and Berrebi and Klor (2006).


Using repeated cross sections of Jewish Israelis (not including those in the West Bank or Gaza Strip) from 1988 to 2006, we control for subdistrict fixed effects and aggregate year effects, and test whether variation in the level of terror across subdistricts over time can explain variation across subdistricts over time in political attitudes. We pay particular attention to distinguishing between political attitudes and party preferences, which is important because the platforms of parties could be endogenously changing in response to terror. Our results show that terrorism significantly affects the preferences and attitudes of Jewish Israelis. In particular, local terror attacks induce the local population to exhibit a higher willingness to grant territorial concessions. However, the effects of terrorism are nonlinear—terror makes Israelis more accommodating up to a certain point, but beyond this threshold, more terror attacks harden the stance of Israelis toward making concessions. That said, the level of terror fatalities rarely reaches the critical threshold in any given locality. Out of 102 subdistrict–year combinations in our data set, there are only seven cases where the marginal effect was negative (Jerusalem in 1996 and 2003 and Afula, Hadera, Sharon, Tel Aviv, and Zefat in 2003), and only one case (Jerusalem in 2003) where the estimated total effect is negative. As a result, the total effect of terror on the preferences of the Israeli population is almost always toward moderation. Hence, terror attacks appear to be strategically effective in coercing Israelis to support territorial concessions. At the same time, our analysis shows that terror increases the likelihood that voters support right-wing parties (a result similar to that of Berrebi and Klor [2008]). This result does not contradict our finding that terror causes moderation.
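To see how a nonlinear specification can produce a negative marginal effect at high terror levels while the total effect remains positive (as in the seven subdistrict–year cases above), consider a stylized quadratic response; the coefficients below are purely illustrative assumptions, not estimates from the paper:

```python
# Stylized quadratic response of pro-concession attitudes to local terror
# exposure T: effect(T) = b1*T + b2*T**2 with b1 > 0 and b2 < 0.
b1, b2 = 10.0, -50.0

def total_effect(T):
    return b1 * T + b2 * T**2

def marginal_effect(T):
    return b1 + 2.0 * b2 * T

threshold = -b1 / (2.0 * b2)   # marginal effect turns negative past T = 0.1

T_low, T_high = 0.02, 0.11
assert marginal_effect(T_low) > 0    # moderate terror -> more accommodation
assert marginal_effect(T_high) < 0   # beyond the threshold, attitudes harden
assert total_effect(T_high) > 0      # yet the total effect can stay positive
```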
The evidence suggests that terrorism brought about a leftward shift of the entire political map in Israel over the last twenty years, including the positions of right-wing parties that are traditionally less willing to grant territorial concessions to the Palestinians. This finding highlights how critical it is to distinguish between the effects of terror on political attitudes and its effects on party preferences, because the platforms of parties are moving endogenously in response to terrorism. Therefore, our overall results show that terrorism has been an effective tool by shifting the entire political landscape toward a more accommodating position. Although we cannot determine whether terrorism is an optimal strategy, these findings suggest that terrorism may be increasing over time and spreading to other regions precisely because


it appears to be a successful strategy for achieving political goals.

II. THE DATA

II.A. Data on the Political Attitudes of Israeli Citizens

Our analysis uses data on the political attitudes of Jewish Israeli citizens (who do not reside in Gaza and the West Bank) along with data on the occurrences of terror attacks.8 The data on the attitudes of Israeli citizens come from the Israel National Election Studies (INES), which consist of surveys carried out before every parliamentary election in Israel since 1969.9 These surveys are based on a representative sample of Israeli voters, and focus on a wide array of substantive issues affecting Israeli society. For example, the surveys include questions about economic and political values, trust in government, social welfare, and the desired relationship between state and religion. In addition, there are several questions regarding the political preferences of the respondent and his or her policy position regarding the Israeli–Palestinian conflict. Because our goal is to understand changes over time in the political opinions of Israelis, our analysis focuses on the questions that appear repeatedly across surveys for the six parliamentary elections from 1988 until 2006. These include questions regarding which party the voter is supporting in the upcoming elections and his or her self-described political tendency (from right wing to left wing). In addition, the survey asks whether the respondent favors granting territorial concessions to the Palestinians as part of a peace agreement, and whether Israel should agree to the establishment of a Palestinian state. The surveys also contain a rich set of demographic information, such as gender, age, education level, ethnicity, immigrant status, monthly expenditures, and notably, the location of residence for each respondent. This geographic information is particularly important for our identification strategy because we do not

8. We omit Arab Israelis and Jewish Israeli citizens residing in Gaza and the West Bank because these populations were not consistently included in the surveys. The survey includes Arab Israelis only starting from 1996 and Jewish settlers only since 1999.
9. The INES questionnaires and data are available online at the INES website (www.ines.tau.ac.il). See Arian and Shamir (2008) for the latest edited volume of studies based on the INES data.


want to rely on aggregate time trends to identify the causal effect of terror on political attitudes. Instead, we control for aggregate time trends and exploit the geographic variation in terror attacks across eighteen different subdistricts to explain the changes in political attitudes across locations. Table I presents summary statistics for the attitudes of Jewish Israeli citizens, computed separately for each sample year. The main variable of interest refers to the respondent’s willingness to make territorial concessions to the Palestinians. This question appears in every survey, though not in the same format. In the surveys from 1988 and 1992, individuals were asked to consider three options regarding the long-term solution for the West Bank and Gaza Strip. We coded the person as being willing to make concessions if he or she chose the option: “In return for a peace agreement, a return of most of Judea, Samaria and the Gaza Strip.”10 In the surveys from 1996 and 1999, individuals were asked to rank from one to seven how much they agree (one refers to “strongly disagree” and seven refers to “strongly agree”) with the statement “Israel should give back territories to the Palestinians for peace.” We coded individuals as being willing to make concessions if they chose five or above. Finally, in 2003 and 2006, individuals were given four options regarding their opinion on “To what extent do you agree or disagree to exchange territories for peace?” The four options were strongly agree, agree, disagree, and strongly disagree. We coded individuals as being willing to make concessions if they responded with “agree” or “strongly agree.” This variable is our main variable of interest because it unequivocally captures the respondent’s willingness to grant territorial concessions to the Palestinians, which is consistent with the goals of the terrorist factions. 
For example, Abdel Karim Aweis, a leader of the Al Aksa Martyrs Brigades (one of the factions linked to the Fatah movement), asserted in an interview with the New York Times that the "goal of his group was to increase losses in Israel to a point at which the Israeli public would demand a withdrawal from the West Bank and Gaza Strip" (Greenberg 2002). Table I shows an upward trend over time in the willingness of Israelis to make concessions—from 39% in 1988 to 57% in 2006. However, because there were changes in the structure

10. The other two options available in the survey are "Annexation of Judea, Samaria and the Gaza Strip" and "Status quo, keeping the present situation as it is."
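The wave-by-wave coding rules described above can be summarized in a short sketch; the response labels below are hypothetical stand-ins for the actual survey codes:

```python
# Hypothetical coding of the "willing to make territorial concessions"
# indicator across survey waves, following the rules described in the text.
def willing_to_concede(year, response):
    if year in (1988, 1992):
        # Three long-term options; one corresponds to returning territories.
        return response == "return most territories for peace"
    if year in (1996, 1999):
        # Seven-point agreement scale; 5 or above is coded as willing.
        return int(response) >= 5
    if year in (2003, 2006):
        # Four-point scale; "agree" or "strongly agree" is coded as willing.
        return response in ("agree", "strongly agree")
    raise ValueError(f"no survey wave in {year}")

assert willing_to_concede(1996, 5) is True
assert willing_to_concede(2003, "disagree") is False
```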

TABLE I
ATTITUDES TOWARD THE CONFLICT, SUPPORT FOR DIFFERENT POLITICAL PARTIES, AND TERROR FATALITIES, BY YEAR

                                                          1988     1992     1996     1999     2003     2006
Political attitudes
  Agree to territorial concessions to the Palestinians    0.395    0.498    0.428    0.502    0.550    0.568
                                                         (0.489)  (0.500)  (0.495)  (0.500)  (0.498)  (0.495)
  Agree to the establishment of a Palestinian state
    in the territories as part of a peace settlement      0.260    0.290    0.482    0.554    0.486    0.668
                                                         (0.439)  (0.454)  (0.500)  (0.497)  (0.500)  (0.471)
  Right-wing political tendency                           0.504    0.427    0.393    0.389    0.517    0.413
                                                         (0.500)  (0.495)  (0.489)  (0.488)  (0.500)  (0.493)
  Vote for right bloc of political parties                0.529    0.443    0.438    0.380    0.463    0.328
                                                         (0.499)  (0.497)  (0.496)  (0.486)  (0.499)  (0.469)
N                                                          873     1192     1168     1060     1058     1505
Terror fatalities
  Number of terror fatalities since previous elections      25       78      141       44      408      198
  Number of terror fatalities within a year of elections     6       11       71        2      275       19
  Number of terror fatalities per capita since previous
    elections (per 1,000 individuals)                     0.0063   0.0179   0.0290   0.0077   0.0626   0.0294
  Number of terror fatalities per capita within a year
    of elections (per 1,000 individuals)                  0.0015   0.0025   0.0146   0.0004   0.0422   0.0028

Notes: Entries in the first four rows of the table represent the averages of the respective variables for each survey. Standard deviations appear in parentheses. The number of observations refers to the total number of Israeli Jewish individuals (who do not reside in Gaza or the West Bank) interviewed in each survey. The exact number of observations for each variable varies slightly because not all respondents answered each question. Source: Israel National Election Studies (INES). The last four rows report the number of fatalities from terror attacks, and the number of fatalities per capita (per 1,000 individuals) from terror attacks. Source: B'tselem.

DOES TERRORISM WORK?

1467

of the question over time, we employ several strategies to show that our results do not come from those changes. First, all of the regressions control for year effects, which should neutralize aggregate year-specific differences in how individuals interpreted the question. Second, because the major change to the wording occurred between 1992 and 1996, we show that the results are virtually identical when all of the surveys are used (1988–2006), or when the analysis is restricted to periods when there was very little change in the question (1996–2006) or no change at all (2003–2006). Third, it is not entirely clear whether those who responded with a "four" on the seven-point scale in 1996 and 1999 should be considered willing to make concessions or not. Therefore, we show that the results are very similar whether we code them as willing or as unwilling to make concessions.
Table I also shows the evolution over time of the other variables used to measure the reaction of Israelis to terror attacks. One measure is the person's willingness to agree to the establishment of a Palestinian state in the territories as part of a peace settlement. This question included four options (strongly agree, agree, disagree, and strongly disagree) regarding the person's willingness to "establish a Palestinian state as part of a permanent solution to the conflict." We divided the sample into two groups by coding together individuals who agree or strongly agree with the creation of a Palestinian state, versus individuals who disagree or strongly disagree with this position. Table I shows that the proportion of individuals who agree or strongly agree with the creation of a Palestinian state increases from 0.26 in 1988 to 0.67 in 2006.11
The third variable in Table I refers to the respondent's self-classification across the left–right political spectrum.
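The two-way treatment of the ambiguous midpoint response described above can be sketched as follows. This is a minimal illustration only; the scale direction and numeric codes are assumptions for the example, not the actual INES coding:

```python
def code_concessions(response, midpoint_willing):
    """Dichotomize a 7-point concessions item, assuming 1 = most willing
    and 7 = least willing to make concessions. Responses 1-3 are coded
    willing, 5-7 unwilling; the ambiguous midpoint 4 is coded according
    to `midpoint_willing`, so the robustness check can be run both ways."""
    if response <= 3:
        return 1
    if response == 4:
        return 1 if midpoint_willing else 0
    return 0

responses = [1, 4, 6, 3, 4, 7]  # hypothetical survey answers
# The two codings differ only in how midpoint responses are classified;
# the paper reports that results are similar under either coding.
lenient = [code_concessions(r, midpoint_willing=True) for r in responses]
strict = [code_concessions(r, midpoint_willing=False) for r in responses]
```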
If the respondent defined himself or herself as being on the "right" or "moderate right" end of the spectrum, then he or she was coded as identifying with a right-wing political tendency.12 Table I depicts a generally downward trend in the percentage of self-described "right-wingers" between 1988 and 2006, although there was a short-lived increase from 1999 to 2003.
Finally, our last outcome measure depicts whether the individual intends to vote for a party belonging to the "right-wing bloc" in the upcoming parliamentary elections. The surveys ask every individual: "If the elections for the Knesset (Israeli parliament) were held today, for which party would you vote?" We assign parties to the right bloc following the categorization developed by Shamir and Arian (1999). According to their definition, the right bloc of parties includes the Likud party, all of the religious parties, all of the nationalist parties (Tzomet, Moledet, National Union), and parties identified with Russian immigrants. We choose to focus on the right bloc instead of on separate parties because the number of small parties and the electoral system were not constant across each election period.13
Table I shows that support for the right bloc fluctuates over time in a fashion similar to the self-described right-wing political tendency. We observe a steady decrease in the support for the right bloc between 1988 and 1999, with an increase in 2003 followed by a sharp decrease in 2006 (due to the appearance of Kadima, a new centrist party, in those elections).
Table II depicts the political attitudes of respondents tabulated by their demographic characteristics. The table shows that (i) men and women share similar political preferences; (ii) the willingness to make concessions (and other left-leaning views) increases with age, with education, and with the degree of being secular (versus religious); (iii) individuals with an Asian–African family background (Sephardic Jews) are more likely to oppose concessions and support parties in the right bloc; and (iv) there are no clear differences between the attitudes of immigrants and of native-born Israelis.
11. One possible caveat of this question is that the survey does not provide a clear definition of "territories." As a consequence, respondents may interpret territories as areas already under the control of the Palestinian Authority. If that is the case, for these respondents, the creation of a Palestinian state does not really entail any further territorial concessions. In our sample, 25% of the individuals who agree to the establishment of a Palestinian state do not agree to further territorial concessions. They compose 12% of the entire sample.
12. The exact wording of the question is, "With which political tendency do you identify?" It included seven possible answers: left, moderate left, center, moderate right, right, religious, and none of them. We classified an individual as identifying with the right-wing political tendency if the individual's answer to this question was "moderate right" or "right."
13. In contrast to the other elections, the parliamentary elections of 1996 and 1999 allowed split-ticket voting, whereby each voter cast a ballot in support of a political party for the parliamentary elections and a different ballot for the elections for Prime Minister. This different system may have had an effect on the relative support obtained by the different parties. Consequently, political preferences in these elections may not be directly comparable at the party level to voter preferences in the parliamentary elections of 1988, 1992, 2003, and 2006.
Overall, the data display a tendency for Israelis to become more accommodating in their views over time—more willing to grant concessions, less "right-wing" in their own self-description,

TABLE II
POLITICAL ATTITUDES BY DEMOGRAPHIC CHARACTERISTICS

                                 Agree to      Agree to     Right-wing   Vote for     Share of
                                 territorial   Palestinian  political    right bloc   sample
                                 concessions   state        tendency     of parties   population
All                                0.497         0.473        0.440        0.421        1.00
Gender
  Male                             0.48          0.47         0.43         0.41         0.51
  Female                           0.51          0.47         0.46         0.43         0.49
Age
  15–29                            0.43          0.41         0.48         0.47         0.32
  30–45                            0.51          0.46         0.45         0.43         0.30
  46 and older                     0.55          0.54         0.39         0.37         0.37
Years of schooling
  Elementary and secondary         0.44          0.40         0.49         0.48         0.57
  Higher education                 0.58          0.58         0.36         0.35         0.43
Religiosity
  Secular                          0.57          0.55         0.37         0.33         0.74
  Observant                        0.28          0.25         0.64         0.69         0.26
Place of birth
  Immigrant                        0.48          0.49         0.42         0.42         0.39
  Native Israeli                   0.51          0.46         0.45         0.42         0.61
Ethnic background
  African–Asian ethnicity          0.41          0.37         0.54         0.53         0.37
  Non-African–Asian ethnicity      0.54          0.53         0.37         0.36         0.63
Household expenditures
  Less than average                0.44          0.44         0.47         0.46         0.39
  About average                    0.50          0.46         0.45         0.43         0.34
  More than average                0.57          0.54         0.39         0.36         0.27

Note. Entries in the table show the means over the entire sample period. Source. Authors' calculations using survey data from INES.

FIGURE I Agree to Concessions over Time: All Right-Leaning Israelis

and more amenable to the creation of a Palestinian state. Interestingly, an increase in the willingness to grant concessions occurred even among individuals who consider themselves "right-wing." This pattern is shown in Figures I and II. Although there were changes over time in the composition of people who define themselves as being right-wing, these figures highlight the general shift in the political landscape toward a more accommodating position regarding Palestinian demands. The question we address is whether this shift is causally related to the terrorist tactics employed by various Palestinian factions.

II.B. Data on Israeli Fatalities in Terror Attacks

Information on Israeli fatalities from terror attacks is taken from B'tselem, an Israeli human rights organization. B'tselem's data (thought to be accurate, reliable, and comprehensive) are widely used in studies focusing on the Israeli–Palestinian conflict (Becker and Rubinstein 2008; Jaeger and Paserman 2008; Jaeger et al. 2008; Gould and Stecklov 2009; and others). The data include information on the date, location, and circumstances of each terror attack, which allows us to classify every Israeli fatality according


FIGURE II Agree to Concessions over Time: All Left-Leaning Israelis

to the subdistrict where the incident took place. Our measure of fatalities includes only civilian (noncombatant) casualties that did not occur in the West Bank or Gaza Strip. There is substantial time series and geographic variation in Israeli fatalities, which has been used in many of the papers cited above to identify the effect of terror on other outcomes. Figure III and Table A.1 in the Online Appendix depict the total number of fatalities across subdistricts, and show that terror factions especially targeted the most populated subdistricts of the country (Jerusalem, Tel Aviv, and Haifa). In addition, subdistricts such as Hadera and Afula, which are close to particularly radical cities under the control of the Palestinian Authority (e.g., Jenin), suffer from a higher than average level of terror fatalities. Table I presents the number of Israeli fatalities over time. The most violent period occurred between 1999 and 2003, which coincided with the outbreak of the second Palestinian uprising. Overall, there seems to be an upward trend in terror activity over time, and this coincided with the shift in the political landscape toward a higher willingness to make concessions. However, these two trends are not necessarily causally related. For this reason,


FIGURE III Distribution of Terror Fatalities across Subdistricts Total number of terror fatalities across subdistricts between July 23, 1984 (date of 1984 parliamentary elections), and March 28, 2006 (date of 2006 parliamentary elections). Adapted by permission from the Central Bureau of Statistics, Israel, map.


FIGURE IV Agree to Concessions and Terror Fatalities: Changes from 1988 to 2003

our strategy is to exploit geographic variation in the trends over time across locations, rather than looking at the aggregate trends. Figure IV presents a first look at whether the increase in fatalities per capita within a subdistrict between 1988 and 2003 (the peak year of terror) is correlated with the average change in political views within each subdistrict. The line in Figure IV is the fitted quadratic curve estimated by OLS using the sample of subdistricts depicted in the figure. The figure displays a positive relationship between the change in fatalities per capita in a subdistrict and the change in the average willingness to grant concessions within that subdistrict. However, the relationship seems to get weaker at higher levels of terror. This nonlinear pattern is also found in Figures V and VI, which show the relationship between changes in local terror fatalities per capita and changes in the other outcomes: support for a Palestinian state and support for the right-wing parties. These patterns are all consistent with the idea that terror has induced Israelis to become more accommodating to Palestinian interests, while increasing their support for the right bloc. But these figures are just a cursory look at the


FIGURE V Support for Palestinian State and Terror Fatalities: Changes from 1988 to 2003

broad patterns in the data. The next section presents our main empirical strategy.

III. EMPIRICAL STRATEGY

Our empirical strategy is designed to identify the causal effect of terrorism on the political preferences of the Jewish Israeli population. Our unit of observation is the individual, and we model his or her views as a function of his or her personal characteristics and location of residence, the survey year, and the level of recent terror activity in the individual's subdistrict. Specifically, we estimate the following linear regression model:

(1)  view_ijt = α1 · terror_jt + α2 · terror_jt² + β · x_ijt + γ_t + μ_j + ε_ijt,

where view_ijt is a dummy variable equal to one if individual i who lives in subdistrict j in year t holds an accommodating position toward Palestinian demands, and zero otherwise; terror_jt is the number of terror fatalities per capita in subdistrict j before the elections in t; γ_t is a fixed effect for each election year that controls for aggregate trends in political preferences; μ_j is


FIGURE VI Support for Right Bloc of Political Parties and Terror Fatalities: Changes from 1988 to 2003

a fixed effect unique to subdistrict j; and x_ijt is a vector of individual and subdistrict-level characteristics. These characteristics include all the characteristics listed in Table II—the individual's gender, age (and age squared), years of schooling, schooling interacted with age, level of religious observance, immigrant status, ethnicity (Asian–African origin versus all other groups), level of expenditures (designed to control for income), number of persons in the household, and number of rooms in the individual's house (another proxy for income). The x_ijt vector also includes characteristics that vary at the subdistrict–year level, and are presented in Table III (computed from the Israeli Labor Force Survey). These subdistrict characteristics include the unemployment rate and demographic variables such as the population share by gender, education levels, religiosity, immigrant status, ethnicity, age groups, household size, and marital status. Unobserved determinants of the individual's views are captured by the error term, ε_ijt. The goal of the proposed econometric specification is to identify α1 and α2, which represent the causal effect of local terror activity on an individual's political attitudes. Identification of α1

TABLE III
THE EFFECT OF LOCAL TERROR FATALITIES ON OBSERVABLE CHARACTERISTICS OF THE LOCAL POPULATION

Each column reports, for a separate OLS regression, the linear effect, the quadratic effect, and the joint P-value of terror fatalities per capita within a year of the survey on one observable characteristic of the local population: the population shares of males, ultra-orthodox Jews, age groups (below 30, 30 to 45, above 45), higher education, Asian–African ethnicity, immigrants, and married individuals; the average number of individuals in the household; population size (below 14 years old, above 14 years old, and total); and unemployment. N = 102 in every regression, and no joint effect of terrorism is statistically significant at the 5% level.

Notes. Each column presents the results of a separate OLS regression where the dependent variable, obtained from the Israeli Labor Force Survey, appears at the top of each column. In addition to terror fatalities per capita within a year before the survey, all regressions include subdistrict and year fixed effects. Robust standard errors, adjusted for clustering at the subdistrict–year level, appear in brackets. The P-value for the effect of terrorism tests the hypothesis that the joint effect of all proxies for the severity of terrorism included in each regression are equal to zero. * indicates statistically significant at 10% level; ** indicates statistically significant at 5% level; *** indicates statistically significant at 1% level.

and α2 is based on the idea that terror attacks differentially affect the political views of individuals in relation to their proximity to the attack. This may happen because the salience of the conflict could depend on a person's proximity to terror attacks, or it may be the case that terror attacks impose a higher cost on the local population than on the rest of the country.14 For example, terror attacks pose a greater threat to the personal security of local versus nonlocal residents. Also, terror attacks typically cause local residents to alter their daily routine (modes of transportation, leisure activities, etc.) in costly ways due to the perceived changes in their personal security (Gordon and Arian 2001).15 Based on the concave relationship observed in Figures IV to VI, we allow for a nonlinear effect in equation (1) by including a quadratic term for the local level of terror, but we also estimate models that assume a linear relationship (i.e., restricting α2 = 0). There are several reasons that terror may affect a person's political views. Terror attacks increase the cost of denying terrorist groups what they are seeking, and therefore could cause individuals to become more accommodating toward terrorist demands. On the other hand, terror attacks could increase hatred for the other side or make a peaceful solution appear less plausible, leaving individuals less willing to adopt an accommodating position. Therefore, terror could theoretically produce either a softening or a hardening of one's stance regarding the goals of a terrorist faction, and the effect could be nonlinear if an increase in attacks changed the way an individual viewed the conflict or dealt with terror. For example, the impact of initial attacks, which tend to be more shocking and unexpected, could be substantively different from attacks that occur after individuals have already dealt with several previous attacks.
Additionally, attacks beyond a certain threshold could alter an individual's views about the goals
14. In addition, there is convincing evidence that the local effect of violence is amplified by the coverage of the local media (Karol and Miguel 2007; Sheafer and Dvir-Gvirsman 2010). However, in the Israeli context, where almost all media are at the national level, this mechanism is unlikely to generate substantially different effects across geographic areas.
15. Becker and Rubinstein (2008) show that terror attacks induce a significant decline in bus tickets sold, and in expenditures in restaurants, coffee shops, and pubs. They also find an increase in expenditures on taxis, particularly in large cities, after a bus bombing. Similarly, Spilerman and Stecklov (2009) find that sales in a popular chain of Jerusalem coffee shops decline in the days following attacks, particularly in locations more open to attacks such as those in city centers. Moreover, this decline in sales is larger after more fatal attacks. Hence, the evidence consistently suggests that the effect of terror attacks varies according to the proximity and severity of the attacks.


and rationality of the other side, thus changing a person's willingness to make concessions to terror groups. By including fixed effects for each subdistrict and survey year, we are essentially examining whether changes over time in terror activity within a subdistrict are correlated with the changes over time in political views within that subdistrict, after controlling for the national trend and a rich set of personal and subdistrict-level characteristics. Our identifying assumption in (1), therefore, is that local terror attacks are not correlated with omitted variables that affect political attitudes, and that terrorist groups are not choosing targets based on the trend in local political attitudes (i.e., no reverse causality). To understand this identifying assumption, we sketch out the following conceptual framework for a terrorist group's decision-making process over how much terror to produce in subdistrict j during year t.16 Because terror varies at the subdistrict–year level, we start by aggregating our empirical model, (1), to that level by using the mean of each variable by subdistrict and year:

(2)  view_jt = α1 · terror_jt + α2 · terror_jt² + β · x_jt + γ_t + μ_j + ε_jt,

where view_jt is the share of individuals in subdistrict j at time t who hold an accommodating position toward Palestinian demands, and the coefficients α1, α2, and β are assumed to be fixed over time and across subdistricts. We assume that the cost of producing terror in subdistrict j increases with the number of terror attacks per capita and the local population size, N_jt:

(3)  cost_jt = N_jt[λ0_jt + λ1_jt · (terror attack)_jt + λ2_jt · (terror attack)_jt²],

where the coefficients λ0_jt, λ1_jt, and λ2_jt may vary across subdistricts and over time.17 The relationship between terror attacks per capita and terror fatalities per capita is given by

(4)  terror_jt = δ · (terror attack)_jt + v_jt,

16. We thank Robert Barro for suggesting the following framework. 17. The assumption that the cost of producing terror attacks increases with a subdistrict’s population size is consistent with the observation that terror organizations assign more highly skilled terrorists to attack more populated areas. The strategic assignment of terrorists may reflect differences in the value of attacking each target (Benmelech and Berrebi 2007), or may be indicative of the terrorists’ response to an optimal counterterrorism policy that raises the failure probability of attacks to valuable targets (Powell 2007).


where the random term v_jt captures the idea that the relationship between terror attempts and the resulting number of fatalities is not predetermined. Terrorists care about the total number of individuals willing to make concessions to them at time t, net of the costs of producing terror. Formally, they maximize Σ_j [θ_t N_jt · view_jt − cost_jt], where θ_t > 0 captures the idea that the overall payoff to producing terror may change over time due to changes in political developments. The optimal level of terror fatalities in subdistrict j at time t is obtained by equating the marginal cost to the marginal benefit of terror, represented by

(5)  terror_jt = max{δ · (θ_t · α1 · δ − λ1_jt)/[2 · (λ2_jt − δ² · θ_t · α2)] + η_jt, 0},

where η_jt = λ2_jt · v_jt/(λ2_jt − δ² · θ_t · α2) is a stochastic shock. Therefore, the optimal level of terror in subdistrict j at time t is determined not only by the parameters governing the political response in (2) that we want to estimate (α1 and α2), but also by the cost parameters of producing terror in subdistrict j over time (λ1_jt and λ2_jt) and the random outcome of planned attacks (η_jt). Assuming that η_jt is independently determined by the random circumstances surrounding each attack, estimation of our parameters of interest (α1 and α2) in (1) and (2) yields consistent coefficients if the unobserved political preferences in subdistrict j, ε_jt, are uncorrelated with changes over time in the costs of producing terror, λ1_jt and λ2_jt. The costs of producing terror in a given area could be changing over time due to the building of the security wall between Israel and the Palestinian territories, or due to policy changes regarding border closures, police presence, and the deployment of security guards at restaurants, schools, and buses. To the extent that these changes are occurring at the aggregate level, they will be absorbed by the aggregate time effects in (2).
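For completeness, (5) follows in one step from the first-order condition of the terrorists' problem. The following sketch writes a for attacks per capita in subdistrict j and sets the shock v_jt aside, since it only enters through η_jt:

```latex
% Terrorists choose attacks per capita a to maximize, in each subdistrict,
%   \theta_t N_{jt}\, view_{jt} - cost_{jt},
% where, by (2) and (4), view_{jt} = \alpha_1 \delta a + \alpha_2 \delta^2 a^2 + \cdots
% and, by (3), cost_{jt} = N_{jt}(\lambda_{0jt} + \lambda_{1jt} a + \lambda_{2jt} a^2).
% The first-order condition equates marginal benefit and marginal cost:
\theta_t \left( \alpha_1 \delta + 2 \alpha_2 \delta^2 a \right)
   = \lambda_{1jt} + 2 \lambda_{2jt} a
\;\Longrightarrow\;
a^{*} = \frac{\theta_t \alpha_1 \delta - \lambda_{1jt}}
             {2\left( \lambda_{2jt} - \delta^2 \theta_t \alpha_2 \right)}.
% Multiplying a^{*} by \delta (fatalities per attack), adding the shock
% \eta_{jt}, and truncating at zero gives equation (5).
```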
However, some of these preventive efforts may be differentially changing over time across subdistricts according to the local level of terror. For example, it is likely that Israeli authorities would set a higher priority to beefing up security in areas that historically have been targeted more frequently (i.e., Jerusalem, Tel Aviv, Haifa, Netanya, etc.). It is not clear why these changes would be systematically correlated with changes in unobserved political preferences within an area, which would violate our identifying assumption, but one possibility is that individuals with certain views may be differentially


migrating away from heavily targeted areas to safer areas, or there might be a correlation purely by coincidence. To address this issue, we perform a set of balancing tests to examine whether there is a systematic relationship between the observable characteristics of the local population and the local level of terror. More specifically, we test whether there is a linear or nonlinear relationship between terror_jt and the variables contained in x_jt. If there is no relationship between terror and observable factors that affect political preferences, then it seems reasonable to assume that terror_jt is not correlated with unobservable political preferences, ε_jt, which is the condition needed to obtain consistent estimates of α1 and α2. Table III presents this analysis by regressing various characteristics of the local population on the local level of recent terror activity, while controlling for subdistrict and year fixed effects. The inclusion of fixed effects by year and subdistrict allows us to test whether changes in the characteristics of the local population over time vary systematically with the local level of recent terror activity. The subdistrict-level characteristics used in Table III capture the main demographic and economic characteristics of the local population available in the Israel Labor Force Survey of each election year. In addition, Table III examines whether local terror is related to the size of the local population, which sheds light on whether terror induces overall out-migration from areas with high levels of attacks. The results in Table III show that terror is not significantly related to population size, which suggests that terror does not induce Israelis to migrate to calmer areas.
In addition, terrorism is not correlated with changes in the demographic composition or unemployment rate of the subdistrict.18 In particular, recent levels of terrorism are not correlated with the percentage of the subdistrict’s population that is ultra-orthodox or from an Asian– African background—two groups that are typically more rightwing in their views. The fact that terror is not correlated with observable characteristics that are strong predictors of political views supports our assumption that terror is not correlated with unobservable factors

18. The unemployment rate is defined as the number of Jewish males over the age of 24 who are in the labor force but have not worked for the past twelve months. Similar insignificant results are obtained if we include those out of the labor force (but not in school) as being unemployed.


that affect an individual’s political preferences.19 In addition, the evidence in Table III provides support for our assumption that there is no reverse causality in equation (1)—terror groups do not target areas according to changes in local demographic characteristics that affect political preferences, and therefore, it seems unlikely that terror groups are targeting areas based on the local trends in political preferences.20 To provide further support for our assumption that there is no reverse causality, Table IV examines whether the political views of individuals within a locality are correlated with local levels of terror in the next election cycle: 2

(6) terror jt+1 = π1 · view jt + π2 · view jt + π3 · x jt + γt + μ j + τ jt , where terror jt+1 measures the number of fatalities per capita in subdistrict j between parliamentary elections in years t and t + 1; view jt is the share of residents with an accommodating view in subdistrict j in the survey taken before parliamentary elections in year t; γ t is a fixed effect for each election year; μ j is a fixed effect unique to subdistrict j; and x jt is a vector of demographic characteristics in subdistrict j before the elections in year t. The estimation of equation (6) is done in Table IV, which tests for a linear or nonlinear effect of current views on attacks in the next period, as well as testing the robustness of the relationship to the inclusion of additional controls. Also, the last column regresses terror jt+1 on the first difference in political views (view jt − view jt−1 ) and its square, in order to test whether terror groups target areas that recently underwent a specific type of change in political attitudes. Table IV performs this analysis using all five ways of measuring political views and preferences, and shows that there is no significant relationship between changes in local political views and terror attacks in the next period. Given that terror groups are not using recent changes 19. The analysis in Berrebi and Klor (2008), based on 240 municipalities and local councils, is consistent with this conclusion. They show that terrorism did not affect net migration across localities or political participation of the electorate during the period at issue. 20. Reverse causality also appears unlikely based on theoretical and practical grounds. Theoretically, it is not clear why terror groups would target particular areas based on the contemporaneous changes in their political attitudes. 
In practice, it would be very hard to do so, given that the information on the trends in political attitudes is not widespread, and only becomes available in the future after an election, and the election survey, takes place. This makes it hard for groups to target areas based on current changes in local political views.
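The reverse-causality check in equation (6) amounts to a joint test that π1 = π2 = 0. A minimal sketch of such a test on simulated data follows; the sample sizes are illustrative and it uses a classical F-test rather than the clustered standard errors of Table IV:

```python
import numpy as np

rng = np.random.default_rng(0)
J, T = 17, 5                       # subdistricts x election cycles (illustrative)
j = np.repeat(np.arange(J), T)
t = np.tile(np.arange(T), J)

# Simulate data satisfying the null of no reverse causality: next-period
# terror depends only on subdistrict and year effects, not on current views.
view = rng.uniform(0.2, 0.8, J * T)
terror_next = 0.05 * j + 0.1 * t + 0.3 * rng.standard_normal(J * T)

def design(with_views):
    cols = [np.ones(J * T)]
    cols += [(j == a).astype(float) for a in range(1, J)]  # subdistrict FE
    cols += [(t == a).astype(float) for a in range(1, T)]  # year FE
    if with_views:
        cols += [view, view ** 2]
    return np.column_stack(cols)

def rss(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return resid @ resid

X_r, X_u = design(False), design(True)
rss_r, rss_u = rss(X_r, terror_next), rss(X_u, terror_next)
q, dof = 2, J * T - X_u.shape[1]
F = ((rss_r - rss_u) / q) / (rss_u / dof)  # joint F-test of view and view^2
```

Under the null the F-statistic should be small; a large value would signal that current attitudes predict future terror, violating the identifying assumption.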

P-value for effect of political attitudes N

Quadratic effect

P-value for effect of political attitudes N Right wing political tendency Linear effect

Quadratic effect

P-value for effect of political attitudes N Support for creation of a Palestinian state Linear effect

Quadratic effect

Support for granting territorial concessions Linear effect

0.0046 [0.113] 0.0180 [0.132] .7629 87 0.0337 [0.079] 0.0335 [0.094] .1302 87

−0.0019 [0.092] 0.0497 [0.120] .1974 87 −0.0173 [0.075] 0.0364 [0.073] .7287 87

0.0405 [0.024]

0.0158 [0.027]

87

87

87

0.0383 [0.103] −0.0732 [0.123] .6237 87

Adding survey data (3)

0.0450 [0.118] −0.0272 [0.160] .5534 87

Nonlinear specification (2)

0.0227 [0.027]

Linear specification (1)

0.1313 [0.129] −0.0631 [0.141] .0656 87

−0.0150 [0.118] 0.0059 [0.127] .9636 87

−0.0415 [0.101] −0.0271 [0.129] .3407 87

Adding subdistrict characteristics (4)

TABLE IV THE EFFECT OF LOCAL POLITICAL ATTITUDES ON FUTURE LOCAL LEVELS OF TERROR FATALITIES PER CAPITA

0.0239 [0.174] −0.0301 [0.153] .9458 67

1482 QUARTERLY JOURNAL OF ECONOMICS

TABLE IV (CONTINUED)

Columns: (1) Linear specification; (2) Nonlinear specification; (3) Adding survey data; (4) Adding subdistrict characteristics; (5) First differences.

[First-differences (column (5)) entries continuing the panels from the previous page:
Linear effect −0.0899 [0.071], quadratic effect 0.0651 [0.102], P-value .3365, N 67;
linear effect −0.2148∗ [0.118], quadratic effect 0.2329 [0.138], P-value .2075, N 67.]

                                      (1)        (2)        (3)        (4)        (5)
Factor analysis using support for Palestinian state and right-wing tendency
  Linear effect                     0.0086     0.0103    −0.0285    −0.0105    −0.0051
                                   [0.014]    [0.015]    [0.017]    [0.015]    [0.014]
  Quadratic effect                             0.0209     0.0245     0.0357     0.0087
                                              [0.026]    [0.031]    [0.027]    [0.028]
  P-value for effect of
    political attitudes                        .6886      .2494      .4138      .8597
  N                                    87         87         87         87         67

Vote for a party in the right bloc
  Linear effect                    −0.0229    −0.0906    −0.0555    −0.0365    −0.0094
                                   [0.039]    [0.166]    [0.132]    [0.137]    [0.145]
  Quadratic effect                             0.0668     0.0803     0.0492     0.0182
                                              [0.146]    [0.142]    [0.136]    [0.147]
  P-value for effect of
    political attitudes                        .8095      .7892      .902       .9663
  N                                    87         87         87         87         67

Notes. Each column in each panel presents the results of a separate OLS regression where the dependent variable is the number of terror fatalities per capita in the next election cycle. In addition to the respective proxy for the preferences of the subdistrict's population listed at the top of each panel, column (1) includes year and subdistrict fixed effects. Column (2) adds to column (1) a quadratic effect of the political preferences. Column (3) adds to column (2) the subdistrict's average for age, schooling, schooling interacted with age, expenditures, number of persons in the household, number of rooms in the household's apartment, religiosity, and the percentages of males, immigrants, individuals coming from the former Soviet bloc of countries, and individuals with Sephardic ethnicity. Column (4) adds to the specification in column (3) subdistrict-specific time trends and the subdistricts' characteristics obtained from the LFS (specified in the note to Table VI). Column (5) presents a regression where all the explanatory variables used in column (4) are first-differenced. Robust standard errors, adjusted for clustering at the subdistrict level, appear in brackets. The P-value for the effect of political attitudes tests the hypothesis that all variables measuring political attitudes included in each regression are jointly equal to zero. ∗ indicates statistically significant at the 10% level; ∗∗ indicates statistically significant at the 5% level; ∗∗∗ indicates statistically significant at the 1% level.

DOES TERRORISM WORK?


in political views to choose their targets, it seems reasonable to assume that they are not using contemporaneous changes in political views to choose their targets either. Overall, the analysis in Tables III and IV shows that terror is not systematically related to observable factors that affect political views, and that political attitudes in the current period are unrelated to terror attacks in the next period. These findings support the identifying assumption that local terror attacks are not correlated with omitted factors that affect the local trend in political views, and that terrorist groups are not choosing their targets based on contemporaneous changes in local political preferences (i.e., no reverse causality).

Furthermore, we make our identifying assumption less restrictive by including subdistrict-specific linear time trends in (1). With these trends included, we do not have to assume that terror is uncorrelated with omitted factors that affect the linear trend in local political preferences. Rather, our assumption is that terror is not correlated with factors that produce deviations from the local linear trend in unobserved political views. In this manner, we examine whether the results are robust to alternative sources of identification.

IV. MAIN RESULTS FOR THE EFFECT OF TERROR ON POLITICAL ATTITUDES

We now analyze the effect of terror on our main outcome variable: a person's willingness to make territorial concessions to the Palestinians. For most of the analysis, we measure the local level of terror by the number of fatalities per capita in the twelve months prior to the elections in year t for each subdistrict j. However, we also present evidence on the robustness of the results to alternative ways of defining the local level of terror activity.
Table V presents the effect of terrorism on a person’s willingness to make territorial concessions, estimated using a linear specification. The sensitivity of the coefficient is examined by adding more control variables to the specification in successive columns. For example, the first column does not include any other controls, the next three columns include fixed effects for each year and subdistrict by themselves and together, the fifth column adds a rich set of personal characteristics, the sixth column adds additional controls at the subdistrict level (those used in Table III), and the final column adds subdistrict-specific linear time trends.
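The role of the subdistrict-specific linear time trends added in the final column can be illustrated with a small simulation. Everything below is a hypothetical sketch with made-up magnitudes, not the paper's data: each subdistrict gets its own fixed effect and its own linear trend, so the terror coefficient is identified only from deviations around those trends.

```python
import numpy as np

# Hypothetical illustration (simulated data): a two-way fixed-effects design
# in which each subdistrict j also receives its own linear time trend.
rng = np.random.default_rng(0)
J, T = 20, 5                                  # subdistricts, election years
j = np.repeat(np.arange(J), T)                # subdistrict index per row
t = np.tile(np.arange(T), J)                  # election-year index per row
terror = rng.exponential(0.02, J * T)         # stand-in for fatalities p.c.

# Outcome: subdistrict effect + subdistrict-specific trend + terror effect.
mu = rng.normal(0.0, 1.0, J)                  # fixed effects mu_j
delta = rng.normal(0.0, 0.1, J)               # trend slopes delta_j
y = mu[j] + delta[j] * t + 3.0 * terror + rng.normal(0.0, 0.01, J * T)

# Design: terror, subdistrict dummies, year dummies (first year dropped),
# and subdistrict-specific trends (the trend for subdistrict 0 is dropped
# to avoid exact collinearity with the year dummies).
D_j = (j[:, None] == np.arange(J)).astype(float)
D_t = (t[:, None] == np.arange(1, T)).astype(float)
trends = D_j[:, 1:] * t[:, None]
X = np.column_stack([terror, D_j, D_t, trends])

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(round(beta[0], 2))                      # recovers the terror effect
```

The point of the sketch is only that, once the trend columns are in the design matrix, any confounder that moves linearly within a subdistrict is absorbed, which is the weaker identifying assumption described above.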

TABLE V
THE EFFECT OF TERROR FATALITIES ON SUPPORT FOR GRANTING TERRITORIAL CONCESSIONS—LINEAR SPECIFICATION

Variable                         (1)       (2)       (3)       (4)       (5)        (6)        (7)
Terror fatalities per capita   0.5144    0.2617    0.5666    0.3392    0.4112     0.2874     0.3129
  within a year before the     [0.901]   [0.872]   [0.500]   [0.498]   [0.452]    [0.452]    [0.445]
  survey
Personal characteristics
  Age                                                                  0.0155∗∗∗  0.0154∗∗∗  0.0152∗∗∗
                                                                       [0.0028]   [0.0028]   [0.0028]
  Age square                                                          −0.0001∗∗∗ −0.0001∗∗∗ −0.0001∗∗∗
                                                                       [0.0000]   [0.0000]   [0.0000]
  Male                                                                −0.0125    −0.0123    −0.0116
                                                                       [0.0126]   [0.0127]   [0.0126]
  Years of schooling                                                   0.0347∗∗∗  0.0349∗∗∗  0.0342∗∗∗
                                                                       [0.0078]   [0.0078]   [0.0078]
  Years of schooling × age                                            −0.0004∗∗∗ −0.0004∗∗∗ −0.0004∗∗∗
                                                                       [0.0001]   [0.0001]   [0.000]
  Immigrant                                                           −0.0337∗   −0.0310∗   −0.0334∗
                                                                       [0.0184]   [0.0182]   [0.0186]
  African–Asian ethnicity                                             −0.0775∗∗∗ −0.0762∗∗∗ −0.0751∗∗∗
                                                                       [0.0184]   [0.0185]   [0.0182]
  From former Soviet bloc                                             −0.1313∗∗∗ −0.1363∗∗∗ −0.1331∗∗∗
                                                                       [0.0276]   [0.0277]   [0.0277]
  Religiously observant                                               −0.2252∗∗∗ −0.2245∗∗∗ −0.2255∗∗∗
                                                                       [0.0187]   [0.0190]   [0.0189]
Expenditures (base category: more than average)
  About average                                                       −0.0323∗∗  −0.0319∗∗  −0.0331∗∗
                                                                       [0.0153]   [0.0152]   [0.0152]
  Less than average                                                   −0.0741∗∗∗ −0.0745∗∗∗ −0.0726∗∗∗
                                                                       [0.0173]   [0.0171]   [0.0175]
Subdistricts fixed effects       No        No        Yes       Yes       Yes        Yes        Yes
Years fixed effects              No        Yes       No        Yes       Yes        Yes        Yes
Subdistrict time-varying
  characteristics                No        No        No        No        No         Yes        Yes
Subdistrict-specific linear
  time trends                    No        No        No        No        No         No         Yes
N                              6,494     6,494     6,494     6,494     6,098      6,098      6,098
R²                              .001      .016      .029      .043      .1421      .1467      .1499

Notes. Estimated using OLS. The dependent variable is an indicator for agreeing to territorial concessions to the Palestinians. Columns (5) through (7) also include dummy variables for the number of individuals in the household and the number of rooms in the household's residence. The subdistrict time-varying characteristics used in columns (6) and (7) were calculated from the Israel Labor Survey and include the unemployment rate, the mean number of children below the age of 14 in a household, the mean number of individuals above the age of 14 in a household, the mean number of individuals per household, percent married, percent male, the percentage of individuals between the ages of 30 and 45, the percentage of individuals above 45 years old, the percentage of individuals with higher education, percent ultra-orthodox Jews, percent immigrants, and percent African–Asian ethnicity. Robust standard errors, adjusted for clustering at the subdistrict–year level, appear in brackets. ∗ indicates statistically significant at the 10% level; ∗∗ indicates statistically significant at the 5% level; ∗∗∗ indicates statistically significant at the 1% level.

The results in Table V suggest that there is no linear effect of terror activity on a person's willingness to grant concessions to the Palestinians. This result holds consistently as additional controls for personal and subdistrict-level characteristics are added. In contrast, many of the coefficients on the other controls are highly significant: the willingness to grant concessions increases with income, education, and age (up to a point), and is also higher for natives than for immigrants, for secular than for religious individuals, and for individuals who did not immigrate from the former Soviet Union and do not come from an Asian–African ethnic background.

However, given the concave pattern exhibited in Figure IV, we now include a quadratic term for the local level of terror in order to see whether the treatment effect is nonlinear. As shown in Table VI, the results are very different: the linear and quadratic terms are highly significant across specifications.21 The coefficients suggest that terrorism increases an individual's willingness to grant concessions up to a point, after which further terror attacks reduce that willingness. This pattern is found in the simple specification that includes no other controls, and also in subsequent specifications, which gradually add a rich array of personal and subdistrict characteristics, fixed effects for each year, fixed effects for each subdistrict, and subdistrict-specific linear time trends.

The robustness of the results to the inclusion or exclusion of so many other factors suggests that local terror activity is uncorrelated with a large variety of personal and subdistrict-level characteristics that affect political preferences. This pattern is consistent with our balancing tests in Tables III and IV, and supports our assumption that terror is an exogenous event.
Furthermore, the results appear to be robust across a variety of identifying assumptions, which change with each specification in Table VI. The magnitudes of the coefficients in column (5) of Table VI imply that the total effect of terror fatalities, relative to the case of no terror activity at all, is positive until 0.105 local casualties per capita are reached, whereas terror activity beyond that threshold makes Israelis adopt a more hard-line stance toward the Palestinians. Similarly, the marginal effect of terror on

21. The standard errors account for clustering at the subdistrict–year level. Clustering only at the subdistrict level yields very similar conclusions regarding the indicated significance levels. These results are available from the authors upon request.
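The two thresholds in this argument follow mechanically from the quadratic form: with estimated effect a1·x + a2·x², the total effect crosses zero at x = −a1/a2, and the marginal effect a1 + 2·a2·x crosses zero at half that value. A quick check using the column (5) estimates of Table VI:

```python
# Turning points implied by the quadratic specification in column (5) of
# Table VI: effect(x) = a1*x + a2*x**2, x = terror fatalities per capita.
a1, a2 = 3.648, -34.683

total_zero = -a1 / a2             # total effect a1*x + a2*x**2 = 0
marginal_zero = -a1 / (2 * a2)    # marginal effect a1 + 2*a2*x = 0

print(round(total_zero, 3))       # 0.105, the threshold quoted in the text
print(round(marginal_zero, 3))    # 0.053
```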

TABLE VI
THE EFFECT OF TERROR FATALITIES ON SUPPORT FOR GRANTING TERRITORIAL CONCESSIONS—NONLINEAR SPECIFICATION

Variable                        (1)        (2)        (3)        (4)        (5)        (6)        (7)
Terror fatalities per capita within a year before the survey
  Linear effect               4.285∗     5.538∗∗∗   2.029      3.526∗∗∗   3.648∗∗∗   3.943∗∗∗   5.585∗∗∗
                              [2.26]     [1.89]     [1.75]     [1.27]     [1.18]     [0.93]     [1.19]
  Quadratic effect          −43.459∗∗  −59.346∗∗∗ −16.590    −34.180∗∗∗ −34.683∗∗∗ −38.044∗∗∗ −56.648∗∗∗
                             [19.74]    [15.48]    [16.63]    [12.23]    [10.80]     [9.07]    [12.46]
Personal characteristics
  Age                                                                    0.0155∗∗∗  0.0157∗∗∗  0.0153∗∗∗
                                                                         [0.0028]   [0.0028]   [0.0028]
  Age square                                                            −0.0001∗∗∗ −0.0001∗∗∗ −0.0001∗∗∗
                                                                         [0.0000]   [0.0000]   [0.0000]
  Male                                                                  −0.0140    −0.0140    −0.0134
                                                                         [0.0128]   [0.0127]   [0.0128]
  Years of schooling                                                     0.0351∗∗∗  0.0350∗∗∗  0.0343∗∗∗
                                                                         [0.0078]   [0.0078]   [0.0078]
  Years of schooling × age                                              −0.0004∗∗∗ −0.0004∗∗∗ −0.0004∗∗∗
                                                                         [0.0001]   [0.0001]   [0.000]
  Immigrant                                                             −0.0308∗   −0.0336∗   −0.0328∗
                                                                         [0.0183]   [0.0185]   [0.0186]
  African–Asian ethnicity                                               −0.0768∗∗∗ −0.0778∗∗∗ −0.0743∗∗∗
                                                                         [0.0184]   [0.0183]   [0.0182]
  From former Soviet bloc                                               −0.1358∗∗∗ −0.1307∗∗∗ −0.1332∗∗∗
                                                                         [0.0280]   [0.0277]   [0.0278]
  Religiously observant                                                 −0.2230∗∗∗ −0.2238∗∗∗ −0.2255∗∗∗
                                                                         [0.0191]   [0.0187]   [0.0189]
Expenditures (base category: more than average)
  About average                                                         −0.0321∗∗  −0.0325∗∗  −0.0321∗∗
                                                                         [0.0153]   [0.0153]   [0.0153]
  Less than average                                                     −0.0755∗∗∗ −0.0748∗∗∗ −0.0715∗∗∗
                                                                         [0.0170]   [0.0172]   [0.0175]
Subdistricts fixed effects      No         No         Yes        Yes        Yes        Yes        Yes
Years fixed effects             No         Yes        No         Yes        Yes        Yes        Yes
Subdistrict time-varying
  characteristics               No         No         No         No         No         Yes        Yes
Subdistrict-specific linear
  time trends                   No         No         No         No         No         No         Yes
N                             6,494      6,494      6,494      6,494      6,098      6,098      6,098
R²                             .004       .021       .029       .0448      .1436      .1484      .1518
P-value for effect of
  terrorism                    .0754      .0001      .4555      .0219      .0076      .0002      .0001

Notes. Estimated using OLS. The dependent variable is an indicator for agreeing to territorial concessions to the Palestinians. Columns (5) through (7) also include dummy variables for the number of individuals in the household and the number of rooms in the household's residence. The subdistrict time-varying characteristics used in columns (6) and (7) were calculated from the Israel Labor Survey and include the unemployment rate, the mean number of children below the age of 14 in a household, the mean number of individuals above the age of 14 in a household, the mean number of individuals per household, percent married, percent male, the percentage of individuals between the ages of 30 and 45, the percentage of individuals above 45 years old, the percentage of individuals with higher education, percent ultra-orthodox Jews, percent immigrants, and percent Asian–African ethnicity. Robust standard errors, adjusted for clustering at the subdistrict–year level, appear in brackets. The P-value for the effect of terrorism tests the hypothesis that all proxies for the severity of terrorism included in each regression are jointly equal to zero. ∗ indicates statistically significant at the 10% level; ∗∗ indicates statistically significant at the 5% level; ∗∗∗ indicates statistically significant at the 1% level.

granting concessions is positive until 0.053 casualties per capita are reached, and then additional casualties reduce the willingness to concede territory. That is, a moderate amount of terror is effective, but beyond that it can backfire on the terrorist group. For the 102 cases where data are available by subdistrict and year, there are only seven cases with terror levels high enough that the marginal effect is negative (Jerusalem in 1996 and 2003, and Afula, Hadera, Sharon, Tel Aviv, and Zefat in 2003), and only one case where the estimated total effect is negative (Jerusalem had 0.11 fatalities per capita in 2003), thereby hardening the stance of Jerusalem residents toward the Palestinians in 2003. Given that most of the observations lie within the region where the marginal effect is positive, the estimated nonlinear effect is mostly indicative of a declining effect of terrorism on views, rather than a reversal of the sign of the effect. This overall pattern suggests that individuals respond differently to initial attacks versus later attacks, which may be caused by the declining shock value of attacks, or by changes in the probability that an individual places on the rationality, and perhaps the ultimate goals, of the terrorist group.22

To describe the magnitude of the coefficients, we computed the predicted value of each person's willingness to grant concessions in 2003 using the actual local level of fatalities in the previous twelve months, and compared that to the scenario in which there were no attacks in the year prior to those elections. The difference in these predicted values represents the change in a person's views due to the attacks leading up to the 2003 elections. The median value of this predicted change in views (using column (5) in Table VI) for the whole sample suggests that the median individual became 4.2 percentage points more likely to support granting territorial concessions.
This is more than half of the effect of being from an Ashkenazi background versus an Asian–African background, for which the estimated effect is 7.7 percentage points. In addition, the effect is larger than the estimated effect of being native-born versus an immigrant (3.1 percentage points), and almost

22. Several theoretical studies show that a nonlinear pattern may emerge when Israelis (or the Israeli government) do not have complete information about the level of extremism of terror factions (Kydd and Walter 2002; Bueno de Mesquita 2005a; Berrebi and Klor 2006). According to these studies, at low levels of attacks, Israelis increase their support for concessions in order to placate terror factions. At higher levels of attacks, however, Israelis come to believe with higher probability that they are facing an extremist faction that cannot be placated with partial concessions, and in this range of attacks they adopt a nonconciliatory position toward terror factions.


one-fifth of the effect of being religiously observant (22.3 percentage points). Therefore, these findings are significant not only in the statistical sense, but also in terms of their magnitudes.

Table VII explores whether the results are heterogeneous across different subsamples of the population according to gender, age, level of expenditures, education, religious observance, immigrant status, and ethnicity. Estimates are presented from our two main specifications, which appear in columns (5) and (7) of Table VI. Both specifications include a rich set of personal characteristics and fixed effects for each year and subdistrict, but the latter also includes subdistrict-specific linear time trends and additional controls that vary by subdistrict and year. The estimates in Table VII are generally significant for all groups, with larger effects for Israelis who are younger, religious, female, non-native, less educated, poorer (lower expenditures), and from an Asian–African background. Table II showed that these groups, except for the gender and immigrant categories, tend to be more right-wing than their counterparts. Therefore, Table VII suggests that the effect of terror is larger for particular groups which typically support right-wing political parties.23 Figure VII demonstrates this more concretely by graphing the predicted nonlinear relationship in the upper panels of Table VII for several subgroups. The figure shows that the predicted effect is larger over the whole range of observed attacks for particularly right-wing groups (religious, Asian–African origin, less educated, and younger Israelis). These findings illustrate how the political map is changing over time as the right wing shifts to the left in response to terror.

V. ROBUSTNESS TESTS AND ALTERNATIVE SPECIFICATIONS

V.A. Alternative Definitions of Local Terror and Willingness to Grant Concessions

Table VIII presents results using three alternative definitions of the local level of terror in each subdistrict around the time of the election. The first column in the upper panel uses

23. This pattern is more pronounced for the specification without subdistrict-specific linear time trends. However, the same conclusions are reached in Table A.4 in the Online Appendix, which uses the number of terror fatalities as the treatment variable (instead of fatalities per capita), and also in Table X, which examines the effect of terror on alternative outcomes.

TABLE VII
THE EFFECT OF TERROR FATALITIES ON SUPPORT FOR GRANTING TERRITORIAL CONCESSIONS, BY SUBGROUPS

Partition by gender                         Females       Males
 Effect using only survey data:
  Linear effect                             4.6758∗∗∗     2.7568∗
                                            [1.36]        [1.43]
  Quadratic effect                        −38.113∗∗∗    −32.607∗∗∗
                                           [13.53]       [12.45]
  P-value for effect of terrorism           .0024         .0047
 Effect including subdistrict time trends and characteristics:
  Linear effect                             7.6632∗∗∗     4.4340∗∗∗
                                            [1.46]        [1.87]
  Quadratic effect                        −77.189∗∗∗    −46.567∗∗
                                           [14.97]       [20.04]
  P-value for effect of terrorism           .0000         .0655
 N                                          2,991         3,107

Partition by age                            Below 30      30 to 45      Above 45
 Effect using only survey data:
  Linear effect                             4.2452∗∗∗     2.8184∗       2.8471
                                            [1.50]        [1.68]        [1.75]
  Quadratic effect                        −40.018∗∗∗    −20.643       −30.317∗
                                           [14.23]       [15.56]       [16.14]
  P-value for effect of terrorism           .0189         .1658         .1618
 Effect including subdistrict time trends and characteristics:
  Linear effect                             5.7143∗∗∗     4.6022∗∗∗     5.4123∗∗∗
                                            [2.08]        [1.78]        [2.15]
  Quadratic effect                        −53.679∗∗∗    −49.005∗∗∗    −55.192∗∗∗
                                           [19.35]       [18.70]       [20.41]
  P-value for effect of terrorism           .0223         .0349         .0300
 N                                          1,995         1,890         2,213

Partition by expenditures                   Below average  Average      Above average
 Effect using only survey data:
  Linear effect                             3.7498∗∗∗     2.9905∗       3.1391∗
                                            [1.50]        [1.71]        [1.88]
  Quadratic effect                        −33.246∗∗     −29.743∗      −34.774∗∗
                                           [14.60]       [15.38]       [17.00]
  P-value for effect of terrorism           .0487         .1607         .1034
 Effect including subdistrict time trends and characteristics:
  Linear effect                             8.4501∗∗∗     3.9925∗∗∗     5.0405∗∗∗
                                            [2.37]        [1.56]        [2.16]
  Quadratic effect                        −98.509∗∗∗    −31.267∗∗     −49.181∗∗
                                           [25.14]       [15.04]       [22.23]
  P-value for effect of terrorism           .0008         .0386         .0713
 N                                          2,277         2,122         1,699

Partition by education                      Below academic  Academic education
 Effect using only survey data:
  Linear effect                             4.9349∗∗∗     2.0451∗
                                            [1.43]        [1.05]
  Quadratic effect                        −43.977∗∗∗    −22.686∗∗∗
                                           [13.94]        [9.37]
  P-value for effect of terrorism           .0036         .0475
 Effect including subdistrict time trends and characteristics:
  Linear effect                             6.6334∗∗∗     6.5823∗∗∗
                                            [1.96]        [1.73]
  Quadratic effect                        −61.902∗∗∗    −76.013∗∗∗
                                           [18.04]       [19.38]
  P-value for effect of terrorism           .0036         .0007
 N                                          3,811         2,287

Partition by religiosity                    Religious     Secular
 Effect using only survey data:
  Linear effect                             6.1954∗∗∗     2.4068∗
                                            [1.71]        [1.38]
  Quadratic effect                        −56.509∗∗∗    −24.580∗
                                           [17.00]       [13.22]
  P-value for effect of terrorism           .0023         .1830
 Effect including subdistrict time trends and characteristics:
  Linear effect                             4.8865∗∗      5.9378∗∗∗
                                            [2.20]        [1.53]
  Quadratic effect                        −64.616∗∗∗    −56.964∗∗∗
                                           [22.03]       [15.56]
  P-value for effect of terrorism           .0029         .0010
 N                                          1,568         4,530

Partition by country of birth               Immigrant     Native Israeli
 Effect using only survey data:
  Linear effect                             2.0906        4.7347∗∗∗
                                            [1.60]        [1.44]
  Quadratic effect                        −29.565∗∗     −37.560∗∗∗
                                           [14.68]       [14.14]
  P-value for effect of terrorism           .0105         .0026
 Effect including subdistrict time trends and characteristics:
  Linear effect                             5.7778∗∗∗     5.8537∗∗∗
                                            [2.03]        [1.57]
  Quadratic effect                        −56.762∗∗∗    −59.425∗∗∗
                                           [19.96]       [17.19]
  P-value for effect of terrorism           .0195         .0015
 N                                          2,584         3,514

Partition by ethnicity                      African–Asian  Other
 Effect using only survey data:
  Linear effect                             5.9583∗∗∗     2.3811∗
                                            [1.83]        [1.31]
  Quadratic effect                        −58.674∗∗∗    −23.580∗
                                           [17.45]       [12.89]
  P-value for effect of terrorism           .0047         .1858
 Effect including subdistrict time trends and characteristics:
  Linear effect                             9.1530∗∗∗     3.8223∗∗∗
                                            [2.40]        [1.44]
  Quadratic effect                        −94.835∗∗∗    −40.823∗∗∗
                                           [22.83]       [15.09]
  P-value for effect of terrorism           .0004         .0291
 N                                          2,349         3,749

Note. Each column in each panel presents the results of a separate OLS regression where the dependent variable is an indicator for agreeing to territorial concessions to the Palestinians. In addition to terror fatalities per capita within a year before the survey, all regressions include the same covariates as specifications (5) and (7) in Table VI. Robust standard errors, adjusted for clustering at the subdistrict–year level, appear in brackets. The P-value for the effect of terrorism tests the hypothesis that all proxies for the severity of terrorism included in each regression are jointly equal to zero. ∗ indicates statistically significant at the 10% level; ∗∗ indicates statistically significant at the 5% level; ∗∗∗ indicates statistically significant at the 1% level.

FIGURE VII
Effect of Terror Fatalities on Willingness to Grant Concessions by Subgroups
Each graph depicts the nonlinear relationship between terror fatalities per capita and a person's willingness to grant territorial concessions, using the estimated coefficients in the upper panel of Table VII.

the number of attacks per capita in the last twelve months, rather than the number of fatalities. The number of attacks is defined as the number of incidents with at least one fatality, which essentially gives equal weight to all fatal attacks regardless of the number killed. The results using this measure are very similar, in the sense of displaying a highly significant concave pattern. The second and third columns in the upper panel of Table VIII use terror “since the last elections” rather than “in the last twelve months” as the treatment variable. The estimates reveal the same nonlinear pattern using “total fatalities” or “total attacks” as the measure of local terror, and are significant in three out of four specifications. (The coefficients are not significant for the specification that uses “total fatalities since the previous election” without subdistrict-specific time trends.) However, the estimates are smaller in magnitude, which suggests that recent terror activity has a bigger impact than attacks occurring in the more distant

TABLE VIII
THE EFFECT OF TERROR FATALITIES ON SUPPORT FOR GRANTING TERRITORIAL CONCESSIONS, ROBUSTNESS TESTS

Alternative proxies for the level of terrorism
                                   Attacks within a year  Attacks since       Fatalities since
                                   before the survey      previous elections  previous elections
 Effect using only survey data:
  Linear effect                    15.2331∗∗∗             6.059∗              0.5682
                                   [5.45]                 [3.10]              [0.68]
  Quadratic effect               −619.6∗∗               −171.5∗∗∗            −3.282
                                  [289]                  [72]                 [4.51]
  P-value for effect of terrorism  .0177                  .0489               .7077
 Effect including subdistrict time trends and characteristics:
  Linear effect                    25.486∗∗∗              10.387∗∗∗           2.2995∗∗∗
                                   [5.37]                 [3.58]              [0.60]
  Quadratic effect               −1,236.2∗∗∗            −285.4∗∗∗           −14.089∗∗∗
                                  [312]                  [107]                [4.55]
  P-value for effect of terrorism  .0000                  .0180               .0010
 N                                 6,098                  6,098               6,098

Restricting the sample to different time periods
                                   From 1996 to 2006      From 2003 to 2006a
 Effect using only survey data:
  Linear effect                    1.5921∗                2.7388∗∗∗
                                   [0.95]                 [0.82]
  Quadratic effect               −18.375∗∗              −30.984∗∗∗
                                   [9.05]                 [7.02]
  P-value for effect of terrorism  .1592                  .0000
 Effect including subdistrict time trends and characteristics:
  Linear effect                    2.1419                 4.5820∗∗∗
                                   [1.31]                 [0.65]
  Quadratic effect               −21.814                −46.777∗∗∗
                                  [14.40]                 [6.03]
  P-value for effect of terrorism  .2426                  .0000
 N                                 4,229                  2,292

                                   Alternative       Excluding    Excluding Jerusalem  Marginal effects     Including a higher-
                                   definition of     Jerusalem    and Tel Aviv         using a probit       order polynomial
                                   agree to                                            model
                                   concessions
 Effect using only survey data:
  Linear effect                    2.8397∗∗          4.4180∗∗∗    6.0163∗∗∗            4.2215∗∗∗            3.5676∗
                                   [1.28]            [1.51]       [1.70]               [1.36]               [2.01]
  Quadratic effect               −27.441∗∗∗        −39.832∗∗∗   −63.176∗∗∗           −39.882∗∗∗           −32.302
                                  [11.45]           [15.17]      [15.80]              [12.61]              [43.83]
  Cubic effect                                                                                            −16.372
                                                                                                          [258.72]
  P-value for effect of terrorism  .0613             .0161        .0005                .0064                .0044
 Effect including subdistrict time trends and characteristics:
  Linear effect                    3.1428∗∗∗         6.7684∗∗∗    7.9114∗∗∗            6.3722∗∗∗            8.5875∗∗∗
                                   [1.29]            [1.26]       [1.57]               [1.33]               [1.90]
  Quadratic effect               −37.916∗∗∗        −67.448∗∗∗   −87.880∗∗∗           −64.165∗∗∗          −145.069∗∗∗
                                  [12.90]           [14.62]      [19.84]              [13.78]              [50.73]
  Cubic effect                                                                                             621.608
                                                                                                          [362.98]
  P-value for effect of terrorism  .0107             .0000        .0000                .0000                .0000
 N                                 6,098             5,398        4,176                6,098                6,098

Notes. Each column in each panel presents the results of a separate OLS regression where the dependent variable is an indicator for agreeing to territorial concessions to the Palestinians. In addition to the respective proxy for the severity of terrorism, all regressions include the same covariates as specifications (5) and (7) in Table VI. Robust standard errors, adjusted for clustering at the subdistrict–year level, appear in brackets. The marginal effects of the probit model are calculated at the means. The P-value for the effect of terrorism tests the hypothesis that all proxies for the severity of terrorism included in each regression are jointly equal to zero. a The regressions using only observations from years 2003 and 2006 do not include subdistrict-specific time trends because there are only two periods for each subdistrict. ∗ indicates statistically significant at the 10% level; ∗∗ indicates statistically significant at the 5% level; ∗∗∗ indicates statistically significant at the 1% level.

past.24 However, the effect of terrorism on an individual's political preferences does not completely disappear even when measured over a longer time period.

Table A.2 in the Online Appendix examines whether terror attacks by different Palestinian factions have similar effects on the views of Israelis. Because the goals of the different groups are not always clear and may not be consistent with one another, we distinguish between attacks perpetrated by Islamic-based groups (Hamas and Islamic Jihad) and the rest. Most of the attacks are perpetrated by Islamic groups, and Table A.2 shows that similar results are obtained when we use terror fatalities committed only by Islamic groups. Although Table A.2 shows that terror fatalities caused by Islamic groups have a slightly smaller effect on Israeli political views than overall fatalities, this difference is not significant. These findings suggest that although Palestinian groups may not have identical goals, Israelis do not seem to distinguish between the attacks of different groups. This result, however, could be due to a lack of awareness of who is perpetrating each attack.

In Table A.3 in the Online Appendix, we also show that adding "attacks in neighboring subdistricts" to the specification has no influence on the estimated effect of local terror, although the effect of terror in neighboring areas is often significant but much smaller in magnitude. This finding demonstrates that our main results are not driven by the correlation of local attacks with factors associated with attacks at the broader regional level. Finally, it is worth noting that using the number of fatalities as the treatment variable, rather than fatalities per capita, yields very similar results as well, including much stronger effects for traditionally right-wing demographic groups (see Table A.4 in the Online Appendix).
The first column in the bottom panel of Table VIII uses an alternative coding scheme for a person's willingness to grant concessions. As noted in Section II, we coded a response of "four" (the middle value of the range of answers) in survey years 1996 and 1999 as being "unwilling" to make concessions. Because this value lies in the center of the range of answers (from one to seven), we

24. A possible explanation for the weaker results using terrorism "since the last election" versus the "last twelve months" is that the differential costs of a terror attack on local versus nonlocal residents dissipate over time; therefore, the effects of attacks from a few years ago become subsumed in the aggregate year effects.


now test whether the results are sensitive to coding this value as being "willing" to make concessions. Table VIII shows that the coefficients using this alternative coding scheme are still highly significant.

V.B. Changes in the Coding of the Question about Concessions

Because the wording of the question regarding a person's willingness to make concessions changed over time, we now explore whether our main results are a product of those changes rather than representing a causal effect. Up to now, we dealt with this issue by including fixed effects for each year in the regressions. Table VI showed that including or excluding fixed effects for each year does not affect the results, which suggests that changes in the structure of the questionnaires are not likely to be creating spurious estimates. Table VIII presents more evidence for this conclusion by restricting the sample to periods when there was very little change in the question (1996–2006) and to a period when there was no change at all (2003–2006).25 As shown in the upper panel, the coefficient estimates are still significant for the period 1996–2006, although not for the specification that includes subdistrict-specific time trends. For the years 2003–2006, the results are highly significant even for this short time period, which incidentally did not include the largest wave of attacks (i.e., 1999–2003).26 These findings show that changes in the wording of the question over time are not responsible for our main results.

V.C. Specifying the Nonlinearity

We now examine the specification of the nonlinear effect of terror on political attitudes. In addition, we check whether the nonlinear effect is coming entirely from the observations for Jerusalem, as it appears in Figure IV. Table VIII shows that the results are still highly significant and nonlinear after Jerusalem residents are omitted from the sample, and after both Jerusalem and Tel Aviv residents are omitted from the sample.
This finding suggests that the nonlinear effect of terror on political attitudes is not entirely due to the perhaps unique characteristics and

25. From 1999 to 2003, there was a small change in the structure of the question, as the range of answers went from one-to-seven to one-to-four.

26. Similar results are obtained if we start the sample in 1992 or 1999. Starting the sample in 1992 yields coefficients (standard errors) for the specification in column (5) of Table VI equal to 2.66 (1.09) and −27.16 (10.18) for the linear and quadratic terms, respectively. Starting the sample in 1999 produces 2.11 (1.17) and −24.00 (11.09), respectively.


experiences of Jerusalem or Tel Aviv. Table VIII also shows that using a probit model instead of a linear probability model yields very similar results.27 In addition, the last column in the bottom panel of Table VIII adds a cubic term for the level of local terror per capita, in order to see whether the nonlinearity should be modeled with a higher-order polynomial. The cubic term is not significant, which suggests that the quadratic specification is sufficient.

V.D. Alternative Panel Data Model Specifications

This section tests the robustness of the results to alternative panel data models. Because our data set consists of repeated cross sections of individuals, we aggregate the data to the subdistrict–year level to exploit these other methods. As a baseline model, we estimate the fixed-effect model used until now (column (5) of Table VI) at that level of aggregation. Specifically, we estimate equation (2) from Section III. Although the number of observations is reduced to 86, the first column of Table IX shows that the results for this model are still highly significant.28 Table IX also presents a first-differences model as an alternative way to control for the fixed effect of each subdistrict. The results in column (3) are very similar to those for the fixed-effect model, thereby providing support for the strict exogeneity assumption. In addition, we consider an alternative specification of the "levels" model that includes a lagged dependent variable as a control variable:

(7)   view_jt = ρ · view_j,t−1 + α1 · terror_jt + α2 · terror_jt² + β · x_jt + γ_t + μ_j + ε_jt.

This "dynamic panel data model" captures the idea that political attitudes may be moving slowly, so that attitudes at time t are correlated with those at t − 1, and terrorist activity may introduce an innovation to the evolution of political views. This model is estimated using OLS in the second column of Table IX, which shows results very similar to those for the model without the lagged dependent variable. To address concerns about whether OLS yields consistent estimates when a lagged dependent variable is included in the fixed-effects model, the model is also estimated with GMM following Arellano and Bond (1991) in column (4) and using additional moment conditions (see Arellano and Bover [1995] and Blundell and Bond [1998]) in column (5). Using both methods, the results are very similar to previous results. Overall, the nonlinear effect of terror on political accommodation is robust to all the most widely used panel data methods.

27. The table presents the marginal effects evaluated at the means from the probit model. In the rest of the paper, we choose to present results from a linear probability model instead of a probit, because the interpretation of marginal effects in a nonlinear specification such as a probit is not straightforward.
28. This regression also differs from the one using individual-level data by giving equal weight to all subdistricts, whereas the models based on individual-level data essentially give more weight to those subdistricts with more observations.

TABLE IX
THE EFFECT OF TERROR FATALITIES ON SUPPORT FOR GRANTING TERRITORIAL CONCESSIONS, AGGREGATING DATA AT SUBDISTRICT LEVEL

Columns: (1) OLS fixed effects; (2) OLS fixed effects with a lagged dependent variable; (3) first differences; (4) Arellano–Bond estimation; (5) system dynamic panel data estimation.

Terror fatalities per capita within a year before the survey:

                                      (1)          (2)          (3)          (4)          (5)
Linear effect                      5.2083***    4.608**      3.478*       3.9685***    4.7583***
                                   [1.75]       [2.27]       [1.78]       [1.63]       [1.83]
Quadratic effect                 −57.306***   −56.489***   −39.225**    −55.361***   −57.622***
                                  [18.29]      [21.75]      [18.44]      [16.17]      [18.02]
Lagged support for granting
  territorial concessions             —         −0.0108         —         −0.1463       0.0448
                                                [0.154]                   [0.19]        [0.14]
P-value for effect of terrorism     .0112        .0374        .0034        .0000         .0007
N                                     86           66           66           48            66

Notes. Each column presents the results of a separate regression where the dependent variable is an indicator for agreeing to territorial concessions to Palestinians. In addition to terror fatalities per capita 12 months before the survey, all regressions include the same covariates as specification (5) in Table VI, aggregated at the subdistrict–year level. The first two columns present a fixed-effect estimation. Column (3) uses a first-differences estimation, column (4) presents a general method of moments estimate based on Arellano and Bond (1991), and column (5) uses additional moment conditions based on Arellano and Bover (1995) and Blundell and Bond (1998). Robust standard errors appear in brackets. The P-value for the effect of terrorism tests the hypothesis that the joint effect of all proxies for the severity of terrorism included in each regression is equal to zero. ∗ indicates statistically significant at 10% level; ∗∗ indicates statistically significant at 5% level; ∗∗∗ indicates statistically significant at 1% level.

QUARTERLY JOURNAL OF ECONOMICS

V.E. Alternative Measures of Political Attitudes

We now examine whether terror has affected the two other available ways of defining political views: support for a Palestinian state, and defining oneself as being right-wing. To do this, we create a summary measure using the first factor from a factor analysis on both views. The first factor explains 71% of the variation, and the factor loadings can be interpreted as giving a more positive weight to more accommodating positions. Specifically, the factor loadings on the first factor are 0.85 on "support for a Palestinian state" and −0.85 on "defining oneself as having a right-wing political tendency." As such, positive values of the first factor indicate a more left-wing position on the conflict. Results for this summary measure are presented in Table X. As seen before, there is a significant nonlinear effect of terrorism on this summary measure—terrorist attacks induce individuals to shift their views toward a more accommodating stance, but after a certain threshold, additional attacks cause Israelis to adopt a more hard-line stance versus the Palestinians. Table X also presents the results for our summary measure of alternative attitudes for different subpopulations.
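The two-variable factor analysis behind the summary measure can be replicated mechanically: with two standardized attitudes, the first factor's loadings are equal in magnitude (with opposite signs here), and its share of variance is (1 + |ρ|)/2, where ρ is the correlation between the attitudes. A short sketch; the correlation of −0.42 is an assumption backed out from the reported 71% figure, not a number taken from the paper.

```python
import numpy as np

# Correlation matrix of the two standardized attitudes:
# "support for a Palestinian state" and "right-wing political tendency".
# rho = -0.42 is an assumption implied by the reported 71% share.
R = np.array([[1.0, -0.42],
              [-0.42, 1.0]])

eigvals, eigvecs = np.linalg.eigh(R)   # eigenvalues in ascending order
lam, v = eigvals[-1], eigvecs[:, -1]   # first (largest) factor

share = lam / R.shape[0]               # share of variance explained
loadings = np.sqrt(lam) * v            # factor loadings (defined up to sign)
loadings *= np.sign(loadings[0])       # normalize: first loading positive
```

The computed loadings come out near ±0.84, consistent (up to rounding in the reported figures) with the paper's 0.85 and −0.85, and the explained share is exactly 0.71 under the assumed correlation.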
Again, the nonlinear pattern is significant for most demographic groups, but the effects are once again much stronger for people who are religious, less educated, and from an Asian–African ethnic background. These findings demonstrate that terrorism has had a pervasive impact within many subgroups of the population, with the strongest impact on groups that are particularly known for holding right-wing views. The finding of stronger effects for these traditionally right-wing groups across all outcomes highlights the dramatic shift in the political map in Israel. Overall, the results in this section show that the significant effect of terrorism on political attitudes, and its nonlinear pattern, is robust to alternative ways of defining an individual's views toward the Palestinians.29 Moreover, the similarity of the results for all three measures of political views provides further evidence against the possibility that changes in the structure of a particular question in the survey over time could be responsible for our main results.

29. The overall results using the "support Palestinian state" measure versus the other measures are weaker, but there is still the familiar pattern whereby strong and significant effects are found for individuals with an Asian–African background.

TABLE X
THE EFFECT OF TERROR FATALITIES ON A SUMMARY MEASURE OF TWO ALTERNATIVE ATTITUDES TOWARD PALESTINIANS BASED ON FACTOR ANALYSIS

Effect of terror fatalities per capita using only survey data:

                               Linear            Quadratic          P-value
Entire sample                  4.7977* [2.56]    −56.65*** [22.7]    .0323
Female                         5.885* [3.24]     −56.155* [29.8]     .1718
Male                           5.299* [2.75]     −74.78*** [23.5]    .0009
Age below 30                   3.072 [3.80]      −35.96 [33.5]       .5116
Age 30 to 45                   4.197 [3.66]      −19.56 [34.5]       .2135
Age above 45                   3.2673 [3.24]     −65.77** [29.2]     .0039
Below-average expenditures     3.049 [4.51]      −45.44 [40.0]       .1956
Average expenditures           8.726*** [2.69]   −91.85*** [25.8]    .0024
Above-average expenditures     2.2953 [3.42]     −37.90 [31.0]       .2489
Below academic education       6.137*** [2.35]   −59.19*** [24.0]    .0365
Academic education             1.975 [4.60]      −49.23 [42.1]       .0786
Secular                        0.104 [3.48]      −19.33 [31.5]       .2902
Religious                      12.223*** [3.44]  −111.97*** [31.2]   .0024
Immigrant                      4.525 [3.27]      −56.78* [32.2]      .1947
Native Israeli                 4.599* [2.70]     −59.52*** [25.0]    .0388
African–Asian                  10.778*** [3.50]  −119.75*** [33.5]   .0023
Other ethnicity                0.724 [3.22]      −20.55 [29.1]       .4357

Effect of terror fatalities per capita including subdistrict time trends and characteristics:

                               Linear            Quadratic          P-value      N
Entire sample                  5.7566** [2.75]   −64.085*** [24.7]   .0340    5,122
Female                         8.2814** [4.12]   −83.184** [37.0]    .0860    2,518
Male                           4.9736** [2.30]   −66.643*** [24.2]   .0236    2,604
Age below 30                   3.2349 [4.35]     −37.925 [41.6]      .6285    1,785
Age 30 to 45                   7.2096* [3.91]    −49.318 [37.6]      .1416    1,635
Age above 45                   3.3378 [4.42]     −68.630 [42.9]      .0825    1,702
Below-average expenditures     1.7381 [4.66]     −55.341 [40.3]      .0275    1,902
Average expenditures           11.7436*** [3.32] −81.651*** [31.1]   .0013    1,809
Above-average expenditures     7.0940* [4.19]    −94.575** [41.6]    .0300    1,411
Below academic education       8.7890*** [2.45]  −94.435*** [25.1]   .0013    3,123
Academic education             1.8101 [4.85]     −19.621 [42.1]      .8894    1,999
Secular                        4.3563 [4.01]     −49.211 [36.0]      .3748    3,784
Religious                      10.6170** [4.66]  −124.19*** [43.4]   .0136    1,338
Immigrant                      −1.0630 [4.30]    −4.858 [37.6]       .7262    1,942
Native Israeli                 9.9804*** [2.77]  −106.52*** [27.6]   .0010    3,180
African–Asian                  11.9123*** [3.34] −155.76*** [34.5]   .0001    2,054
Other ethnicity                0.9514 [3.52]     −0.288 [32.9]       .8069    3,068

Notes. Each row in each panel presents the results of a separate OLS regression where the dependent variable is an indicator for accommodating views toward the Palestinians using factor analysis based on the two attitudes discussed in the text. In addition to terror fatalities per capita within a year before the survey, all regressions include the same covariates as specifications (5) and (7) in Table VI. Robust standard errors, adjusted for clustering at the subdistrict–year level, appear in brackets. The P-value for the effect of terrorism tests the hypothesis that the joint effect of all proxies for the severity of terrorism included in each regression is equal to zero. ∗ indicates statistically significant at 10% level; ∗∗ indicates statistically significant at 5% level; ∗∗∗ indicates statistically significant at 1% level.

VI. THE EFFECT OF TERROR ON VOTING VERSUS POLITICAL VIEWS

We now examine the effect of terror on voting preferences using the outcome variable "support for the right-wing bloc" in the upcoming elections. As we will see, this variable is fundamentally different in its nature from the previous outcome measures. Table XI presents this analysis for the linear specification—the quadratic term was not included because it was insignificant in most specifications. In contrast to the political outcomes previously analyzed, the linear effect is generally positive, which suggests that terror attacks encourage Israelis to vote for right-wing parties. This finding is consistent with Berrebi and Klor (2008), who used data on actual voting patterns at the local level (rather than our measure, which uses the respondent's voting intentions in the upcoming elections) to show that local attacks turned voters toward right-wing parties. Combined with our previous results, this suggests that terrorism is causing Israelis to vote increasingly for right-wing parties, whereas at the same time, they are turning left in their political views. The difference in the pattern of results can be reconciled by the idea that the platforms of the political parties are endogenously changing over time. This shift over time is evident from a casual inspection of the parties' official platforms. For example, the platform of the right-wing Likud party during the 1988 elections stated on its first page that "The State of Israel has the right to sovereignty in Judea, Samaria, and the Gaza Strip," and that "there will be no territorial division, no Palestinian state, foreign sovereignty, or foreign self-determination (in the land of Israel)." This stands in stark contrast to the Likud's platform before the 2009 elections, which stated that "The Likud is prepared to make (territorial) concessions in exchange for a true and reliable peace

agreement." Arguably, the Likud's position in 2009 is to the left of the left-wing Labor party's platform in 1988.30 Table XI also presents the analysis within each subgroup, and shows that the linear effect of terrorism on supporting right-wing parties is found predominantly among individuals who are male, non-native, secular, highly educated, and not from an Asian–African ethnic background. The last three groups are quite notable, because our previous results indicated that terrorism leads to a more accommodating attitude particularly among individuals who are religious, less educated, and from an Asian–African background. In contrast, we now find that the shift toward right-wing parties occurred within subgroups that are strongly identified with left-wing parties, rather than those groups who are typically right-wing. The stronger results for left-leaning groups on the probability of voting right-wing, combined with our evidence that they are not shifting toward less accommodating political views regarding Palestinian demands, shows that these groups are increasing their support for right-wing parties only because the right-wing parties are moving to the left. As a result, the overall pattern of results suggests that terror is shifting the entire political landscape by moving public opinion to the left, and moving the right-wing parties accordingly. This pattern of results is consistent with the theory of policy voting (Kiewiet 1981). According to this theory, parties benefit from the salience of issues to which they are widely viewed as attaching the highest priority. Thus, in periods of repeated terror attacks, voters increase their support for the right bloc of political parties because terror attacks amplify the salience of the security dimension in political discourse, and the right bloc is typically identified as placing a stronger emphasis on security-related issues compared to the left bloc.

TABLE XI
THE EFFECT OF TERROR FATALITIES ON VOTES FOR A PARTY IN THE RIGHT BLOC OF POLITICAL PARTIES

Linear effect of terror fatalities per capita; the first column of estimates uses only survey data, and the second adds subdistrict time trends and characteristics.

                               Only survey data     With time trends       N
Entire sample                  0.7195** [0.35]      0.7639* [0.41]      6,196
Female                         −0.2216 [0.50]       −0.8345 [0.60]      3,041
Male                           1.7809*** [0.43]     2.1207*** [0.53]    3,155
Age below 30                   1.3511*** [0.53]     1.0983* [0.63]      2,022
Age 30 to 45                   −0.4158 [0.51]       −0.1460 [0.68]      1,916
Age above 45                   1.1080* [0.58]       1.1542 [0.83]       2,258
Below-average expenditures     1.6769*** [0.67]     1.5026* [0.87]      2,319
Average expenditures           −0.1969 [0.53]       −1.5186** [0.65]    2,153
Above-average expenditures     0.7184 [0.71]        1.7822*** [0.71]    1,724
Below academic education       0.6903 [0.50]        0.8561* [0.50]      3,566
Academic education             0.9045* [0.47]       −0.2633 [0.65]      2,630
Secular                        1.1535*** [0.48]     0.8770 [0.60]       4,604
Religious                      −0.2812 [0.58]       0.7232 [0.73]       1,592
Immigrant                      1.1741** [0.56]      1.2452 [0.84]       2,402
Native Israeli                 0.6675 [0.50]        0.3775 [0.48]       3,794
African–Asian                  0.6118 [0.66]        1.0530 [0.77]       2,323
Other ethnicity                0.8741** [0.39]      0.1191 [0.54]       3,873

Notes. Each cell presents the results of a separate OLS regression where the dependent variable is an indicator for voting for a party in the right bloc of political parties. In addition to terror fatalities per capita within a year before the survey, all regressions include the same covariates as specifications (5) and (7) in Table VI. Robust standard errors, adjusted for clustering at the subdistrict–year level, appear in brackets. ∗ indicates statistically significant at 10% level; ∗∗ indicates statistically significant at 5% level; ∗∗∗ indicates statistically significant at 1% level.
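The "P-value for effect of terrorism" rows reported with Tables IX and X come from a joint test that all terror coefficients are zero. A minimal sketch of such a Wald test follows; the coefficient vector and covariance matrix in the example are hypothetical placeholders, not estimates from the paper, and the sketch assumes SciPy is available.

```python
import numpy as np
from scipy.stats import chi2

def joint_terror_pvalue(b, V):
    """Wald test of H0: all terror coefficients are jointly zero.

    b : estimated coefficients on the terror proxies (linear, quadratic, ...)
    V : their estimated (robust) covariance matrix
    """
    b = np.asarray(b, dtype=float)
    V = np.asarray(V, dtype=float)
    W = b @ np.linalg.solve(V, b)   # Wald statistic, chi-squared with len(b) d.f.
    return chi2.sf(W, df=b.size)

# Hypothetical example: two terror coefficients with unit variances.
p = joint_terror_pvalue([2.0, 0.0], np.eye(2))   # W = 4 with 2 d.f.
```

With a robust clustered covariance matrix in place of the identity, this reproduces the kind of joint significance test reported in the tables.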
Given that right-wing parties benefit from the increasing prominence of the security issue during a wave of terror, theoretical models of candidate location predict that the disadvantaged candidate, which in this scenario is the left bloc of parties, moves away from the center (Groseclose 2001; Aragones and Palfrey 2002). In contrast, the advantaged candidate—the right bloc—moves toward the center of the political map. As a result, the left bloc loses support to the right bloc, as the right bloc moves to the left. This is one possible explanation for the pattern of results uncovered in our analysis, but the theoretical mechanisms behind these findings deserve further attention.

30. For the 1988 elections, the Labor-Alignment party platform "ruled out the establishment of another separate state within the territorial area between Israel and Jordan." For the 2009 elections, however, Labor supported the creation of a Palestinian state together with the evacuation of isolated settlements.

VII. CONCLUSIONS

This paper presents the first systematic examination of whether terrorism is an effective strategy for achieving political goals, while paying particular attention to the issue of causality. Our results show that terror attacks by Palestinian factions have moved the Israeli electorate toward a more accommodating stance regarding the political objectives of the Palestinians. At the same time, terrorism induces Israelis to vote increasingly for right-wing parties, as the right-wing parties (and particular demographic groups that tend to be right-wing in their views) shift to the left in response to terror.31 These findings highlight the importance of examining how terrorism affects political views, not just voting patterns, in assessing the effectiveness of terror. Looking at the effect of terrorism only on voting patterns in order to infer its effect on political views would lead to the opposite conclusion, at least in the context of the Israeli–Palestinian conflict. Although terrorism in small doses appears to be an effective political tool, our results suggest that terror activity beyond a certain threshold backfires on the goals of terrorist factions by hardening the stance of the targeted population. This finding could be one explanation for why terrorist factions tend to implement their tactics in episodes that are rather limited in scale and diverse in geographic placement. Others have argued that Palestinian terrorism has worked in exacting political concessions (Dershowitz 2002; Hoffman 2006). Their claim, however, is that terrorism raised the salience of the Israeli–Palestinian conflict, which increased pressure from the international community on the Israeli government. Our paper shows that terrorism works not only because of the possibility of fostering international pressure, but also because it creates domestic political pressure from the targeted electorate.

31. We show significant effects on public opinion, but for terror to be effective, this should result in changes in public policy. Our finding that there has been a dramatic shift in the political platforms of the parties is consistent with the idea that terror has led to a significant change in the policies of the main political parties. In practice, this policy shift is perhaps best exemplified by the Israeli unilateral withdrawal from the entire Gaza Strip and parts of the West Bank in 2005 (in the aftermath of the second Palestinian uprising), which was carried out by a government led by the right-wing Likud party.

Many conflicts in history have been settled by peaceful means (the racial conflict in South Africa, the civil rights movement in the United States, the British occupation of India, etc.). Understanding when conflicts are conducted peacefully rather than violently is a complicated issue that deserves more attention. It may well be the case that a more peaceful, diplomatic strategy would have been more effective in achieving Palestinian goals. Moreover, the apparent political effectiveness of Palestinian terrorism may not have been worth the economic, social, and human cost to the Palestinian population over time, as the conflict remains unsettled to this day. However, by showing that terror can be an effective political tool, our findings not only provide insights into how the Israeli–Palestinian conflict has evolved over time, but also shed light on why terror appears to be increasing in many parts of the world. Effective and comprehensive counterterrorism policies—which may consist of deterrence, raising the costs to terrorists, and diplomatic efforts—have to take into account the political gains that can be obtained through terrorism.

THE HEBREW UNIVERSITY OF JERUSALEM, CENTRE FOR ECONOMIC POLICY RESEARCH, AND INSTITUTE FOR THE STUDY OF LABOR

THE HEBREW UNIVERSITY OF JERUSALEM AND CENTRE FOR ECONOMIC POLICY RESEARCH

REFERENCES

Abadie, Alberto, "Poverty, Political Freedom, and the Roots of Terrorism," American Economic Review, 96 (2006), 50–56.
Abrahms, Max, "Why Terrorism Does Not Work," International Security, 31 (2006), 42–78.
——, "Why Democracies Make Superior Counterterrorists," Security Studies, 16 (2007), 223–253.
Aragones, Enriqueta, and Thomas R. Palfrey, "Mixed Equilibrium in a Downsian Model with a Favored Candidate," Journal of Economic Theory, 103 (2002), 131–161.
Arellano, Manuel, and Stephen Bond, "Some Tests of Specification for Panel Data: Monte Carlo Evidence and an Application to Employment Equations," Review of Economic Studies, 58 (1991), 277–297.
Arellano, Manuel, and Olympia Bover, "Another Look at the Instrumental Variable Estimation of Error-Components Models," Journal of Econometrics, 68 (1995), 29–51.
Arian, Asher, and Michal Shamir, eds., The Elections in Israel—2006 (New Brunswick, NJ: Transaction, 2008).
Baliga, Sandeep, and Tomas Sjöström, "Decoding Terror," Mimeo, Northwestern University, 2009.
Becker, Gary S., and Yona Rubinstein, "Fear and the Response to Terrorism: An Economic Analysis," Mimeo, Brown University, 2008.
Benmelech, Efraim, and Claude Berrebi, "Human Capital and the Productivity of Suicide Bombers," Journal of Economic Perspectives, 21 (2007), 223–238.
Benmelech, Efraim, Claude Berrebi, and Esteban F. Klor, "Economic Conditions and the Quality of Suicide Terrorism," Mimeo, The Hebrew University of Jerusalem, 2009.
Berman, Eli, and David D. Laitin, "Hard Targets: Theory and Evidence on Suicide Attacks," NBER Working Paper No. 11740, 2005.
——, "Religion, Terrorism and Public Goods: Testing the Club Model," Journal of Public Economics, 92 (2008), 1942–1967.
Berrebi, Claude, "Evidence about the Link between Education, Poverty and Terrorism among Palestinians," Peace Economics, Peace Science and Public Policy, 13 (2007), Article 2.
Berrebi, Claude, and Esteban F. Klor, "On Terrorism and Electoral Outcomes: Theory and Evidence from the Israeli–Palestinian Conflict," Journal of Conflict Resolution, 50 (2006), 899–925.
——, "Are Voters Sensitive to Terrorism? Direct Evidence from the Israeli Electorate," American Political Science Review, 102 (2008), 279–301.
Blomberg, S. Brock, and Gregory D. Hess, "The Lexus and the Olive Branch: Globalization, Democratization and Terrorism," in Terrorism, Economic Development, and Political Openness, Philip Keefer and Norman Loayza, eds. (New York: Cambridge University Press, 2008).
Bloom, Mia, Dying to Kill: The Allure of Suicide Terror (New York: Columbia University Press, 2005).
Blundell, Richard, and Stephen Bond, "Initial Conditions and Moment Restrictions in Dynamic Panel-Data Models," Journal of Econometrics, 87 (1998), 115–143.
Bueno de Mesquita, Ethan, "Conciliation, Counterterrorism, and Patterns of Terrorist Violence," International Organization, 51 (2005a), 145–176.
——, "The Quality of Terror," American Journal of Political Science, 49 (2005b), 515–530.
Bueno de Mesquita, Ethan, and Eric S. Dickson, "The Propaganda of the Deed: Terrorism, Counterterrorism, and Mobilization," American Journal of Political Science, 51 (2007), 364–381.
Byman, Daniel L., "Al-Qaeda as an Adversary: Do We Understand Our Enemy?" World Politics, 56 (2003), 139–163.
Dershowitz, Alan, Why Terrorism Works (New Haven, CT: Yale University Press, 2002).
Enders, Walter, and Todd Sandler, The Political Economy of Terrorism (Cambridge, UK: Cambridge University Press, 2006).
Eubank, William L., and Leonard B. Weinberg, "Terrorism and Democracy: Perpetrators and Victims," Terrorism and Political Violence, 13 (2001), 155–164.
Gordon, Carol, and Asher Arian, "Threat and Decision Making," Journal of Conflict Resolution, 45 (2001), 196–215.
Gould, Eric, and Guy Stecklov, "Terror and the Costs of Crime," Journal of Public Economics, 93 (2009), 1175–1188.
Greenberg, Joel, "Mideast Turmoil: Palestinian Suicide Planner Expresses Joy over His Missions," New York Times, May 9, 2002.
Groseclose, Tim, "A Model of Candidate Location When One Candidate Has a Valence Advantage," American Journal of Political Science, 45 (2001), 862–886.
Hoffman, Bruce, Inside Terrorism (New York: Columbia University Press, 2006).
Iyengar, Radha, and Jonathan Monten, "Is There an 'Emboldenment' Effect? Evidence from the Insurgency in Iraq," NBER Working Paper No. 13839, 2008.
Jackson Wade, Sara E., and Dan Reiter, "Does Democracy Matter? Regime Type and Suicide Terrorism," Journal of Conflict Resolution, 51 (2007), 329–348.
Jaeger, David A., Esteban F. Klor, Sami H. Miaari, and M. Daniele Paserman, "The Struggle for Palestinian Hearts and Minds: Violence and Public Opinion in the Second Intifada," NBER Working Paper No. 13956, 2008.
Jaeger, David A., and M. Daniele Paserman, "The Cycle of Violence? An Empirical Analysis of Fatalities in the Palestinian–Israeli Conflict," American Economic Review, 98 (2008), 1591–1604.
Karol, David, and Edward Miguel, "The Electoral Cost of War: Iraq Casualties and the 2004 U.S. Presidential Election," Journal of Politics, 69 (2007), 633–648.
Kiewiet, D. Roderick, "Policy-Oriented Voting in Response to Economic Issues," American Political Science Review, 75 (1981), 448–459.
Krueger, Alan B., What Makes a Terrorist: Economics and the Roots of Terrorism (Princeton, NJ: Princeton University Press, 2007).
Krueger, Alan B., and David D. Laitin, "Kto Kogo?: A Cross-Country Study of the Origins and Targets of Terrorism," in Terrorism, Economic Development, and Political Openness, Philip Keefer and Norman Loayza, eds. (New York: Cambridge University Press, 2008).
Krueger, Alan B., and Jitka Maleckova, "Education, Poverty and Terrorism: Is There a Causal Connection?" Journal of Economic Perspectives, 17 (2003), 119–144.
Kydd, Andrew, and Barbara F. Walter, "Sabotaging the Peace: The Politics of Extremist Violence," International Organization, 56 (2002), 263–296.
——, "The Strategies of Terrorism," International Security, 31 (2006), 49–80.
Lapan, Harvey E., and Todd Sandler, "Terrorism and Signaling," European Journal of Political Economy, 9 (1993), 383–398.
Li, Quan, "Does Democracy Promote or Reduce Transnational Terrorist Incidents?" Journal of Conflict Resolution, 49 (2005), 278–297.
Pape, Robert A., "The Strategic Logic of Suicide Terrorism," American Political Science Review, 97 (2003), 1–19.
——, Dying to Win: The Strategic Logic of Suicide Terrorism (New York: Random House Trade Paperbacks, 2005).
Piazza, James A., "A Supply-Side View of Suicide Terrorism: A Cross-National Study," Journal of Politics, 70 (2008), 28–39.
Powell, Robert, "Allocating Defensive Resources with Private Information about Vulnerability," American Political Science Review, 101 (2007), 799–809.
Rohner, Dominic, and Bruno S. Frey, "Blood and Ink! The Common-Interest-Game between Terrorists and the Media," Public Choice, 133 (2007), 129–145.
Shamir, Michal, and Asher Arian, "Collective Identity and Electoral Competition in Israel," American Political Science Review, 93 (1999), 265–277.
Sheafer, Tamir, and Shira Dvir-Gvirsman, "The Spoiler Effect: Framing Attitudes and Expectations toward Peace," Journal of Peace Research, 47 (2010), 205–215.
Spilerman, Seymour, and Guy Stecklov, "Societal Responses to Terrorist Attacks," Annual Review of Sociology, 35 (2009), 167–189.
Weinberg, Leonard B., and William L. Eubank, "Does Democracy Encourage Terrorism?" Terrorism and Political Violence, 6 (1994), 417–443.

POLITICAL SELECTION AND PERSISTENCE OF BAD GOVERNMENTS∗ DARON ACEMOGLU GEORGY EGOROV KONSTANTIN SONIN We study dynamic selection of governments under different political institutions, with a special focus on institutional “flexibility.” A government consists of a subset of the individuals in the society. The competence level of the government in office determines collective utilities (e.g., by determining the amount and quality of public goods), and each individual derives additional utility from being part of the government (e.g., rents from holding office). We characterize the dynamic evolution of governments and determine the structure of stable governments, which arise and persist in equilibrium. In our model, perfect democracy, where current members of the government do not have veto power over changes in governments, always leads to the emergence of the most competent government. However, any deviation from perfect democracy, to any regime with incumbency veto power, destroys this result. There is always at least one other, less competent government that is also stable and can persist forever, and even the least competent government can persist forever in office. We also show that there is a nonmonotonic relationship between the degree of incumbency veto power and the quality of government. In contrast, in the presence of stochastic shocks or changes in the environment, a regime with less incumbency veto power has greater flexibility and greater probability that high-competence governments will come to power. This result suggests that a particular advantage of “democratic regimes” (with a limited number of veto players) may be their greater adaptability to changes rather than their performance under given conditions. Finally, we show that “royalty-like” dictatorships may be more successful than “junta-like” dictatorships because in these regimes veto players are less afraid of change.

I. INTRODUCTION

A central role of (successful) political institutions is to ensure the selection of the right (honest, competent, motivated) politicians. Besley (2005, p. 43), for example, quotes James Madison to emphasize the importance of the selection of politicians for the success of a society:

The aim of every political Constitution, is or ought to be, first to obtain for rulers men who possess most wisdom to discern, and most virtue to pursue, the common good of society; and in the next place, to take the most effectual precautions for keeping them virtuous whilst they continue to hold their public trust.

∗ Daron Acemoglu gratefully acknowledges financial support from the National Science Foundation and the Canadian Institute for Advanced Research. We thank four anonymous referees, the editors, Robert Barro and Elhanan Helpman, and participants at the California Institute of Technology Political Economy Seminar, the "Determinants of Social Conflict" conference in Madrid, the Frontiers of Political Economics conference in Moscow, the NBER Summer Institute in Cambridge, the Third Game Theory World Congress in Evanston, and the University of Chicago theory seminar, and Renee Bowen and Robert Powell in particular, for helpful comments.
© 2010 by the President and Fellows of Harvard College and the Massachusetts Institute of Technology. The Quarterly Journal of Economics, November 2010

Equally important, but less often stressed, is the “flexibility” of institutions, meaning their ability to deal with shocks and changing situations.1 In this paper, we construct a dynamic model of government formation to highlight the potential sources of inefficiency in the selection of governments and to identify features of political processes that create “institutional flexibility.”2 The “government” is made up of a subset of the citizens (e.g., each three-player group may be a government, etc.). Each (potential) government has a different level of competence, determining the collective utility it provides to citizens (e.g., the level of public goods). Each individual also receives rents from being part of the government (additional income, utility of office, or rents from corruption). New governments are put in place by a combination of “votes” from the citizens and “consent” from current government members. We parameterize different political regimes with the extent of necessary consent of current government members, which we refer to as incumbency veto power.3 A “perfect” democracy can be thought of as a situation in which there is no incumbency veto power and no such consent is necessary. Many political institutions, in contrast, provide additional decision making or blocking power to current government members. For instance, in many democracies, various sources of incumbency veto power 1. For instance, the skills necessary for successful wartime politicians and governments are very different from those that are useful for the successful management of the economy during peacetime, as illustrated perhaps most clearly by Winston Churchill’s political career. 2. Even though we model changes in the underlying environment and the competences of different governments as resulting from stochastic shocks, in practice these may also result from deterministic changes in the nature of the economy. 
For example, authoritarian regimes such as the rule of General Park in South Korea or Lee Kuan Yew in Singapore may be beneficial or less damaging during the early stages of development, whereas a different style of government, with greater participation, may be necessary as the economy develops and becomes more complex. Acemoglu, Aghion, and Zilibotti (2006) suggest that “appropriate” institutions may be a function of the distance of an economy from the world technology frontier, and Aghion, Alesina, and Trebbi (2009) provide empirical evidence consistent with this pattern. 3. The role of veto players in politics is studied in Tsebelis (2002); “individual veto players” in Tsebelis (2002) are similar to “members of royalty” discussed below. Instead, incumbency veto power in our model implies that some of the current members of the government need to consent to changes (and the identity of those providing their consent is not important).

POLITICAL SELECTION AND BAD GOVERNMENTS


make the government in power harder to oust than instituting it anew would have been had it been out of power (see, e.g., Cox and Katz [1996] for a discussion of such incumbency veto power in mature democracies). In nondemocratic societies, the potential veto power of current government members is more pronounced, so one might naturally think that consent from several members of the current government would be required before a change was implemented. In this light, we take incumbency veto power as an inverse measure of democracy, though it only captures one stylized dimension of how democratic a regime is. The first contribution of our paper is to provide a general and tractable framework for the study of dynamic political selection issues and to provide a detailed characterization of the structure (and efficiency) of the selection of politicians under different political institutions (assuming sufficiently forward-looking players). Perfect democracy always ensures the emergence of the best (most competent) government. In contrast, under any other arrangement, incompetent and bad governments can emerge and persist despite the absence of information-related challenges to selecting good politicians. For example, even a small departure from perfect democracy, whereby only one member of the current government needs to consent to a new government, may make the worst possible government persist forever. The intuitive explanation for why even a small degree of incumbency veto power might lead to such outcomes is as follows: improvements away from a bad (or even the worst) government might lead to another potential government that is itself unstable and will open the way for a further round of changes. If this process ultimately leads to a government that does not have any common members with the initial government, then it may fail to get the support of any of the initial government members. 
In this case, the initial government survives even though it has a low, or even possibly the lowest, level of competence. This result provides a potential explanation for why many autocratic or semiautocratic regimes, including those currently in power in Iran, Russia, Venezuela, and Zimbabwe, resist the inclusion of “competent technocrats” in the government—because they are afraid that these technocrats can later become supporters of further reform, ultimately unseating even the most powerful current incumbents.4 4. For example, on Iranian politics and resistance to the inclusion of technocrats during Khomeini’s reign, see Menashri (2001), and more recently under


QUARTERLY JOURNAL OF ECONOMICS

Another important implication of these dynamic interactions in political selection is that, beyond perfect democracy, there is no obvious ranking among different shades of imperfect democracy. Any of these different regimes may lead to better governments in the long run. This result is consistent with the empirical findings in the literature that show no clear-cut relationship between democracy and economic performance (e.g., Barro [1996]; Przeworski and Limongi [1997]; Minier [1998]). In fact, under all regimes except perfect democracy, the competence of the equilibrium government and the success of the society depend strongly on the identity of the initial members of the government, which is in line with the emphasis in the recent political science and economics literatures on the role that leaders may play under weak institutions (see, for example, Brooker [2000] or Jones and Olken [2005], who show that the death of an autocrat leads to a significant change in growth and this does not happen with democratic leaders). Our second contribution relates to the study of institutional flexibility. For this purpose, we enrich the above-mentioned framework with shocks that change the competence of different types of governments (thus capturing potential changes in the needs of the society for different types of skills and expertise). Although a systematic analysis of this class of dynamic games is challenging, we provide a characterization of the structure of equilibria when stochastic shocks are sufficiently infrequent and players are sufficiently patient. Using this characterization, we show how the quality (competence level) of governments evolves in the presence of stochastic shocks and how this evolution is impacted by political institutions. 
Whereas without shocks a greater degree of democracy (fewer veto players) does not necessarily guarantee a better government, in the stochastic environment it leads to greater institutional flexibility and to better outcomes in the long run (in particular, a higher probability that the best government will be in power). Intuitively, this is because a regime with fewer veto players enables greater adaptability to changes in the environment (which alter the relative ranking of governments in terms of quality).5 At a slightly more technical level, this result reflects the fact that in a regime with limited incumbency veto power, there are "relatively few" other stable governments near a stable government, so a shock that destabilizes the current government likely leads to a big jump in competence.

Finally, we also show that in the presence of shocks, "royalty-like" nondemocratic regimes, where some individuals must always be in the government, may lead to better long-run outcomes than "junta-like" regimes, where a subset of the current members of the junta can block change (even though no specific member is essential). The royalty-like regimes might sometimes allow greater adaptation to change because one (or more) of the members of the initial government is secure in his or her position. In contrast, as discussed above, without such security the fear of further changes might block all competence-increasing reforms in government.6

Ahmadinejad's presidency, see Alfoneh (2008). On Russian politics under Vladimir Putin, see Baker and Glasser (2007). On Zimbabwe under Mugabe, see Meredith (2007).
5. The stochastic analysis also shows that random shocks to the identity of the members of the government may sometimes lead to better governments in the long run because they destroy stable incompetent governments. Besley (2005, p. 50) writes, "History suggests that four main methods of selection to political office are available: drawing lots, heredity, the use of force and voting." Our model suggests why, somewhat paradoxically, drawing lots, which was used in ancient Greece, might sometimes lead to better long-run outcomes than the alternatives.
6. This and several of the results for junta-like regimes discussed above contrast with the presumption in the existing literature that a greater number of veto players increases policy stability (e.g., Tsebelis [2002]). In particular, the presence of a veto player (or member of "royalty") would increase stability when players were not forward-looking or discounted the future very heavily, but we show that it can reduce stability when they are forward-looking and patient.

We now illustrate some of the basic ideas with a simple example.

EXAMPLE 1. Suppose that the society consists of n ≥ 6 individuals, and that any k = 3 individuals could form a government. A change in government requires both the support of the majority of the population and the consent of l = 1 member of the government, so that there is a "minimal" degree of incumbency veto power. Suppose that individual j has a level of competence γj, and order the individuals, without loss of generality, in descending order according to their competence, so γ1 > γ2 > · · · > γn. The competence of a government is the sum of the competences of its three members. Each individual obtains utility from the competence level of the government and also a large rent from being in office, so that each prefers to be in office regardless of the competence level of the government. Suppose also that individuals have a sufficiently high discount factor, so that the future matters a lot relative to the present.


It is straightforward to determine the stable governments that will persist and remain in power once formed. Evidently, {1, 2, 3} is a stable government, because it has the highest level of competence, so neither a majority of outsiders nor members of the government would like to initiate a change (some outsiders may want to initiate a change: for example, 4, 5, and 6 would prefer government {4, 5, 6}, but they do not have the power to enforce such a change). In contrast, governments of the form {1, i, j}, {i, 2, j}, and {i, j, 3} are unstable (for i, j > 3), which means that starting with these governments, there will necessarily be a change. In particular, in each of these cases, {1, 2, 3} will receive support both from one current member of government and from the rest of the population, who would be willing to see a more competent government. Consider next the case where n = 6 and suppose that the society starts with the government {4, 5, 6}. This is also a stable government, even though it is the lowest-competence government and thus the worst possible option for the society as a whole. This is because any change in government must result in a new government of one of the following three forms: {1, i, j}, {i, 2, j}, or {i, j, 3}. But we know that all of these types of governments are unstable. Therefore, any of the more competent governments will ultimately take the society to {1, 2, 3}, which does not include any of the members of the initial government. Because individuals are relatively patient, none of the initial members of the government would support (consent to) a change that will ultimately exclude them. As a consequence, the initial worst government persists forever. 
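The stability reasoning in this example can be checked by brute force. The sketch below is ours, not the paper's: the competence numbers are hypothetical (powers of two, so that all government competences differ, as Assumption 3 requires), and it uses the reduction that under majority rule with k < n/2, a more competent stable challenger automatically attracts a population majority, so the binding requirement is the consent of at least l incumbents.

```python
from itertools import combinations

def stable_governments(gamma, k, l):
    """Process k-member governments in decreasing order of competence
    (the sum of members' gammas); a government is stable iff no
    more-competent government that is itself stable shares at least
    l members with it."""
    n = len(gamma)
    govs = sorted(combinations(range(1, n + 1), k),
                  key=lambda G: -sum(gamma[j - 1] for j in G))
    stable = []
    for G in govs:
        if all(len(set(G) & set(H)) < l for H in stable):
            stable.append(G)
    return {frozenset(G) for G in stable}

# n = 6, l = 1: hypothetical competences gamma_1 > ... > gamma_6
gamma = [32, 16, 8, 4, 2, 1]
S = stable_governments(gamma, k=3, l=1)
print(sorted(sorted(G) for G in S))  # [[1, 2, 3], [4, 5, 6]]
```

Sorting once and sweeping downward works because the stability of a government depends only on more competent governments, mirroring the recursive characterization in Section III.B.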
Returning to our discussion of the unwillingness of certain governments to include skilled technocrats, this example shows why such a technocrat, for example individual 1, will not be included in the government {4, 5, 6}, even though he would potentially increase the quality and competence of the government substantially. One can further verify that {4, 5, 6} is also a stable government when l = 3, because in this case any change requires the support of all three members of government and none of them would consent to a change that removed him or her from the government. In contrast, under l = 2, {4, 5, 6} is not a stable government, and thus the quality of the government


is higher under intermediate incumbency veto power, l = 2, than under l = 1 or l = 3.

Now consider the same environment as above but with potential changes in the competences of the agents. For example, individual 4 may see an increase in competence, so that he or she becomes the third most competent agent (i.e., γ4 ∈ (γ3, γ2)). Suppose that shocks are sufficiently infrequent so that the stability of governments in periods without shocks is given by the same reasoning as in the nonstochastic case. Consider the situation starting with the government {4, 5, 6} and suppose l = 1. Then this government remains in power until the shock occurs. Nevertheless, the equilibrium government will eventually converge to {1, 2, 4}. At some point a shock will change the relative competences of agents 3 and 4, and the government {4, 5, 6} will become unstable; individual 4 will support the emergence of the government {1, 2, 4}, which now has the highest competence. In contrast, when l = 3, the ruling government remains in power even after the shock. This simple example thus illustrates how, even though a regime with fewer veto players does not ensure better outcomes in nonstochastic environments, it may provide greater flexibility and hence better long-run outcomes in the presence of shocks.

Our paper is related to several different literatures. Although much of the literature on political economy focuses on the role of political institutions in providing (or failing to provide) the right incentives to politicians (see, among others, Niskanen [1971]; Barro [1973]; Ferejohn [1986]; Shleifer and Vishny [1993]; Besley and Case [1995]; Persson, Roland, and Tabellini [1997]; and Acemoglu, Robinson, and Verdier [2004]), there is also a small (but growing) literature investigating the selection of politicians, most notably Banks and Sundaram (1998), Besley (2005), and Diermeier, Keane, and Merlo (2005).
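Returning to the stochastic variant of Example 1, both the nonmonotonicity in l and the effect of the shock to individual 4's competence can be verified numerically. The sketch below (hypothetical competence values; names are ours) encodes the example's stability rule: a government is stable iff no more-competent, already-stable government shares at least l of its members, which is the binding condition under majority rule with k < n/2.

```python
from itertools import combinations

def stable_governments(gamma, l, k=3):
    # A government is stable iff no more-competent, already-stable
    # government shares at least l members with it (Example 1's logic).
    govs = sorted(combinations(range(1, len(gamma) + 1), k),
                  key=lambda G: -sum(gamma[j - 1] for j in G))
    stable = []
    for G in govs:
        if all(len(set(G) & set(H)) < l for H in stable):
            stable.append(G)
    return {frozenset(G) for G in stable}

worst = frozenset({4, 5, 6})
gamma = [32, 16, 8, 4, 2, 1]   # hypothetical gamma_1 > ... > gamma_6
print([worst in stable_governments(gamma, l) for l in (1, 2, 3)])
# [True, False, True]: the worst government survives under l = 1 and
# l = 3, but not under the intermediate l = 2

# Shock: individual 4's competence rises into (gamma_3, gamma_2)
shocked = [32, 16, 8, 12, 2, 1]
print(worst in stable_governments(shocked, 1))  # False: destabilized, l = 1
print(worst in stable_governments(shocked, 3))  # True: still frozen, l = 3
```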
The main challenge facing the society and the design of political institutions in these papers is that the ability and motivations of politicians are not observed by voters or outside parties. Although such information-related selection issues are undoubtedly important, our paper focuses on the difficulty of ensuring that the “right” government is selected even when information is perfect and common. Also differently from these literatures, we emphasize the importance of institutional flexibility in the face of shocks.


Osborne and Slivinski (1996), Besley and Coate (1997, 1998), Bueno de Mesquita et al. (2003), Caselli and Morelli (2004), Messner and Polborn (2004), Mattozzi and Merlo (2008), Padro-i-Miquel (2007), and Besley and Kudamatsu (2009) provide alternative and complementary "theories of bad governments/politicians." For example, Bueno de Mesquita et al. (2003) emphasize the composition of the "selectorate," the group of players that can select governments, as an important factor leading to inefficient policies. In Bueno de Mesquita et al. (2003), Padro-i-Miquel (2007), and Besley and Kudamatsu (2009), the fear of future instability also contributes to the emergence of inefficient policies. Caselli and Morelli (2004) suggest that voters might be unwilling to replace a corrupt incumbent by a challenger whom they expect to be equally corrupt. Mattozzi and Merlo (2008) argue that more competent politicians have higher opportunity costs of entering politics.7 However, these papers do not develop the potential persistence of bad governments resulting from the dynamics of government formation and do not focus on the importance of institutional flexibility. We are also not aware of other papers providing a comparison of different political regimes in terms of the selection of politicians under nonstochastic and stochastic conditions.8 Also closely related are prior analyses of dynamic political equilibria in the context of club formation, as in Roberts (1999) and Barberà, Maschler, and Shalev (2001), as well as dynamic analyses of the choice of constitutions and equilibrium political institutions, as in Barberà and Jackson (2004), Messner and Polborn (2004), Acemoglu and Robinson (2000, 2006), and Lagunoff (2006). Our recent work, Acemoglu, Egorov, and Sonin (2008), provides a general framework for the analysis of the dynamics of constitutions, coalitions, and clubs. The current paper is a continuation of this line of research.
It differs from our previous work in a number of important dimensions. First, the focus here is on the substantive questions concerning the relationship between different 7. McKelvey and Riezman (1992) suggest that seniority rules in the Senate and the House create an endogenous advantage for the incumbent members, and current members of these bodies will have an incentive to introduce such seniority rules. 8. Our results are also related to recent work on the persistence of bad governments and inefficient institutions, including Acemoglu and Robinson (2008), Acemoglu, Ticchi, and Vindigni (2010), and Egorov and Sonin (2010). Acemoglu (2008) also emphasizes the potential benefits of democracy in the long run but through a different channel—because the alternative, oligarchy, creates entry barriers and sclerosis.


political institutions and the selection of politicians and governments, which is new, relatively unexplored, and (in our view) important. Second, this paper extends our previous work by allowing stochastic shocks, which enables us to investigate issues of institutional flexibility. Third, it involves a structure of preferences to which our previous results cannot be directly applied.9

The rest of the paper is organized as follows. Section II introduces the model. Section III introduces the concept of (Markov) political equilibrium, which allows a general and tractable characterization of equilibria in this class of games. Section IV provides our main results on the comparison of different regimes in terms of selection of governments and politicians. Section V extends the analysis to allow stochastic changes in the competences of members of the society and presents a comparison of different regimes in the presence of stochastic shocks. Section VI concludes. The Appendix contains the proofs of some of our main results; analyzes an extensive-form game with explicitly specified proposal and voting procedures, and shows the equivalence between the Markov perfect equilibria (MPEs) of this game and the (simpler) notion of political equilibrium we use in the text; and provides additional examples illustrating some of the claims we make in the text. Online Appendix B contains the remaining proofs.

II. MODEL

We consider a dynamic game in discrete time indexed by t = 0, 1, 2, . . . . The population is represented by the set I and consists of n < ∞ individuals. We refer to nonempty subsets of I as coalitions and denote the set of coalitions by C. We also designate a subset of coalitions G ⊂ C as the set of feasible governments. For example, the set of feasible governments could consist of all groups of individuals of size k0 (for some integer k0) or all groups of individuals of size greater than k1 and less than some other integer k2.
To simplify the discussion, we define k̄ = max_{G∈G} |G|, so that k̄ is the upper bound on the size of any feasible government: that is, for any G ∈ G, |G| ≤ k̄. It is natural to presume that k̄ < n/2.

9. In particular, the results in Acemoglu, Egorov, and Sonin (2008) apply under a set of acyclicity conditions. Such acyclicity does not hold in the current paper (see Online Appendix B). This makes the general characterization of the structure of equilibria both more challenging and of some methodological interest.

In each period, the society is ruled by one of the feasible governments Gt ∈ G. The initial government G0 is given as part of the description of the game, and Gt for t > 0 is determined in equilibrium as a result of the political process described below. The government in power at any date affects three aspects of the society:

1. It influences collective utilities (for example, by providing public goods or influencing how competently the government functions).
2. It determines individual utilities (members of the government may receive additional utility because of rents of being in office or corruption).
3. It indirectly influences the future evolution of governments by shaping the distribution of political power in the society (for example, by creating incumbency advantage in democracies or providing greater decision-making power or veto rights to members of the government under alternative political institutions).

We now describe each of these in turn. The influence of the government on collective utilities is modeled via its competence. In particular, at each date t, there exists a function Γ^t : G → R designating the competence of each feasible government G ∈ G at that date. We refer to Γ^t_G ∈ R as government G's competence, with the convention that higher values correspond to greater competence. In Section IV, we will assume that each individual has a certain level of competence or ability, and that the competence of a government is a function of the abilities of its members. For now, this additional assumption is not necessary. Note also that the function Γ^t depends on time. This generality is introduced to allow for changes in the environment (in particular, changes in the relative competences of different individuals and governments).

Individual utilities are determined by the competence of the government that is in power at that date and by whether the individual in question is part of the government. More specifically, each individual i ∈ I at time τ has discounted (expected) utility given by (1)

Uiτ = E [ ∑_{t=τ}^{∞} β^(t−τ) uit ],


where β ∈ (0, 1) is the discount factor and uit is individual i's stage payoff, given by (2)

uit = wi(Gt, Γ^t_{Gt}) = wi(Gt),

where in the second equality we suppress dependence on Γ^t_{Gt} to simplify notation; we will do this throughout unless special emphasis is necessary. Throughout, we impose the following assumptions on wi.

ASSUMPTION 1. The function wi satisfies the following properties:

1. For each i ∈ I and any G, H ∈ G such that Γ^t_G > Γ^t_H: if i ∈ G or i ∉ H, then wi(G) > wi(H).
2. For any G, H ∈ G and any i ∈ G \ H, wi(G) > wi(H).

Part 1 of this assumption is a relatively mild restriction on payoffs. It implies that, all else equal, more competent governments give higher stage payoffs. In particular, if an individual belongs to both governments G and H, and G is more competent than H, then he or she prefers G. The same conclusion also holds when the individual is not a member of either of these two governments or is only a member of G (and not of H). Therefore, this part of the assumption implies that the only situation in which an individual may prefer a less competent government to a more competent one is when he or she is a member of the former but not of the latter. This simply captures the presence of rents from holding office or additional income from being in government due to higher salaries or corruption. The interesting interactions in our setup result from the "conflict of interest": individuals prefer to be in the government even when this does not benefit the rest of the society. Part 2 of the assumption strengthens the first part and imposes the condition that this conflict of interest is always present; that is, individuals receive higher payoffs from governments that include them than from those that exclude them (regardless of the competence levels of the two governments). We impose both parts of this assumption throughout.

It is important to note that Assumption 1 implies that all voters who are not part of the government care about a one-dimensional object, government competence; this feature simplifies the analysis considerably. Nevertheless, the tractability of our framework makes it possible to enrich this environment by allowing other sources of disagreement or conflict of interest among voters, and we return to this issue in the Conclusions.


EXAMPLE 2. As an example, suppose that the competence of government G, Γ_G, is the amount of public good produced in the economy under feasible government G, and (3)

wi(G) = vi(Γ_G) + bi I_{i∈G},

where vi : R → R is a strictly increasing function (for each i ∈ I) corresponding to the utility from the public good for individual i, bi is a measure of the rents that individual i obtains from being in office, and I_X is the indicator of event X. If bi ≥ 0 for each i ∈ I, then (3) satisfies part 1 of Assumption 1. In addition, if bi is sufficiently large for each i, then each individual prefers to be a member of the government, even if this government has a very low level of competence; thus part 2 of Assumption 1 is also satisfied.

Finally, the government in power influences the determination of future governments whenever consent of some current government members is necessary for change. We represent the set of individuals (regular citizens and government members) who can, collectively, induce a change in government by specifying the set of winning coalitions, WG, which is a function of the current government G (for each G ∈ G). This is an economical way of summarizing the relevant information, because the set of winning coalitions is precisely the set of subsets of the society that are able to force (or to block) a change in government. We impose only a minimal amount of structure on the set of winning coalitions.

ASSUMPTION 2. For any feasible government G ∈ G, WG is given by WG = {X ∈ C : |X| ≥ mG and |X ∩ G| ≥ lG}, where lG and mG are integers satisfying 0 ≤ lG ≤ |G| ≤ k̄ < mG ≤ n − k̄ (recall that k̄ is the maximal size of the government and n is the size of the society).

The restrictions imposed in Assumption 2 are intuitive. In particular, they state that a new government can be instituted if it receives a sufficient number of votes from the entire society (mG total votes) and if it receives support from some subset of the members of the current government (lG of the current government members need to support such a change). This definition allows lG to be any number between 0 and |G|.
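Assumption 2 amounts to a two-line predicate. In the sketch below (names and numerical thresholds are ours, chosen for illustration), X is a candidate coalition and G the current government:

```python
def is_winning(X, G, l_G, m_G):
    """Assumption 2: X is a winning coalition against government G iff it
    contains at least m_G individuals in total and at least l_G members
    of the current government G."""
    return len(X) >= m_G and len(X & G) >= l_G

# Illustration: n = 6, government {4, 5, 6}, majority threshold m_G = 4
G = {4, 5, 6}
print(is_winning({1, 2, 3, 4}, G, l_G=1, m_G=4))     # True: insider 4 consents
print(is_winning({1, 2, 3}, G, l_G=1, m_G=4))        # False: too small, no insider
print(is_winning({1, 2, 3, 5, 6}, G, l_G=3, m_G=4))  # False: only two insiders
```

Setting l_G = 0 recovers perfect democracy (no insider consent needed), while l_G = |G| requires unanimity among incumbents, matching the two extremes discussed below.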
One special feature of Assumption 2 is that it does not relate the number of veto players in the current government, lG , to the total number of individuals in


the society who wish to change the government, mG. This aspect of Assumption 2 can be relaxed without affecting our general characterization; we return to a discussion of this issue in the Conclusions. Given this notation, the case where there is no incumbency veto power, lG = 0, can be thought of as perfect democracy, where current members of the government have no special power. The case where lG = |G| can be thought of as extreme dictatorship, where unanimity among government members is necessary for any change. Between these extremes are imperfect democracies (or less strict forms of dictatorships), which may arise either because there is some form of (strong or weak) incumbency veto power in democracy or because current government (junta) members are able to block the introduction of a new government. In what follows, one might wish to interpret lG as an inverse measure of the degree of democracy, though naturally this only captures one dimension of democratic regimes in practice.

Note also that Assumption 2 imposes some mild restrictions on mG. In particular, fewer than k̄ individuals are insufficient for a change to take place. This ensures that a rival government cannot take power without any support from other individuals (recall that k̄ denotes the maximum size of the government, so the rival government can have no more than k̄ members), and mG ≤ n − k̄ individuals are sufficient to implement a change provided that lG members of the current government are among them. For example, these requirements are naturally met when k̄ < n/2 and mG = ⌊(n + 1)/2⌋ (i.e., majority rule).10

10. Recall also that ⌊x⌋ denotes the integer part of a real number x.

In addition to Assumptions 1 and 2, we also impose the following genericity assumption, which ensures that different governments have different competences. This assumption simplifies the notation and can be made without much loss of generality, because if it were not satisfied for a society, any small perturbation of competence levels would restore it.

ASSUMPTION 3. For any t ≥ 0 and any G, H ∈ G such that G ≠ H, Γ^t_G ≠ Γ^t_H.

III. POLITICAL EQUILIBRIA IN NONSTOCHASTIC ENVIRONMENTS

In this section, we focus on nonstochastic environments, where Γ^t = Γ (or Γ^t_G = Γ_G for all G ∈ G). For these environments,


we introduce our equilibrium concept, (Markov) political equilibrium, and show that equilibria have a simple recursive characterization.11 We return to the more general stochastic environments in Section V. III.A. Political Equilibrium Our equilibrium concept, (Markov) political equilibrium, imposes that only transitions from the current government to a new government that increase the discounted utility of the members of a winning coalition will take place; and if no such transition exists, the current government will be stable (i.e., it will persist in equilibrium). The qualifier “Markov” is added because this definition implicitly imposes that transitions from the current to a new government depend on the current government—not on the entire history. To introduce this equilibrium concept more formally, let us first define the transition rule φ : G → G, which maps each feasible government G in power at time t to the government that would emerge in period t + 1.12 Given φ, we can write the discounted utility implied by (1) for each individual i ∈ I starting from the current government G ∈ G recursively as Vi (G | φ), given by (4)

Vi (G | φ) = wi (G) + βVi (φ(G) | φ) for all G ∈ G.

Intuitively, starting from G ∈ G, individual i ∈ I receives a current payoff of wi (G). Then φ (uniquely) determines the next period’s government φ(G), and thus the continuation value of this individual, discounted to the current period, is βVi (φ(G) | φ). A government G is stable given mapping φ if φ(G) = G. In addition, we say that φ is acyclic if for any (possibly infinite) chain H1 , H2 , . . . ⊂ G such that Hk+1 = φ(Hk), and any a < b < c, if Ha = Hc then Ha = Hb = Hc . Given (4), the next definition introduces the notion of a political equilibrium, which will be represented by the mapping φ provided that two conditions are met. 11. Throughout, we refer to this equilibrium concept as “political equilibrium” or simply as “equilibrium.” We do not use the acronym MPE, which will be used for the Markov perfect equilibrium of a noncooperative game in the Appendix. 12. In principle, φ could be set-valued, mapping from G into P(G) (the power set of G), but our analysis below shows that, thanks to Assumption 3, its image is always a singleton (i.e., it is a “function” rather than a “correspondence,” and also by implication, it is uniquely defined). We impose this assumption to simplify the notation.
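Given a transition rule φ, the recursion (4) can be evaluated directly: follow φ until it reaches a fixed point, at which point the remaining value is a geometric series. A minimal sketch (payoffs and transition map are hypothetical; it assumes φ eventually reaches a stable government, as acyclicity guarantees):

```python
def value(i, G, phi, w, beta):
    """V_i(G | phi) from equation (4): accumulate discounted stage payoffs
    along the phi-trajectory; once phi(H) = H, the tail is the geometric
    sum w_i(H) / (1 - beta)."""
    total, disc = 0.0, 1.0
    while phi[G] != G:
        total += disc * w(i, G)
        disc *= beta
        G = phi[G]
    return total + disc * w(i, G) / (1 - beta)

# Two-government illustration: 'A' transitions to the stable 'B'
phi = {"A": "B", "B": "B"}
w = lambda i, G: {"A": 1.0, "B": 2.0}[G]   # hypothetical stage payoffs
print(value(0, "A", phi, w, beta=0.5))      # 1 + 0.5 * 2 / (1 - 0.5) = 3.0
```

This is the computation implicit in Definition 1 below: players compare such discounted values, not one-period payoffs, when deciding whether to support a transition.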


DEFINITION 1. A mapping φ : G → G is a (Markov) political equilibrium if for any G ∈ G, the following two conditions are satisfied:

i. either the set of players who prefer φ(G) to G (in terms of discounted utility) forms a winning coalition, that is, S = {i ∈ I : Vi(φ(G) | φ) > Vi(G | φ)} ∈ WG (or, equivalently, |S| ≥ mG and |S ∩ G| ≥ lG); or else φ(G) = G;

ii. there is no alternative government H ∈ G that is preferred both to a transition to φ(G) and to staying in G permanently, that is, there is no H such that

SH = {i ∈ I : Vi(H | φ) > Vi(φ(G) | φ)} ∈ WG

and

S′H = {i ∈ I : Vi(H | φ) > wi(G)/(1 − β)} ∈ WG

(alternatively, for any alternative H, either |SH| < mG, or |SH ∩ G| < lG, or |S′H| < mG, or |S′H ∩ G| < lG).

This definition states that a mapping φ is a political equilibrium if it maps the current government G to alternative φ(G), which (unless it coincides with G) must be preferred to G (taking continuation values into account) by a sufficient majority of the population and a sufficient number of current government members (in order not to be blocked). Note that in part (i), the set S can be equivalently written as S = {i ∈ I : Vi (φ(G) | φ) > wi (G)/(1 − β)}, because if this set is not a winning coalition, then φ(G) = G and thus Vi (G | φ) = wi (G)/(1 − β). Part (ii) of the definition requires that there does not exist another alternative H that would have been a “more preferable” transition; that is, there should be no H that is preferred both to a transition to φ(G) and to staying in G forever by a sufficient majority of the population and a sufficient number of current government members. The latter condition is imposed, because if there exists a winning coalition that prefers H to a transition to φ(G) but there is no winning coalition that prefers H to staying in G forever, then at each stage a move to H can be blocked. We use the definition of political equilibrium in Definition 1 in this and the next section. The advantage of this definition is its simplicity. A disadvantage is that it does not explicitly specify how offers for different types of transitions are made and the exact sequences of events at each stage. In the Appendix, we describe an infinite-horizon extensive-form game, where there is an explicit sequence in which proposals are made, votes are cast,

1526

QUARTERLY JOURNAL OF ECONOMICS

and transitions take place. We then characterize the MPEs of this dynamic game and show that they are equivalent to political equilibria as defined in Definition 1. Briefly, in this extensive-form game, any given government can be in either a sheltered or an unstable state. Sheltered governments cannot be challenged but become unstable with some probability. When the incumbent government is unstable, all individuals (according to a prespecified order) propose possible alternative governments. Primaries across these governments determine a challenger government, and then a vote between this challenger and the incumbent government determines whether there is a transition to a new government (depending on whether those in support of the challenger form a winning coalition according to Assumption 2). New governments start out as unstable, and with some probability become sheltered. All votes are sequential. We prove that for a sufficiently high discount factor, the MPE of this game does not depend on the sequence in which proposals are made, the protocols for primaries, or the sequence in which votes are cast, and coincides with the political equilibria described by Definition 1. This result justifies our focus on the much simpler notion of political equilibrium in the text. The fact that new governments start out as unstable provides a justification for part (ii) of Definition 1, that there should not exist another alternative H that is "more preferable" than φ(G) and than staying in G forever; otherwise there would be an immediate transition to H.

III.B. General Characterization

We now prove the existence and provide a characterization of political equilibria. We start with a recursive characterization of the mapping φ described in Definition 1. Let us enumerate the elements of the set G as {G1, G2, . . . , G|G|} such that ΓGx > ΓGy whenever x < y. With this enumeration, G1 is the most competent ("best") government, whereas G|G| is the least competent government.
In view of Assumption 3, this enumeration is well defined and unique. Now, suppose that for some q > 1, we have defined φ for all Gj with j < q. Define the set

(5)  Mq ≡ { j : 1 ≤ j < q, {i ∈ I : wi(Gj) > wi(Gq)} ∈ WGq , and φ(Gj) = Gj }.

Note that this set depends simply on stage payoffs in (2), not on the discounted utilities defined in (4), which are “endogenous” objects.

POLITICAL SELECTION AND BAD GOVERNMENTS

1527

This set can thus be computed easily from the primitives of the model (for each q). Given this set, let the mapping φ be

(6)  φ(Gq) = Gq if Mq = ∅;  φ(Gq) = Gmin{ j∈Mq } if Mq ≠ ∅.

Because the set Mq is well defined, the mapping φ is also well defined, and by construction it is single-valued. Theorems 1 and 2 next show that, for sufficiently high discount factors, this mapping constitutes the unique acyclic political equilibrium and that, under additional mild conditions, it is also the unique political equilibrium (even considering possible cyclic equilibria). THEOREM 1. Suppose that Assumptions 1–3 hold and let φ : G → G be as defined in (6). Then there exists β0 < 1 such that for any discount factor β > β0 , φ is the unique acyclic political equilibrium. Proof. See the Appendix.
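Since (5) and (6) define φ by recursion down the competence ranking, the construction can be carried out mechanically. The following is a minimal computational sketch (the function name and the 5-person example are our own); it assumes "office rent" stage payoffs, under which an individual always prefers a government that includes him and, among governments that exclude him, prefers the more competent one, so that the supporters of a more competent Gj over Gq are exactly the individuals who are not ousted, I \ (Gq \ Gj):

```python
from itertools import combinations

def political_equilibrium(n, k, l, m, competence):
    """Compute phi from the recursion (5)-(6).

    competence maps each k-subset of {1, ..., n} (a sorted tuple) to a
    distinct value Gamma_G (Assumption 3). Stage payoffs are assumed
    here to take an 'office rent' form: an individual always prefers a
    government that includes him, and otherwise prefers the more
    competent government.
    """
    govs = sorted(combinations(range(1, n + 1), k),
                  key=lambda G: competence[G], reverse=True)
    phi = {}
    for q, Gq in enumerate(govs):
        # (5): stable, more competent G_j preferred to G_q by a winning
        # coalition: at least m supporters, at least l of them in G_q.
        Mq = [j for j in range(q)
              if phi[govs[j]] == govs[j]
              and n - len(set(Gq) - set(govs[j])) >= m
              and len(set(Gq) & set(govs[j])) >= l]
        # (6): move to the best such government, if any; otherwise stay.
        phi[Gq] = govs[min(Mq)] if Mq else Gq
    return phi

# Hypothetical example: n = 5, k = 2, l = 1, m = 3, with power-of-two
# abilities so that all government competences are distinct.
gamma = {i: 2 ** (5 - i) for i in range(1, 6)}
comp = {G: gamma[G[0]] + gamma[G[1]] for G in combinations(range(1, 6), 2)}
phi = political_equilibrium(5, 2, 1, 3, comp)
print(phi[(4, 5)])  # -> (3, 4)
print([G for G in phi if phi[G] == G])  # -> [(1, 2), (3, 4)]
```

In this toy society the stable governments are {1, 2} and {3, 4}: every government containing individual 1 or 2 transits to {1, 2}, so {3, 4} becomes stable by exclusion.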

Let us now illustrate the intuition for why the mapping φ constitutes a political equilibrium. Recall that G1 is the most competent ("best") government. It is clear that we must have φ(G1) = G1, because all members of the population who are not in G′ will prefer G1 to any other G′ ∈ G (from Assumption 1). Assumption 2 then ensures that there will not be a winning coalition in favor of a permanent move to G′. However, G′ itself may not persist, and it may eventually lead to some alternative government G′′ ∈ G. But in this case, we can apply this reasoning to G′′ instead of G′, and thus the conclusion φ(G1) = G1 applies. Next suppose we start with government G2 in power. The same argument applies if G′ is any one of G3, G4, . . . , G|G|. One of these may eventually lead to G1; thus for sufficiently high discount factors, a sufficient majority of the population may support a transition to such a G′ in order eventually to reach G1. However, discounting also implies that in this case, a sufficient majority would also prefer a direct transition to G1 to this dynamic path (recall part (ii) of Definition 1). So the relevant choice for the society is between G1 and G2. In this comparison, G1 will be preferred if it has sufficiently many supporters, that is, if the set of individuals preferring G1 to G2 is a winning coalition within G2, or more formally if {i ∈ I : wi(G1) > wi(G2)} ∈ WG2.


If this is the case, φ(G2) = G1; otherwise, φ(G2) = G2. This is exactly what the function φ defined in (6) stipulates. Now let us start from government G3. We then only need to consider the choice among G1, G2, and G3. To move to G1, it suffices that a winning coalition within G3 prefers G1 to G3.13 However, whether the society will transition to G2 depends on the stability of G2. In particular, we may have a situation in which G2 is not a stable government, which, by necessity, implies that φ(G2) = G1. Then a transition to G2 will lead to a permanent transition to G1 in the next period. However, this sequence may be undesirable for some of those who prefer to move to G2. In particular, there may exist a winning coalition in G3 that prefers to stay in G3 rather than to transit permanently to G1 (and as a consequence, there is no winning coalition that prefers such a transition), even though there also exists a winning coalition in G3 that would have preferred a permanent move to G2. Writing this more explicitly, we may have {i ∈ I : wi(G2) > wi(G3)} ∈ WG3, but {i ∈ I : wi(G1) > wi(G3)} ∉ WG3. If so, the transition from G3 to G2 may be blocked with the anticipation that it will lead to G1, which does not receive the support of a winning coalition within G3. This reasoning illustrates that for a transition to take place, not only should the target government be preferred to the current one by a winning coalition (starting from the current government), but also the target government should be "stable," that is, φ(G′) = G′. This is exactly the requirement in (6). In this light, the intuition for the mapping φ and thus for Theorem 1 is that a government G will persist in equilibrium (will be stable) if there does not exist another stable government receiving support from a winning coalition (a sufficient majority of the population and the required number of current members of government).
Theorem 1 states that φ in (6) is the unique acyclic political equilibrium. However, it does not rule out cyclic equilibria. We provide an example of a cyclic equilibrium in Example 11 in the 13. If some winning coalition also prefers G2 to G3 , then G1 should still be chosen over G2 , because only members of G2 who do not belong to G1 prefer G2 to G1 , and Assumption 2 ensures that those preferring G1 over G2 (starting in G3 ) also form a winning coalition. Then a transition to G2 is ruled out by part (ii) of Definition 1.


Appendix. Cyclic equilibria are unintuitive and "fragile." We next show that they can also be ruled out under a variety of relatively weak assumptions. The next theorem thus strengthens Theorem 1 so that φ in (6) is the unique equilibrium (among both cyclic and acyclic ones).

THEOREM 2. The mapping φ defined in (6) is the unique political equilibrium (and hence, in the light of Theorem 1, any political equilibrium is acyclic) if any of the following conditions holds:
1. For any G ∈ G, |G| = k, lG = l, and mG = m for some k, l, and m.
2. For any G ∈ G, lG ≥ 1.
3. For any collection of different feasible governments H1, . . . , Hq ∈ G (for q ≥ 2) and for all i ∈ I, we have wi(H1) ≠ (Σp=1,...,q wi(Hp))/q.
4. θ > ε · |G|, where θ ≡ min{i∈I and G,H∈G : i∈G\H} {wi(G) − wi(H)} and ε ≡ max{i∈I and G,H∈G : i∈G∩H} {wi(G) − wi(H)}.
Proof. See Online Appendix B.

This theorem states four relatively mild conditions under which there are no cyclic equilibria (thus making φ in (6) the unique equilibrium). First, if all feasible governments have the same size, k, the same degree of incumbency veto power, l, and the same threshold for the required number of total votes for change, m, then all equilibria must be acyclic and thus φ in (6) is the unique political equilibrium. Second, the same conclusion applies if we always need the consent of at least one member of the current government for a transition to a new government. These two results imply that cyclic equilibria are only possible if, starting from some governments, there is no incumbency veto power, and either the degree of incumbency veto power or the vote threshold differs across governments. The third part of the theorem shows that there are also no cyclic political equilibria under a mild restriction on payoffs (which is a slight strengthening of Assumption 3 and holds generically,14 meaning that if it did not hold, a small perturbation of payoff functions would restore it). Finally, the fourth part of the theorem provides a condition on preferences that also rules out cyclic equilibria. In particular, this condition states that if each individual receives sufficiently high

14. This requirement is exactly the same as Assumption 3′, which we impose in the Appendix in the analysis of the extensive-form game.


utility from being in government (greater than θ) and does not care much about the composition of the rest of the government (the difference in his or her utility between any two governments including him or her is always less than ε), then all equilibria must be acyclic. In the Appendix, we show (Example 11) how a cyclic political equilibrium is possible if none of the four sufficient conditions in Theorem 2 holds.

IV. CHARACTERIZATION OF NONSTOCHASTIC TRANSITIONS

IV.A. Main Results

We now compare different political regimes in terms of their ability to select governments with high levels of competence. To simplify the exposition and focus on the more important interactions, we assume that all feasible governments have the same size, k ∈ N, where k < n/2. More formally, let us define Ck = {Y ∈ C : |Y| = k}. Then G = Ck. In addition, we assume that for any G ∈ G, lG = l ∈ N and mG = m ∈ N, so that the set of winning coalitions can be simply expressed as (7)

WG = {X ∈ C : |X| ≥ m and |X ∩ G| ≥ l},

where 0 ≤ l ≤ k < m ≤ n − k. If l = 0, then all individuals have equal weight and there is no incumbency veto power; thus we have a perfect democracy. In contrast, if l > 0, the consent of some of the members of the government is necessary for a change; thus there is some incumbency veto power. We have thus strengthened Assumption 2 to the following. ASSUMPTION 2′. We have that G = Ck, and that there exist integers l and m such that the set of winning coalitions is given by (7). In view of part 1 of Theorem 2, Assumption 2′ ensures that the acyclic political equilibrium φ given by (6) is the unique equilibrium; naturally, we will focus on this equilibrium throughout the rest of the analysis. In addition, given this additional structure, the mapping φ can be written in a simpler form. Recall that governments are still ranked according to their level of competence, so that G1 denotes the most competent government. Then

POLITICAL SELECTION AND BAD GOVERNMENTS

1531

we have

(8)  Mq = { j : 1 ≤ j < q, |Gj ∩ Gq| ≥ l, and φ(Gj) = Gj },

and, as before,

(9)  φ(Gq) = Gq if Mq = ∅;  φ(Gq) = Gmin{ j∈Mq } if Mq ≠ ∅.

Naturally, the mapping φ is again well defined and unique. Finally, let us also define D = {G ∈ G : φ(G) = G} as the set of stable governments (the fixed points of the mapping φ). If G ∈ D, then φ(G) = G, and this government will persist forever if it is the initial government of the society. We now investigate the structure of stable governments and how it changes as a function of the underlying political institutions—in particular, the extent of incumbency veto power, l. Throughout this section, we assume that Assumptions 1, 2′, and 3 hold, and we do not add these qualifiers to any of the propositions to economize on space. Our first proposition provides an important technical result (part 1). It then uses this result to show that perfect democracy (l = 0) ensures the emergence of the best (most competent) government, but any departure from perfect democracy destroys this result and enables the emergence of highly incompetent/inefficient governments. It also shows that extreme dictatorship (l = k) makes all initial governments stable, regardless of how low their competence may be.

PROPOSITION 1. The set of stable feasible governments D satisfies the following properties.
1. If G, H ∈ D and |G ∩ H| ≥ l, then G = H. In other words, any two distinct stable governments may have at most l − 1 common members.
2. Suppose that l = 0. Then D = {G1}. In other words, starting from any initial government, the society will transit to the most competent government.
3. Suppose l ≥ 1. Then there are at least two stable governments; that is, |D| ≥ 2. Moreover, the least competent governments may be stable.


4. Suppose l = k. Then D = G, so every feasible government is stable. Proof. See Online Appendix B.
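Parts 2 through 4 can be illustrated computationally. Under Assumption 2′, and assuming office-rent stage payoffs for which the population threshold m never binds (at least n − k ≥ m citizens always favor the more competent government), the recursion reduces to a simple scan: a government is stable iff no more competent stable government shares at least l of its members. A sketch on a hypothetical 6-person society with k = 3 (function name and abilities are our own):

```python
from itertools import combinations

def stable_set(n, k, l, gamma):
    # A government is stable iff no more competent stable government
    # shares at least l of its members (the vote threshold m is assumed
    # slack, as it is for office-rent payoffs with m <= n - k).
    govs = sorted(combinations(range(1, n + 1), k),
                  key=lambda G: sum(gamma[i] for i in G), reverse=True)
    D = []
    for G in govs:
        if not any(len(set(G) & set(H)) >= l for H in D):
            D.append(G)
    return D

gamma = {i: 2 ** (6 - i) for i in range(1, 7)}  # distinct-sum abilities
print(stable_set(6, 3, 0, gamma))       # l = 0: only the best government (part 2)
print(stable_set(6, 3, 1, gamma))       # l = 1: the worst government joins (part 3)
print(len(stable_set(6, 3, 3, gamma)))  # l = k: all 20 governments (part 4)
```

With l = 0 the test |G ∩ H| ≥ 0 is vacuous, so only {1, 2, 3} survives; with l = 1 the disjoint government {4, 5, 6}, here the least competent, is also stable; with l = k no distinct government can ever block, so every government is a fixed point.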

Proposition 1 shows the fundamental contrast between perfect democracy, where incumbents have no veto power, and other political institutions, which provide some additional power to “insiders” (current members of the government). Perfect democracy leads to the formation of the best government. With any deviation from perfect democracy, there will necessarily exist at least one other stable government (by definition less competent than the best), and even the worst government might be stable. The next example supplements Example 1 from the Introduction by showing a richer environment in which the least competent government is stable. EXAMPLE 3. Suppose n = 9, k = 3, l = 1, and m = 5, so that a change in government requires support from a simple majority of the society, including at least one member of the current government. Suppose that I = {1, 2, . . . , 9}, and that stage payoffs are given by (3) in Example 2. Assume also that {i1 ,i2 ,i3 } = 1000 − 100i1 − 10i2 − i3 (for i1 < i2 < i3 ). This implies that {1, 2, 3} is the most competent government, and is therefore stable. Any other government that includes 1 or 2 or 3 is unstable. For example, the government {2, 5, 9} will transit to {1, 2, 3}, as all individuals except 5 and 9 prefer the latter. However, government {4, 5, 6} is stable: any government that is more competent must include 1 or 2 or 3, and therefore either is {1, 2, 3} or will immediately transit to {1, 2, 3}, which means that any such transition will not receive support from any of the members of {4, 5, 6}. Now, proceeding inductively, we find that any government other than {1, 2, 3} and {4, 5, 6} that contains at least one individual 1, 2, . . . , 6 is unstable. Consequently, government {7, 8, 9}, which is the least competent government, is stable. 
Proposition 1 establishes that under any regime other than perfect democracy, there will necessarily exist stable inefficient/ incompetent governments and these may in fact have quite low levels of competence. It does not, however, provide a characterization of when highly incompetent governments will be stable. We next provide a systematic answer to this question, focusing on societies with large numbers of individuals (i.e., n large).


Before doing so, we introduce an assumption that will be used in the third part of the next proposition and in later results. In particular, in what follows we will sometimes suppose that each individual i ∈ I has a level of ability (or competence) given by γi ∈ R+ and that the competence of the government is a strictly increasing function of the abilities of its members. This is stated more formally in the next assumption.

ASSUMPTION 4. Suppose G ∈ G, and individuals i, j ∈ I are such that i ∈ G, j ∉ G, and γi ≥ γj. Then ΓG ≥ Γ(G\{i})∪{j}.

The canonical form of the competence function consistent with Assumption 4 is

(10)  ΓG = Σi∈G γi,

though for most of our analysis, we do not need to impose this specific functional form. Assumption 4 is useful because it enables us to rank individuals in terms of their "abilities." This ranking is strict, because Assumptions 3 and 4 together imply that γi ≠ γj whenever i ≠ j. When we impose Assumption 4, we also enumerate individuals according to their abilities, so that γi > γj whenever i < j. The next proposition shows that for societies above a certain threshold of size (as a function of k and l), there always exist stable governments that contain no member of the ideal government and no member of any group of certain prespecified sizes (thus, no member of a group that would generate a range of potentially high-competence governments). Then, under Assumption 4, it extends this result, providing a bound on the percentile of the ability distribution such that there exist stable governments that do not include any individuals with abilities above this percentile.

PROPOSITION 2. Suppose l ≥ 1 (and, as before, that Assumptions 1, 2′, and 3 hold).
1. If

(11)  n ≥ 2k + k(k − l) · (k − 1)!/((l − 1)!(k − l)!),

then there exists a stable government G ∈ D that contains no members of the ideal government G1 .


2. Take any x ∈ N. If

(12)  n ≥ k + x + x(k − l) · (k − 1)!/((l − 1)!(k − l)!),

then for any set of individuals X with |X| ≤ x, there exists a stable government G ∈ D such that X ∩ G = ∅ (so no member of set X belongs to G).
3. Suppose in addition that Assumption 4 holds and let

(13)  ρ = 1 / (1 + (k − l)(k − 1)!/((l − 1)!(k − l)!)).

Then there exists a stable government G ∈ D that does not include any of the ρn highest-ability individuals. Proof. See Online Appendix B.
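Formula (13) is easy to tabulate; note that (k − 1)!/((l − 1)!(k − l)!) is the binomial coefficient C(k − 1, l − 1). A quick sketch (hypothetical helper name) for k = 4 shows the dependence on l:

```python
from math import factorial

def rho(k, l):
    # Equation (13); (k-1)!/((l-1)!(k-l)!) is the binomial
    # coefficient C(k-1, l-1).
    return 1 / (1 + (k - l) * factorial(k - 1)
                / (factorial(l - 1) * factorial(k - l)))

print([round(rho(4, l), 3) for l in (1, 2, 3, 4)])  # -> [0.25, 0.143, 0.25, 1.0]
```

For k = 4 the excludable fraction ρ is 1/4 at l = 1 and l = 3, only 1/7 at l = 2, and 1 at l = k, so ρ is nonmonotonic in l and smallest at the intermediate value l = k/2.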

Let us provide the intuition for part 1 of Proposition 2 when l = 1. Recall that G1 is the most competent government. Let G be the most competent government among those that do not include members of G1 (such a G exists, because n > 2k by assumption). In this case, Proposition 2 implies that G is stable; that is, G ∈ D. The reason is that if φ(G) = H ≠ G, then ΓH > ΓG, and therefore H ∩ G1 contains at least one element by construction of G. But then φ(H) = G1, as implied by (9). Intuitively, if l = 1, then once the current government contains a member of the most competent government G1, this member will consent to (support) a transition to G1, which will also receive the support of the population at large. He or she can do so because G1 is stable, and thus there is no threat that further rounds of transitions will harm him or her. But then, as in Example 1 in the Introduction, G itself becomes stable, because any reform away from G would take the society to an unstable government. Part 2 of the proposition has a similar intuition, but it states the stronger result that one can choose any subset of the society with size not exceeding the threshold defined in (12) such that there exist stable governments that do not include any member of this subset (which may be taken to include several of the most competent governments).15 Finally, part 3, which follows immediately from part 2 under Assumption 4, further strengthens both parts 1 and 2 of this proposition and also parts 3 and

15. Note that the lower bound on n in part 2 of Proposition 2 is O(x), meaning that increasing x does not require an exponential increase in the size of the population n for Proposition 2 to hold.


4 of Proposition 1: it shows that there exist stable governments that do not include a certain fraction of the highest-ability individuals. Interestingly, this fraction, given in (13), is nonmonotonic in l, reaching its minimum at l = k/2, that is, at an intermediate level of incumbency veto power. This partly anticipates the results pertaining to the relative success of different regimes in selecting more competent governments, which we discuss in the next proposition. Before providing a more systematic analysis of the relationship between political regimes and the quality of governments, we first extend Example 1 from the Introduction to show that, starting with the same government, the long-run equilibrium government may be worse when there is less incumbency veto power (as long as we are not in a perfect democracy).

EXAMPLE 4. Take the setup from Example 3 (n = 9, k = 3, l = 1, and m = 5), and suppose that the initial government is {4, 5, 6}. As we showed there, government {4, 5, 6} is stable and will therefore persist. Suppose, however, that l = 2 instead. In that case, {4, 5, 6} is unstable and φ({4, 5, 6}) = {1, 4, 5}; thus there will be a transition to {1, 4, 5}. Because {1, 4, 5} is more competent than {4, 5, 6}, this is an example where the long-run equilibrium government is worse under l = 1 than under l = 2. Note that if l = 3, {4, 5, 6} would be stable again.

When either k = 1 or k = 2, the structure of stable governments is relatively straightforward. (Note that in this proposition, and in the examples that follow, a, b, or c denote the indices of individuals, with our ranking in which lower-ranked individuals have higher ability; thus γa > γb whenever a < b.)

PROPOSITION 3. Suppose that Assumptions 1, 2′, 3, and 4 hold.
1. Suppose that k = 1. If l = 0, then φ(G) = G1 = {1} for any G ∈ G. If l = k = 1, then φ(G) = G for any G ∈ G.
2. Suppose that k = 2. If l = 0, then φ(G) = G1 = {1, 2} for any G ∈ G.
If l = 1, then if G = {a, b} with a < b, we have φ(G) = {a − 1, a} when a is even and φ(G) = {a, a + 1} when a is odd; in particular, φ(G) = G if and only if a is odd and b = a + 1. If l = 2, then φ(G) = G for any G ∈ G.
Proof. See Online Appendix B.
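The k = 2, l = 1 rule in part 2 has a one-line closed form (hypothetical helper name; a and b are ability ranks with a < b):

```python
def phi_k2_l1(a, b):
    # Proposition 3, part 2: under k = 2, l = 1 the outcome depends
    # only on the abler member a of G = {a, b}; b is always replaced
    # unless G is already a fixed point.
    return (a - 1, a) if a % 2 == 0 else (a, a + 1)

print(phi_k2_l1(4, 5))  # -> (3, 4)
```

Governments {a, a + 1} with a odd are exactly the fixed points, so for example {3, 4} is stable while {4, 5} transits to {3, 4}.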


Proposition 3, though simple, provides an important insight into the structure of stable governments that will be further exploited in the next section. When k = 2 and l = 1, the competence of the stable government is determined by the more able of the two members of the initial government. This means that, with rare exceptions, the quality of the initial government will improve to some degree; that is, typically Γφ(G) > ΓG. However, this increase is generally limited; when G = {a, b} with a < b, φ(G) = {a − 1, a} or φ(G) = {a, a + 1}, so that at best the next-higher-ability individual is added to the initial government in place of the lower-ability member. Therefore, summarizing these three cases, we can say that with a perfect democracy, the best government will arise; with an extreme dictatorship, there will be no improvement in the initial government; and in between, there will be some limited improvement in the quality of the government. When k ≥ 3, the structure of stable governments is more complex, though we can still develop a number of results about and insights into the structure of such governments. Naturally, the extremes with l = 0 and l = 3 are again straightforward. If l = 1 and the initial government is G = {a, b, c}, where a < b < c, then we can show that members ranked above a − 2 will never become members of the stable government φ(G), and the most competent member of G, a, is always a member of the stable government φ(G).16 Therefore, again with l = 1, only incremental improvements in the quality of the initial government are possible. This ceases to be the case when l = 2. In this case, it can be shown that whenever G = {a, b, c}, where a + b < c, φ(G) ≠ G; instead φ(G) = {a, b, d}, where d < c and, in fact, d ≪ a is possible. This implies a potentially very large improvement in the quality of the government (contrasting with the incremental improvements in the case where l = 1).
Loosely speaking, the presence of two veto players when l = 2 allows the initial government to import very-high-ability individuals without compromising stability. The next example illustrates this feature, which is at the root of the result highlighted in Example 4, whereby lesser incumbency veto power can lead to worse stable governments.

16. More specifically, government G = {a, b, c}, where a < b < c, is stable if and only if a = b − 1 = c − 2 and c is a multiple of 3. Moreover, for any government G = {a, b, c} with a < b < c, φ(G) = {a − 2, a − 1, a} if a is a multiple of 3, φ(G) = {a − 1, a, a + 1} if a + 1 is a multiple of 3, and φ(G) = {a, a + 1, a + 2} if a + 2 is a multiple of 3.
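Footnote 16's characterization for k = 3 and l = 1 also reduces to a short closed form (hypothetical helper name; a < b < c are ability ranks): the equilibrium government is the consecutive triple ending at whichever of a, a + 1, a + 2 is divisible by 3.

```python
def phi_k3_l1(a, b, c):
    # Footnote 16 (k = 3, l = 1): only the abler member a matters
    # (b and c do not affect the outcome); the equilibrium government
    # is the consecutive triple ending at the multiple of 3 among
    # a, a + 1, a + 2.
    top = a + (-a) % 3
    return (top - 2, top - 1, top)

print(phi_k3_l1(100, 101, 220))  # -> (100, 101, 102)
```

A government is stable exactly when it is already such a triple, e.g., {4, 5, 6}, consistent with Example 3; and φ({100, 101, 220}) = {100, 101, 102}, as in the example below.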


EXAMPLE 5. Suppose k = 3, and first take the case where l = 1. Suppose G = {100, 101, 220}, meaning that the initial government consists of individuals ranked 100, 101, and 220 in terms of ability. Then φl=1 (G) = {100, 101, 102} so that the third member of the government is replaced, but the highest and the second-highest-ability members are not. More generally, recall that only very limited improvements in the quality of the highest-ability member are possible in this case. Suppose instead that l = 2. Then it can be shown that φl=2 (G) = {1, 100, 101}, so that now the stable government includes the most able individual in the society. Naturally if the gaps in ability at the top of the distribution are larger, implying that highest-ability individuals have a disproportionate effect on government competence, this feature becomes particularly valuable. The following example extends the logic of Example 5 to any distribution and shows how expected competence may be higher under l = 2 than l = 1, and in fact, this result may hold under any distribution over initial (feasible) governments. EXAMPLE 6. Suppose k = 3, and fix a (any) probability distribution over initial governments with full support (i.e., with a positive probability of picking any initial feasible government). Assume that of players i1 , . . . , in, the first q (where q is a multiple of 3 and 3 ≤ q < n − 3) are “smart,” whereas the rest are “incompetent,” so that governments that include at least one of players i1 , . . . , iq will have very high competence relative to governments that do not. Moreover, differences in competence among governments that include at least one of the players i1 , . . . , iq and also among those that do not are small relative to the gap between the two groups of governments. 
Then it can be shown that the expected competence of the stable government φl=2 (G) (under l = 2) is greater than that of φl=1 (G) (under l = 1)—both expectations are evaluated according to the probability distribution fixed at the outset. This is intuitive in view of the structure of stable governments under the two political regimes. In particular, if G includes at least one of i1 , . . . , iq , so do φl=1 (G) and φl=2 (G). But if G does not, then φl=1 (G) will not include them either, whereas φl=2 (G) will include one with positive probability, because the presence of two veto players will allow the incorporation of one of the “smart” players without destabilizing the government.


Conversely, suppose that ΓG is very high if all its players are from {i1, . . . , iq}, and very low otherwise. In that case, the expected competence of φ(G) will be higher under l = 1 than under l = 2. Indeed, if l = 1, the society will end up with a competent government if at least one of the players is from {i1, . . . , iq}, whereas if l = 2, because there are now two veto players, there need to be at least two "smart" players for a competent government to form (though, when l = 2, this is not sufficient to guarantee the emergence of a competent government either). Examples 5 and 6 illustrate a number of important ideas. With greater incumbency veto power, in these examples with l = 2, a greater number of governments near the initial government are stable, and thus there is a higher probability of improvement in the competence of some of the members of the initial government. In contrast, with less incumbency veto power, in these examples with l = 1, fewer governments near the initial one are stable; thus incremental improvements are more likely. Consequently, when including a few high-ability individuals in the government is very important, regimes with greater incumbency veto power perform better; otherwise, regimes with less incumbency veto power perform better.17 Another important implication of these examples is that the situations in which regimes with greater incumbency veto power may perform better are not confined to some isolated instances. This feature applies to a broad class of configurations and to expected competences evaluated by taking uniform or nonuniform distributions over initial feasible governments. Nevertheless, we will see that in stochastic environments, there will be a distinct advantage to political regimes with less incumbency veto power or "greater degrees of democracy." The intuition for this phenomenon will also be illustrated using Examples 5 and 6.

IV.B. Royalty-Type Regimes

We have so far focused on political institutions that are "junta-like" in the sense that no specific member is indispensable. In such an environment, the incumbency veto power takes the form of the requirement that some members of the current government must

17. In the former case, this effect is nonmonotonic, because the perfect democracy, l = 0, always performs best.


consent to change. The alternative is a “royalty-like” environment where one or several members of the government are irreplaceable (i.e., correspond to “individual veto players” in terms of Tsebelis’s [2002] terminology). This can be conjectured to be a negative force, because it would mean that a potentially low-ability person must always be part of the government. However, because such an irreplaceable member (the member of “royalty”) is also unafraid of changes, better governments may be more likely to arise under certain circumstances, whereas, as we have seen, junta members would resist certain changes because of the further transitions that these would unleash. Let us change Assumption 2 and the structure of the set of winning coalitions WG to accommodate royalty-like regimes. We assume that there are l royalty whose votes are always necessary for a transition to be implemented (regardless of whether they are current government members). We denote the set of these individuals by Y . So the new set of winning coalitions becomes WG = {X ∈ C : |X| ≥ m and Y ⊂ X}. We also assume that all members of royalty are in the initial government; that is, Y ⊂ G0 . Note that the interpretation of the parameter l is now different from what it was for junta-like regimes. In particular, in junta-like regimes, l measured the incumbency veto power and could be considered an inverse measure of (one dimension of) the extent of democracy. In contrast, in the case of royalty, l = 1 corresponds to a one-person dictatorship, whereas l > 1 could be thought of as a “more participatory” regime. The next proposition compares royalty-like and junta-like institutions in terms of the expected competence of the equilibrium government, where, as in Example 6, the expectation is taken with respect to any full support probability distribution over the composition of the initial government. PROPOSITION 4. 
Consider a royalty-like regime with 1 ≤ l < k such that l royals are never removed from the government. Suppose that the competence of governments is given by (10). Let γi be the ability of the ith most able person in society, so γi > γj for i < j. If {γ1, . . . , γn} is sufficiently "convex," meaning that (γ1 − γ2)/(γ2 − γn) is sufficiently large, then the expected competence of the government under the royalty system is greater than under the original junta-like system (with the same l). The opposite conclusion holds if (γ1 − γn−1)/(γn−1 − γn) is sufficiently low.

1540

QUARTERLY JOURNAL OF ECONOMICS

Proof. See Online Appendix B.
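To make the "convexity" condition in Proposition 4 concrete, the two ratios can be evaluated for sample ability vectors. The sketch below is purely illustrative: the vectors and the helper function are our own, and the proposition's actual thresholds are derived in Online Appendix B.

```python
def convexity_ratios(gamma):
    """Return the two ratios appearing in Proposition 4 for a strictly
    descending ability vector gamma = (gamma_1, ..., gamma_n):
    (gamma_1 - gamma_2) / (gamma_2 - gamma_n): large favors royalty;
    (gamma_1 - gamma_{n-1}) / (gamma_{n-1} - gamma_n): small favors junta."""
    top = (gamma[0] - gamma[1]) / (gamma[1] - gamma[-1])
    bottom = (gamma[0] - gamma[-2]) / (gamma[-2] - gamma[-1])
    return top, bottom

star = [100.0, 2.0, 1.5, 1.0]  # one standout individual: highly "convex"
flat = [4.0, 3.0, 2.0, 1.0]    # evenly spaced abilities: not "convex"
```

For `star`, the first ratio is 98, so excluding the standout individual is very costly and Proposition 4 favors the royalty system; for `flat`, the ratios are 0.5 and 2, and the junta-like system is favored.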

Proposition 4 shows that royalty-like regimes achieve better expected performance than junta-like regimes provided that {γ1, . . . , γn} is highly "convex" (such convexity implies that the benefit to society from having the highest-ability individual in government is relatively high). As discussed above, juntas are unlikely to lead to such high-quality governments because of the fear that a change will lead to a further round of changes, excluding all initial members of the junta. Royalty-like regimes avoid this fear. Nevertheless, royalty-like regimes have a disadvantage in that the ability of royals may be very low (or, in a stochastic environment, may become very low at some point), and the royals will always be part of the government. In this sense, royalty-like regimes create a clear disadvantage. However, this result shows that when {γ1, . . . , γn} is sufficiently convex (to outweigh the loss of expected competence because of the presence of a potentially low-ability royal), expected competence is nonetheless higher under the royalty-like system. This result is interesting because it suggests that different types of dictatorships may have distinct implications for long-run quality of government and performance, and regimes that provide security to certain members of the incumbent government may be better at dealing with changes and at ensuring relatively high-quality governments in the long run. This proposition also highlights that, in contrast to existing results on veto players, a regime with individual veto players (members of royalty) can be less stable and more open to change. In particular, a junta-like regime with l > 0 has no individual veto players in the sense of Tsebelis (2002), whereas a royalty-like regime with the same l has such veto players, and Proposition 4 shows that the latter can lead to greater changes in the composition of the government.18

18. If, instead, we assume that β is close to zero or that the players are myopic (as discussed in Example 12), then individual veto players always increase stability and reduce change in the composition of the government, as in the existing literature. This shows the importance of dynamic considerations in the analysis of changes in the structure of government and the impact of veto players.

V. EQUILIBRIA IN STOCHASTIC ENVIRONMENTS

In this section, we introduce stochastic shocks to the competence of different coalitions (or different individuals) in order to study the flexibility of different political institutions in their

POLITICAL SELECTION AND BAD GOVERNMENTS

1541

ability to adapt the nature and the composition of the government to changes in the underlying environment. Changes in the nature and structure of "high-competence" governments may result from changes in the economic, political, or social environment, which may in turn require different types of government to deal with newly emerging problems. Our main results in this section establish the relationship between the extent of incumbency veto power (one aspect of the degree of democracy) and the flexibility to adapt to changing environments (measured by the probability that the most competent government will come to power). Changes in the environment are modeled succinctly by allowing changes in the function Γt : G → R, which determines the competence associated with each feasible government. Formally, we assume that at each t, with probability 1 − δ, there is no change in Γt from Γt−1, and with probability δ, there is a shock and Γt may change. In particular, following such a shock, we assume that there exists a set of distribution functions F(Γt | Γt−1) that gives the conditional distribution of Γt at time t as a function of Γt−1. The characterization of political equilibria in this stochastic environment is a challenging task in general. However, when δ is sufficiently small, so that the environment is stochastic but subject to infrequent changes, the structure of equilibria is similar to that in Theorem 1. We will exploit this characterization to illustrate the main implications of stochastic shocks for the selection of governments. In the rest of this section, we first generalize our definition of (Markov) political equilibrium to this stochastic environment and generalize Theorems 1 and 2 (for small δ). We then provide a systematic characterization of political transitions in this stochastic environment and illustrate the links between incumbency veto power and institutional flexibility.

V.A. Stochastic Political Equilibria

The structure of stochastic political equilibria is complicated in general because individuals need to consider the implications of current transitions for future transitions under a variety of scenarios. Nevertheless, when the likelihood of stochastic shocks, δ, is sufficiently small, as we have assumed here, political equilibria must follow a logic similar to that in Definition 1 in Section III. Motivated by this reasoning, we introduce a similar definition of stochastic political equilibria (with infrequent shocks). Online Appendix B establishes that when the discount


factor is high and stochastic shocks are sufficiently infrequent, the MPE of the explicit-form game outlined there and our notions of (stochastic) political equilibrium are indeed equivalent. To introduce the notion of (stochastic Markov) political equilibrium, let us first consider a set of mappings φ{ΓG} : G → G defined as in (6), but now separately for each configuration {ΓG}G∈G. These mappings are indexed by {ΓG} to emphasize this dependence. Essentially, if the configuration of competences of different governments given by {ΓG}G∈G applied forever, we would be in a nonstochastic environment, and φ{ΓG} would be the equilibrium transition rule, or simply the political equilibrium, as shown by Theorems 1 and 2. The idea underlying our definition for this stochastic environment with infrequent changes is that while the current configuration is {ΓG}G∈G, φ{ΓG} will still determine equilibrium behavior, because the probability of a change in competences is sufficiently small (see Online Appendix B). When the current configuration is {ΓG}G∈G, φ{ΓG} will determine political transitions, and if φ{ΓG}(G) = G, then G will remain in power as a stable government. However, when a stochastic shock hits and {ΓG}G∈G changes to {Γ′G}G∈G, political transitions will be determined by the transition rule φ{Γ′G}, and unless φ{Γ′G}(G) = G, following this shock there will be a transition to a new government, G′ = φ{Γ′G}(G).

DEFINITION 2. Let the set of mappings φ{ΓG} : G → G (a separate mapping for each configuration {ΓG}G∈G) be defined by the following two conditions. When the configuration of competences is given by {ΓG}G∈G, we have that for any G ∈ G:

i. the set of players who prefer φ{ΓG}(G) to G (in terms of discounted utility) forms a winning coalition; that is, S = {i ∈ I : Vi(φ{ΓG}(G) | φ{ΓG}) > Vi(G | φ{ΓG})} ∈ WG;

ii. there is no alternative government H ∈ G that is preferred both to a transition to φ{ΓG}(G) and to staying in G permanently; that is, there is no H ∈ G such that

SH = {i ∈ I : Vi(H | φ{ΓG}) > Vi(φ{ΓG}(G) | φ{ΓG})} ∈ WG

and

S̃H = {i ∈ I : Vi(H | φ{ΓG}) > wi(G)/(1 − β)} ∈ WG

(alternatively, |SH| < mG, or |SH ∩ G| < lG, or |S̃H| < mG, or |S̃H ∩ G| < lG).

Then a set of mappings φ{ΓG} : G → G constitutes a (stochastic Markov) political equilibrium for an environment with sufficiently infrequent changes if there is a transition to


government Gt+1 at time t (starting with government Gt) if and only if the current configuration of competences is {ΓG}G∈G and Gt+1 = φ{ΓG}(Gt).

Therefore, a political equilibrium with sufficiently infrequent changes involves the same political transitions (or the stability of governments) as those implied by the mappings φ{ΓG} defined in (6), applied separately for each configuration {ΓG}. The next theorem provides the general characterization of stochastic political equilibria in environments with sufficiently infrequent changes.

THEOREM 3. Suppose that Assumptions 1–3 hold, and let φ{ΓG} : G → G be the mapping defined by (6), applied separately for each configuration {ΓG}. Then there exist β0 < 1 and δ0 > 0 such that for any discount factor β > β0 and any positive probability of shocks δ < δ0, φ{ΓG} is the unique acyclic political equilibrium.

Proof. See Online Appendix B.
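The equilibrium dynamics described by Theorem 3 — follow the transition rule for the current configuration of competences until a shock redraws that configuration, then follow the new rule — can be sketched as a simple simulation. Everything below (the configuration labels, the toy transition rules, and the function name) is hypothetical scaffolding for illustration, not part of the model's formal apparatus.

```python
import random

def simulate(phi_by_config, configs, G0, T, delta, seed=0):
    """Between shocks the government moves along the transition rule phi
    for the current configuration; with probability delta per period a
    shock redraws the configuration uniformly at random."""
    rng = random.Random(seed)
    config, G, path = configs[0], G0, [G0]
    for _ in range(T):
        if rng.random() < delta:      # infrequent shock (delta small)
            config = rng.choice(configs)
        G = phi_by_config[config][G]  # equilibrium transition under the current phi
        path.append(G)
    return path

# Toy example: three feasible governments; under configuration c1 the rule
# maps everything to "AB" (the stable government), under c2 to "BC".
phi = {"c1": {"AB": "AB", "AC": "AB", "BC": "AB"},
       "c2": {"AB": "BC", "AC": "BC", "BC": "BC"}}
print(simulate(phi, ["c1", "c2"], "AC", 5, delta=0.0))
# -> ['AC', 'AB', 'AB', 'AB', 'AB', 'AB']
```

With delta = 0 the path settles at the stable government of the initial configuration in one step; with small positive delta it stays there until a shock, after which the new rule takes over, mirroring the statement of Theorem 3.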

The intuition for this theorem is straightforward. When shocks are sufficiently infrequent, the same calculus that applied in the nonstochastic environment still determines preferences, because all agents put most weight on the events that will happen before such a change. Consequently, a stable government will arise and remain in place until a stochastic shock arrives and changes the configuration of competences. Following such a shock, the stable government for the new configuration of competences emerges. Theorem 3 therefore provides us with a tractable way of characterizing stochastic transitions. In the next section, we use this result to study the links between different political regimes and institutional flexibility.

V.B. The Structure of Stochastic Transitions

In the rest of this section, we compare different political regimes in terms of their flexibility (adaptability to stochastic shocks). Our main results show that, even though limited incumbency veto power guarantees neither the emergence of more competent governments in the nonstochastic environment nor greater expected competence, it does lead to greater "flexibility" and to better performance according to certain measures in the presence of shocks. In what follows, we always impose Assumptions 1, 2, 3, and 4, which ensure that when the discount factor β is sufficiently large and the frequency of stochastic shocks δ is sufficiently small, there will be a unique (and acyclic)


political equilibrium. Propositions 5, 6, and 7 describe properties of this equilibrium. We also impose some additional structure on the distribution F(Γt | Γt−1) by assuming that any shock corresponds to a rearrangement ("permutation") of the abilities of different individuals. Put differently, we assume throughout this section that there is a fixed vector of abilities, say a = {a1, . . . , an}, and that the actual distribution of abilities across individuals at time t, (γ1t, . . . , γnt), is given by some permutation ϕt of this vector a. We adopt the convention that a1 > a2 > · · · > an. Intuitively, this captures the notion that a shock changes which individual is best placed to solve certain tasks and thus most effective in government functions.

We next characterize the "flexibility" of different political regimes. Throughout the rest of this section, our measure of flexibility is the probability with which the best government will be in power (either at a given t or as t → ∞).19 More formally, let πt(l, k, n | G, {ΓG}) be the probability that in a society with n individuals under a political regime characterized by l (for given k), a configuration of competences given by {ΓG}, and current government G ∈ G, the most competent government will be in power at time t. Given n and k, we think of a regime characterized by l′ as more flexible than one characterized by l if πt(l′, k, n | G, {ΓG}) > πt(l, k, n | G, {ΓG}) for all G and {ΓG} and for all t following a stochastic shock. Similarly, we think of one regime as asymptotically more flexible than another if limt→∞ πt(l′, k, n | G, {ΓG}) > limt→∞ πt(l, k, n | G, {ΓG}) for all G and {ΓG} (provided that these limits are well defined). Clearly, "being more flexible" is a partial order.

19. This is a natural metric of flexibility in the context of our model; because we have not introduced any cardinal comparisons between the competences of governments, "expected competence" would not be a meaningful measure (see also Footnote 21). Note also that we would obtain similar results if we related flexibility to the probability that one of the best two or three governments comes to power, etc.

PROPOSITION 5. Suppose that any permutation ϕ of the vector a following a shock is equally likely.

1. If l = 0, then a shock immediately leads to the replacement of the current government by the new most competent government.

2. If l = 1, the competence of the government never decreases further following a shock; instead, it increases with probability no less than

1 − (k − 1)!(n − k)!/(n − 1)! = 1 − 1/C(n − 1, k − 1),

where C(·, ·) denotes the binomial coefficient. Starting with any G and {ΓG}, the probability that the most competent government will ultimately come to power as a result of a shock is

limt→∞ πt(l, k, n | G, {ΓG}) = π(l, k, n) ≡ 1 − C(n − k, k)/C(n, k) < 1.

For fixed k, π(l, k, n) → 0 as n → ∞.

3. If l = k ≥ 2, then a shock never leads to a change in government. The probability that the most competent government is in power in any given period (any t) after the shock is

πt(l = k, k, n | ·, ·) = 1/C(n, k).

This probability is strictly less than πt(l = 0, k, n | G, {ΓG}) and πt(l = 1, k, n | G, {ΓG}) for any G and {ΓG}.

Proof. See Online Appendix B.
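The closed-form probabilities in Proposition 5 are easy to evaluate numerically. The sketch below is our own illustration (the helper names are ours; `math.comb` computes binomial coefficients); it checks the part-2 bound and shows that, for fixed k, the long-run probability under l = 1 vanishes as n grows.

```python
from math import comb

def pi_longrun_l1(k, n):
    """Part 2 of Proposition 5: long-run probability that the most
    competent government comes to power when l = 1."""
    return 1 - comb(n - k, k) / comb(n, k)

def improvement_bound(k, n):
    """Part 2: lower bound on the probability that competence strictly
    improves after a shock: 1 - 1/C(n-1, k-1)."""
    return 1 - 1 / comb(n - 1, k - 1)

def pi_dictatorship(k, n):
    """Part 3 (l = k): the government never changes, so the best
    government rules only if it happens to be the incumbent: 1/C(n, k)."""
    return 1 / comb(n, k)

# For fixed k = 2, pi_longrun_l1 -> 0 as n grows (last claim of part 2)
for n in (6, 20, 200):
    print(n, round(pi_longrun_l1(2, n), 4))   # 0.6, 0.1947, 0.0199
```

For example, with n = 6 and k = 2, the l = 1 regime reaches the best government in the long run with probability 0.6, against 1/15 ≈ 0.067 under extreme dictatorship.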

Proposition 5 contains a number of important results. A perfect democracy (l = 0) does not create any barriers against the installation of the best government at any point in time. Hence, under a perfect democracy, every shock is “flexibly” met by a change in government according to the wishes of the population at large (which here means that the most competent government will come to power). As we know from the analysis in Section IV, this is no longer true as soon as members of the governments have some veto power. In particular, we know that without stochastic shocks, arbitrarily incompetent governments may come to power and remain in power. However, in the presence of shocks, the evolution of equilibrium governments becomes more complex. Next consider the case with l ≥ 1. Now, even though the immediate effect of a shock may be a deterioration in government competence, there are forces that increase government competence in the long run. This is most clearly illustrated in the case where l = 1. With this set of political institutions, there is zero


probability that there will be a further decrease in government competence following a shock. Moreover, there is a positive probability that competence will improve and in fact a positive probability that, following a shock, the most competent government will be instituted. In particular, a shock may make the current government unstable, and in this case, there will be a transition to a new stable government. A transition to a less competent government would never receive support from the population. The change in competences may be such that the only stable government after the shock, starting with the current government, may be the best government.20 Proposition 5 also shows that when political institutions take the form of an extreme dictatorship, there will never be any transition; thus the current government can deteriorate following shocks (in fact, it can do so significantly). Most importantly, Proposition 5, as well as Proposition 6 below, shows that regimes with intermediate levels of incumbency veto power have a higher degree of flexibility than extreme dictatorship, ensuring better long-run outcomes (and naturally, perfect democracy has the highest degree of flexibility). This unambiguous ranking in the presence of stochastic shocks (and its stronger version stated in the next proposition) contrasts with the results in Section IV, which showed that general comparisons between regimes with different degrees of incumbency veto power (beyond perfect democracy) are not possible in the nonstochastic case. An informal intuition for the greater flexibility of regimes with more limited incumbency veto power in the presence of stochastic shocks can be obtained from Examples 5 and 6 in the preceding section. Recall from these examples that an advantage of the less democratic regime, l = 2, is that the presence of two veto players makes a large number of governments near the initial one stable. 
But this implies that if the initial government is destabilized by a shock, there will only be a move to a nearby government. In contrast, the more democratic regime, l = 1, often makes highly incompetent governments stable because there are no nearby stable governments (recall, for example, part 2 of Proposition 3). But this also implies that if a shock destabilizes the current government, a significant improvement in the quality of the government becomes more likely. Thus, at a broad level, and contrasting with the presumption in the existing literature (e.g., Tsebelis [2002]), regimes with greater incumbency veto power "create more stability," which facilitates small or moderate-sized improvements in initial government quality; but they do not create a large "basin of attraction" for the most competent government. In contrast, in regimes with less incumbency veto power, low-competence governments are often made stable by the instability of nearby alternative governments; this instability can be a disadvantage in deterministic environments, as illustrated in the preceding section, but it turns into a significant flexibility advantage in the presence of stochastic shocks because it creates the possibility that, after a shock, there may be a jump to a very high-competence government (in particular, to the best government, which now has a larger "basin of attraction").

20. Nevertheless, the probability of the most competent government coming to power, though positive, may be arbitrarily low.

The next proposition strengthens the conclusions of Proposition 5. In particular, it establishes that the probability of having the most competent government in power is decreasing in l (in other words, it is increasing in this measure of the "degree of democracy").21

PROPOSITION 6. The probability of having the most competent government in power after a shock (for any t), πt(l, k, n | G, {ΓG}), is decreasing in l for any k, n, G, and {ΓG}.

Proof. See Online Appendix B.

Propositions 5 and 6 highlight a distinct flexibility advantage (in terms of the probability of the most competent government coming to power) of regimes with low incumbency veto power ("more democratic" regimes). These results can be strengthened further when shocks are "limited," in the sense that only the abilities of two (or several) individuals in the society are swapped. The next proposition contains these results.

21. This conclusion need not be true for the "expected competence" of the government, because we have not made cardinal assumptions on abilities and competences. In particular, it is possible that some player is not a member of any stable government for some l and becomes part of a stable government for some l′ < l. If this player is such that the competence of any government that includes him or her is very low (e.g., his or her ability is very low), then expected competence under l′ may be lower. In Online Appendix B, we provide an example (Example 10) illustrating this point, and we also show that the expected competence of government is decreasing in l when l is close to 0 or to k.

PROPOSITION 7. Suppose that any shock permutes the abilities of x individuals in the society.


1. If x = 2 (so that the abilities of two individuals are swapped at a time) and l ≤ k − 1, then the competence of the government in power is nondecreasing over time; that is, πt(l, k, n | G, {ΓG}) is nondecreasing in t for any l, k, n, G, and {ΓG} such that l ≤ k − 1. Moreover, if the probability of swapping the abilities of any two individuals is positive, then the most competent government will be in power as t → ∞ with probability 1; that is, limt→∞ πt(l, k, n | G, {ΓG}) = 1 (for any l, k, n, G, and {ΓG} such that l ≤ k − 1).

2. If x > 2, then the results in part 1 hold provided that l ≤ k − x/2.

Proof. See Online Appendix B.

An interesting application of Proposition 7 is that when shocks are (relatively) rare and limited in scope, government quality under relatively democratic regimes gradually improves over time, with the most competent government installed in the long run. This is not true for the most autocratic regimes, however. This proposition therefore strengthens the conclusions of Propositions 5 and 6 in highlighting the flexibility benefits of more democratic regimes.

VI. CONCLUSIONS

In this paper, we have provided a tractable model of dynamic political selection. The main barrier to the selection of good politicians and to the formation of good governments in our model is not the difficulty of identifying competent or honest politicians, but the incumbency veto power of current governments. Our framework shows how even a small degree of incumbency veto power can lead to the persistence of highly inefficient and incompetent governments. The reason is that incumbency veto power implies that one of the (potentially many) members of the government must consent to any change in its composition. However, all current members of the government may recognize that any change may unleash a further round of changes, ultimately unseating them. In this case, they will all oppose any change in government, even if such changes could improve welfare significantly for the rest of the society, and highly incompetent governments can remain in power.

Using this framework, we study the implications of different political institutions for the selection of governments in both


nonstochastic and stochastic environments. A perfect democracy corresponds to a situation in which there is no incumbency veto power; citizens can thus nominate alternative governments and vote them into power without the consent of any member of the incumbent government. In this case, we show that the most competent government always comes to power. Interestingly, however, any deviation from perfect democracy breaks this result, and the long-run equilibrium government can be arbitrarily incompetent (relative to the best possible government). In an extreme dictatorship, where any single member of the current government can veto any change, the initial government always remains in power and can be arbitrarily costly for the society. More surprisingly, the same is true for any political institution other than perfect democracy. Moreover, there is no obvious ranking between different sets of political institutions (other than perfect democracy and extreme dictatorship) in terms of what they imply for the quality of long-run government. In fact, regimes with greater incumbency veto power, which may be thought of as "less democratic," can lead to higher-quality governments both in specific instances and in expectation (with a uniform or nonuniform distribution over the set of feasible initial governments). Even though no such ranking across political institutions is possible, we provide a fairly tight characterization of the structure of stable governments in our benchmark nonstochastic society. In contrast, in stochastic environments, more democratic political regimes have a distinct advantage because of their greater "flexibility." In particular, in stochastic environments, either the abilities and competences of individuals or the needs of government functions change, shuffling the ranking of different possible governments in terms of their competence and effectiveness. Less incumbency veto power then ensures greater "adaptability" or flexibility.
This result therefore suggests that a distinct advantage of “more democratic” regimes might be their greater flexibility in the face of shocks and changing environments. Finally, we also compare “junta-like” and “royalty-like” regimes. The former is our benchmark society, where change in government requires the consent or support of one or multiple members of the current government. The latter corresponds to situations in which one or multiple individuals are special and must always be part of the government (hence the title “royalty”). If royal individuals have low ability, royalty-like regimes can lead to the persistence of highly incompetent governments. However,


we also show that in stochastic environments, royalty-like regimes may lead to the emergence of higher-quality governments in the long run than junta-like regimes. This is because royal individuals, whose tenure is absolute, are not afraid of changes in government. In contrast, members of a junta may resist changes that would increase government competence or quality, because such changes may lead to another round of changes, ultimately excluding all members of the initial government. An important contribution of our paper is to provide a tractable framework for the dynamic analysis of the selection of politicians. This tractability makes it possible to extend the analysis in various fruitful directions. For example, it is possible to introduce conflicts of interest among voters by having each government be represented by two characteristics: competence and ideological leaning. In this case, not all voters will simply prefer the most competent government. The general approach developed here remains applicable in this case. Another generalization would allow the strength of voters' preferences for a particular government to influence the number of veto players, so that a transition away from a semi-incompetent government can be blocked by a few insiders, whereas more unified opposition from government members would be necessary to block a transition away from a highly incompetent government. The current framework also abstracts from "self-selection" issues, whereby some citizens may not wish to be part of some, or any, governments (e.g., as in Caselli and Morelli [2004] or Mattozzi and Merlo [2008]). Such considerations can also be incorporated by adding a second dimension of heterogeneity in the outside opportunities of citizens, and by restricting feasible transitions to coalitions that include only members who would prefer, or can be incentivized, to be part of the government.
An open question, which may be studied by placing more structure on preferences and institutions, is the characterization of equilibria in the stochastic environment when shocks occur frequently. The most important direction for future research, which is again feasible using the general approach here, is an extension of this framework to incorporate the asymmetric information issues emphasized in the previous literature. For example, we can generalize the environment in this paper so that the ability of an individual is not observed until he or she becomes part of the government. In this case, to install high-quality governments, it is necessary to first “experiment” with different types of governments.


The dynamic interactions highlighted by our analysis will then become a barrier to such experimentation. In this case, the set of political institutions that will ensure high-quality governments must exhibit a different type of flexibility, whereby some degree of "churning" of governments can be guaranteed even without shocks. Another interesting direction is to introduce additional instruments, so that some political regimes can provide incentives for politicians to take actions in line with the interests of the society at large. In that case, successful political institutions must ensure both the selection of high-ability individuals and the provision of incentives to these individuals once they are in government. Finally, as hinted by the discussion in this paragraph, an interesting and challenging extension is to develop a more general "mechanism design" approach in this context, whereby certain aspects of political institutions are designed to facilitate the appropriate future changes in government. We view these directions as interesting and important areas for future research.

APPENDIX

Let us introduce the following binary relation on the set of feasible governments G. For any G, H ∈ G, we write

(14) H ≻ G if and only if {i ∈ I : wi(H) > wi(G)} ∈ WG.

In other words, H ≻ G if and only if there exists a winning coalition X ∈ WG such that all members of X have a higher stage payoff in H than in G. Let us also define the set D = {G ∈ G : φ(G) = G}.

A. Proof of Theorem 1

We start with two lemmas that establish useful properties of the payoff functions and of the mapping φ, and then present the proof of Theorem 1.

LEMMA 1. Suppose that G, H ∈ G and ΓG > ΓH. Then

1. For any i ∈ I, wi(G) < wi(H) if and only if i ∈ H \ G.
2. H ⊁ G.
3. |{i ∈ I : wi(G) > wi(H)}| ≥ ⌈n/2⌉ ≥ k̄.

Proof of Lemma 1. Part 1. If ΓG > ΓH then, by Assumption 1, wi(G) > wi(H) whenever i ∈ G or i ∉ H. Hence, wi(G) < wi(H) is


possible only if i ∈ H \ G (note that wi(G) = wi(H) is ruled out by Assumption 3). At the same time, i ∈ H \ G implies wi(G) < wi(H) by Assumption 1; hence the equivalence.

Part 2. We have |H \ G| ≤ |H| ≤ k̄ ≤ mG (because k̄ ≤ mG by Assumption 2), so that H \ G ∉ WG, and hence H ⊁ G by definition (14).

Part 3. We have {i ∈ I : wi(G) > wi(H)} = I \ {i ∈ I : wi(G) < wi(H)} ⊃ I \ (H \ G); hence |{i ∈ I : wi(G) > wi(H)}| ≥ n − k̄ ≥ n − ⌊n/2⌋ = ⌈n/2⌉ ≥ k̄.

LEMMA 2. Consider the mapping φ defined in (6) and let G, H ∈ G. Then

1. Either φ(G) = G (and then G ∈ D) or φ(G) ≻ G.
2. Γφ(G) ≥ ΓG.
3. If φ(G) ≻ G and H ≻ G, then Γφ(G) ≥ ΓH.
4. φ(φ(G)) = φ(G).

Proof of Lemma 2. The proof of this lemma is straightforward and is omitted.

Proof of Theorem 1. As in the text, let us enumerate the elements of G as {G1, G2, . . . , G|G|} such that ΓGx > ΓGy whenever x < y. First, we prove that the function φ defined in (6) constitutes a (Markov) political equilibrium. Take any Gq, 1 ≤ q ≤ |G|. By (6), either φ(Gq) = Gq or φ(Gq) = Gmin{j∈Mq}. In the latter case, the set of players who obtain a higher stage payoff under φ(Gq) than under Gq (i.e., those with wi(φ(Gq)) > wi(Gq)) forms a winning coalition in Gq by (5). Because by definition φ(Gq) is φ-stable, that is, φ(Gq) = φ(φ(Gq)), we have Vi(φ(Gq)) > Vi(Gq) for a winning coalition of players. Hence, in either case condition (i) of Definition 1 is satisfied.

Now suppose, to obtain a contradiction, that condition (ii) of Definition 1 is violated, so that X, Y ∈ WGq are winning coalitions such that Vi(H) > wi(Gq)/(1 − β) for all i ∈ X and Vi(H) > Vi(φ(Gq)) for all i ∈ Y, for some alternative H ∈ G. Consider first the case Γφ(H) ≠ Γφ(Gq); then Vi(H) > Vi(φ(Gq)) would imply wi(φ(H)) > wi(φ(Gq)) as β is close to 1, and hence the set of players with wi(φ(H)) > wi(φ(Gq)) would include Y, and thus would be a winning coalition in Gq. This is impossible if Γφ(H) < Γφ(Gq) (only players in φ(H) could possibly prefer φ(H), and they are fewer than mGq). If Γφ(H) > Γφ(Gq), however, we would get a government φ(H) that is φ-stable by construction of φ and that is preferred to Gq by at least mGq players (all except perhaps members of Gq) and by at least lGq members of Gq, as φ(H) is stable and


φ(Gq ) ≥ Gq (indeed, at least lG members of Gq —those in coalition X—had Vi (φ(H)) > wi (Gq )/(1 − β), which thus means they belong to φ(H), and hence must have wi (φ(H)) > wi (Gq )). This would imply that φ(H) ∈ Mq by (5), but in that case φ(H) > φ(Gq ) would contradict (6). Finally, consider the case φ(H) = φ(Gq ) , which by Assumption 3 implies that φ(H) = φ(Gq ). Now Vi (H) > Vi (φ(Gq )) implies that wi (H) > wi (φ(Gq )) for all i ∈ Y , as the instantaneous utilities are the same except for the current period. Because this includes at least mGq players, we must have that H > φ(Gq ) . But φ(H) ≥ H by (6), so φ(H) ≥ H > φ(Gq ) , which contradicts φ(H) = φ(Gq ) . This contradiction proves that mapping φ satisfies both conditions of Definition 1, and thus forms a political equilibrium. To prove uniqueness, suppose that there is another acyclic political equilibrium ψ. For each G ∈ G, define χ (G) = ψ |G| (G); due to acyclicity, χ (G) is ψ-stable for all G. We prove the following sequence of claims. First, we must have that χ(G) ≥ G ; indeed, otherwise condition (i) of Definition 1 would not be satisfied for large β. Second, we prove that all transitions must take place in one step; that is, χ (G) = ψ(G) for all G. If this were not the case, then, due to the finiteness of any chain of transitions, there would exist G ∈ G such that ψ(G) = χ (G), but ψ 2 (G) = χ (G). Take H = χ (G). Then H > ψ(G) , H > G , and ψ(H) = H. For β sufficiently close to 1, the condition Vi (H) > wi (G)/(1 − β) is automatically satisfied for the winning coalition of players, who had Vi (ψ(G)) > wi (G)/(1 − β). We next prove that Vi (H) > Vi (ψ(G)) for a winning coalition of players in G. Note that this condition is equivalent to wi (H) > wi (ψ(G)). The fact that at least mG players prefer H to ψ(G) follows from H > ψ(G) . Moreover, because χ (G) = H, at least lG members of G must also be members of H; naturally, they prefer H to ψ(G). Consequently, condition (ii) of Definition 1 is violated. 
This contradiction proves that χ(G) = ψ(G) for all G. Finally, we prove that ψ(G) coincides with φ(G), defined in (6). Suppose not; that is, suppose that φ(G) ≠ ψ(G) for some G. Without loss of generality, we may assume that G is the most competent government with φ(G) ≠ ψ(G); that is, φ(H) = ψ(H) whenever Γ_H > Γ_G. By Assumption 3, we have Γ_{φ(G)} ≠ Γ_{ψ(G)}. Suppose that Γ_{φ(G)} > Γ_{ψ(G)} (the case Γ_{φ(G)} < Γ_{ψ(G)} is treated similarly). As ψ forms a political equilibrium, it must satisfy condition (ii) of Definition 1, for H = φ(G) in particular. Because φ is a political equilibrium, it must be the case that wi(φ(G)) > wi(G), and

1554

QUARTERLY JOURNAL OF ECONOMICS

thus Vi(H | φ) > wi(G)/(1 − β), for a winning coalition of players. Now, we see that the following two facts must hold. First, Vi(H | ψ) > wi(G)/(1 − β) for a winning coalition of players; this follows from the fact that H is φ-stable and thus ψ-stable, as Γ_H > Γ_G and by the choice of G. Second, Vi(H | ψ) > Vi(ψ(G) | ψ) for a winning coalition of players; indeed, H and ψ(G) are ψ-stable, and the former is preferred to the latter (in terms of stage payoffs) by at least mG players and at least lG government members, as Γ_{φ(G)} > Γ_{ψ(G)} and the intersection of H and G contains at least lG members (because H = φ(G)). The existence of such an H leads to a contradiction, which completes the proof.

B. Dynamic Game

We now present a dynamic game that captures certain salient features of the process of government change. This game involves different individuals proposing alternative governments and then all individuals (including current members of the government) voting over these proposals. We then define the MPEs of this game, establish their existence and their properties, and show the equivalence between the notion of MPE and that of the (stochastic) Markov political equilibrium defined in the text. We also provide a number of examples referred to in the text (e.g., on the possibility of a cyclic MPE or political equilibrium, and on changes in expected competence).

Let us first introduce an additional state variable, denoted by vt, which determines whether the current government can be changed. In particular, vt takes two values: vt = s corresponds to a "sheltered" political situation (or "stable" political situation, although we reserve the term "stable" for governments that persist over time), and vt = u designates an unstable situation. The government can be changed only during unstable times. A sheltered political situation destabilizes (becomes unstable) with probability r in each period; that is, Pr(vt = u | vt−1 = s) = r.
These events are independent across periods, and we also assume that v0 = u. An unstable situation becomes sheltered when an incumbent government survives a challenge or is not challenged at all (as explained below). We next describe the procedure for challenging an incumbent government. We start with some government Gt at time t. If at time t the situation is unstable, then all individuals i ∈ I are ordered according to some sequence ηGt. Individuals, in turn, nominate subsets of alternative governments Ait ⊂ G \ {Gt}, which

POLITICAL SELECTION AND BAD GOVERNMENTS

1555

will be part of the primaries. An individual may choose not to nominate any alternative government, in which case Ait = ∅. All nominated governments make up the set At, so

(15)    At = {G ∈ G \ {Gt} : G ∈ Ait for some i ∈ I}.

If At ≠ ∅, then all alternatives in At take part in the primaries at time t. The primaries work as follows. All of the alternatives in At are ordered ξ^{At}_{Gt}(1), ξ^{At}_{Gt}(2), . . . , ξ^{At}_{Gt}(|At|) according to some prespecified order (depending on At and the current government Gt). We refer to this order as the protocol, ξ^{At}_{Gt}. The primaries are then used to determine the challenging government G′ ∈ At. In particular, we start with G1, given by the first element of the protocol, ξ^{At}_{Gt}(1). In the second step, G1 is voted against the second element, ξ^{At}_{Gt}(2). We assume that all votes are sequential (and show below that the sequence in which votes take place does not have any effect on the outcome). If more than n/2 individuals support the latter, then G2 = ξ^{At}_{Gt}(2); otherwise G2 = G1. Proceeding in order, G3, G4, . . . , G|At| are determined, and G′ is equal to the last element of the sequence, G|At|. This ends the primary. After the primary, the challenger G′ is voted against the incumbent government Gt. G′ wins if and only if a winning coalition of individuals (i.e., a coalition that belongs to W^t_{Gt}) supports G′. Otherwise, we say that the incumbent government Gt wins. If At = ∅ to start with, then there is no challenger and the incumbent government is again the winner. If the incumbent government wins, it stays in power, and moreover the political situation becomes sheltered; that is, Gt+1 = Gt and vt+1 = s. Otherwise, the challenger becomes the new government, but the situation remains unstable; that is, Gt+1 = G′ and vt+1 = vt = u. All individuals receive stage payoff wi(Gt) (we assume that the new government starts acting from the next period). More formally, the exact procedure is as follows.

• Period t = 0, 1, 2, . . . begins with government Gt in power.

If the political situation is sheltered, vt = s, then each individual i ∈ I receives stage payoff uit(Gt); in the next period, Gt+1 = Gt, with vt+1 = vt = s with probability 1 − r and vt+1 = u with probability r.
• If the political situation is unstable, vt = u, then the following events take place:


1. Individuals are ordered according to ηGt, and in this sequence, each individual i nominates a subset of feasible governments Ait ⊂ G \ {Gt} for the primaries. These determine the set of alternatives At as in (15).
2. If At = ∅, then we say that the incumbent government wins, Gt+1 = Gt, vt+1 = s, and each individual receives stage payoff uit(Gt). If At ≠ ∅, then the alternatives in At are ordered according to protocol ξ^{At}_{Gt}.
3. If At ≠ ∅, then the alternatives in At are voted against each other. In particular, at the first step, G1 = ξ^{At}_{Gt}(1). If |At| > 1, then for 2 ≤ j ≤ |At|, at step j, alternative Gj−1 is voted against ξ^{At}_{Gt}(j). Voting in the primary takes place as follows: all individuals vote yes or no sequentially according to some prespecified order, and Gj = ξ^{At}_{Gt}(j) if and only if the set of individuals who voted yes, Y^t_j, is a simple majority (i.e., if |Y^t_j| > n/2); otherwise, Gj = Gj−1. The challenger is determined as G′ = G|At|.
4. Government G′ challenges the incumbent government Gt, and voting in the election takes place. In particular, all individuals vote yes or no sequentially according to some prespecified order, and G′ wins if and only if the set of individuals who voted yes, Y^t, is a winning coalition in Gt (i.e., if Y^t ∈ W^t_{Gt}); otherwise, Gt wins.
5. If Gt wins, then Gt+1 = Gt and vt+1 = s; if G′ wins, then Gt+1 = G′ and vt+1 = u. In either case, each individual obtains stage payoff uit(Gt) = wi(Gt).

Several important features of this dynamic game are worth emphasizing. First, the set of winning coalitions, W^t_{Gt} when the government is Gt, determines which proposals for change in the government are accepted. Second, to specify a well-defined game, we had to introduce the prespecified order ηG in which individuals nominate alternatives for the primaries, the protocol ξ^A_G for the order in which alternatives are considered, and also the order in which votes are cast.
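The nomination-primary-challenge sequence in steps 1–5 can be sketched as a short simulation. This is a simplified illustration with names of our own choosing: voters here compare stage payoffs wi directly, whereas in the game itself votes depend on equilibrium continuation values, and the winning-coalition test is passed in as a generic predicate.

```python
# Illustrative sketch (our own names) of one unstable period: nominations,
# the sequential pairwise primary, and the challenge vote against the
# incumbent. Voters are myopic here for simplicity.

def run_period(incumbent, nominations, protocol, w, is_winning_coalition):
    """Simulate one unstable period; w[i] maps a government to i's payoff."""
    n = len(w)  # number of individuals
    # Steps 1-2: pool the nominations into the set of alternatives A^t,
    # ordered by the protocol.
    nominated = set().union(*nominations)
    alternatives = [G for G in protocol if G in nominated]
    if not alternatives:
        return incumbent, "s"  # no challenger: incumbent wins, sheltered
    # Step 3: sequential pairwise primary under simple majority rule.
    champion = alternatives[0]
    for challenger in alternatives[1:]:
        yes = [i for i in range(n) if w[i][challenger] > w[i][champion]]
        if len(yes) > n / 2:
            champion = challenger
    # Steps 4-5: the primary winner challenges the incumbent; it takes
    # power iff its supporters form a winning coalition.
    yes = [i for i in range(n) if w[i][champion] > w[i][incumbent]]
    if is_winning_coalition(yes, incumbent):
        return champion, "u"  # transition; situation remains unstable
    return incumbent, "s"  # incumbent survives; situation becomes sheltered
```

For instance, with three voters, incumbent "A", nominations {"B"} and {"C"}, and a two-voter winning-coalition rule, the primary selects the majority-preferred challenger and the challenge vote then determines whether a transition occurs.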
Ideally we would like these orders not to have a major influence on the structure of equilibria, because they are not an essential part of the economic environment and we do not have a good way of mapping the specific orders to reality. We will see that this is indeed the case in the equilibria of interest. Finally, the rate at which political situations become unstable, r,


has an important influence on payoffs by determining the rate at which opportunities to change the government arise. In what follows, we assume that r is relatively small, so that political situations are not unstable most of the time. Here, it is also important that political instability ceases after the incumbent government withstands a challenge (or if there is no challenge). This can be interpreted as the government having survived a "no-confidence" motion. In addition, as in the text, we focus on situations in which the discount factor β is large.

C. Strategies and Definition of Equilibrium

We define strategies and equilibria in the usual fashion. In particular, let h^{t,Qt} denote the history of the game up to period t and stage Qt in period t (there are several stages in period t if vt = u). This history includes all governments, proposals, votes, and stochastic events up to this time. The set of histories is denoted by H^{t,Qt}. A history h^{t,Qt} can also be decomposed into two parts: we can write h^{t,Qt} = (h^t, Qt) and correspondingly H^{t,Qt} = H^t × Q^t, where h^t summarizes all events that have taken place up to period t − 1 and Qt is the list of events that have taken place within period t when there is an opportunity to change the government. A strategy of individual i ∈ I, denoted by σi, maps H^{t,Qt} (for all t and Qt) into a proposal when i nominates alternative governments for primaries (i.e., at the first stage of a period where vt = u) and a vote for each possible proposal at each possible decision node (recall that the ordering of alternatives is automatic and is done according to a protocol). A subgame perfect equilibrium (SPE) is a strategy profile {σi}i∈I such that the strategy of each i is a best response to the strategies of all other individuals for all histories.
Because dynamic games can have many SPEs, often supported by complex trigger strategies that are not our focus here, in this Appendix we limit our attention to the Markovian subset of SPEs. We next introduce the standard definition of MPE:

DEFINITION 3. A Markov perfect equilibrium (MPE) is a profile of strategies {σi∗}i∈I that forms an SPE and such that σi∗ for each i in each period t depends only on Gt, vt, W^t_{Gt}, and Qt (previous actions taken in period t).

MPEs are natural in such dynamic games, because they enable individuals to condition on all of the payoff-relevant information, but rule out complicated trigger-like strategies, which


are not our focus in this paper. It turns out that even MPEs potentially lead to a very rich set of behavior. For this reason, it is also useful to consider subsets of MPEs—in particular, acyclic MPEs and order-independent MPEs. As discussed in the text, an equilibrium is acyclic if cycles (changing the initial government but then reinstalling it at some future date) do not take place along the equilibrium path. Cyclical MPEs are both less realistic and more difficult to characterize, motivating our main focus on acyclic MPEs. Formally, we have

DEFINITION 4. An MPE σ∗ is cyclic if the probability that there exist t1 < t2 < t3 such that Gt3 = Gt1 ≠ Gt2 along the equilibrium path is positive. An MPE σ∗ is acyclic if it is not cyclic.

Another relevant subset of MPEs, order-independent MPEs or simply order-independent equilibria, is introduced by Moldovanu and Winter (1995). These equilibria require that strategies not depend on the order in which certain events (e.g., proposal-making) unfold. Here we generalize (and slightly modify) their definition for our present context. For this purpose, let us denote the above-described game, when the set of protocols is given by ξ = {ξ^A_G}_{G∈G, A∈P(G), G∉A}, as GAME[ξ], and denote the set of feasible protocols by X.

DEFINITION 5. Consider GAME[ξ]. σ∗ is an order-independent equilibrium for GAME[ξ] if for any ξ′ ∈ X, there exists an equilibrium σ∗′ of GAME[ξ′] such that σ∗ and σ∗′ lead to the same distributions of equilibrium governments Gτ | Gt for τ > t.

We establish the relationship between acyclic and order-independent equilibria in Theorem 5.22

D. Characterization of Markov Perfect Equilibria

Recall the mapping φ : G → G defined by (6). We use the next theorem to establish the equivalence between political equilibria and MPEs in the dynamic noncooperative game.

THEOREM 4. Consider the game described above. Suppose that Assumptions 1–3 hold and let φ : G → G be the political
22. One could also require order independence with respect to η as well as with respect to ξ. It can be easily verified that the equilibria we focus on already satisfy this property, and hence this is not added as a requirement of "order independence" in Definition 5.


equilibrium given by (6). Then there exists ε > 0 such that for any β and r satisfying β > 1 − ε and r/(1 − β) < ε and any protocol ξ ∈ X:

1. There exists an acyclic MPE in pure strategies σ∗.
2. Take an acyclic MPE in pure or mixed strategies σ∗. Then, if φ(G0) = G0, there are no transitions; otherwise, with probability 1, there exists a period t in which the government φ(G0) is proposed, wins the primaries, and wins the power struggle against Gt. After that, there are no transitions, so Gτ = φ(G0) for all τ ≥ t.

Proof of Theorem 4. Part 1. The proof of this theorem relies on Lemma 1. Let β0 be such that for any β > β0 the following inequalities are satisfied:

(16) for any G, G′, H, H′ ∈ G and i ∈ I: wi(G) < wi(H) implies
(1 − β^{|G|})wi(G′) + β^{|G|}wi(G) < (1 − β^{|G|})wi(H′) + β^{|G|}wi(H).

For each G ∈ G, define the mapping χG : G → G by χG(H) = φ(H) if H ≠ G, and χG(H) = G if H = G. Take any protocol ξ ∈ X. Now take some node of the game at the beginning of some period t when vt = u. Consider the stages of the dynamic game that take place in this period as a finite game by assigning the following payoffs to the terminal nodes:

(17) vi(G, H) = wi(H) + (β/(1 − β)) wi(φ(H)) if H ≠ G, and
vi(G, H) = ((1 + rβ)/(1 − β(1 − r))) wi(G) + (rβ²/((1 − β)(1 − β(1 − r)))) wi(φ(G)) if H = G,

where H = Gt+1 is the government that is scheduled to be in power in period t + 1, that is, the government that defeated the incumbent Gt if it was defeated, and Gt itself if it was not. For any such period t, take an SPE in pure strategies σ∗G = σ∗Gt of the truncated game, such that this SPE is the same for any two nodes with the same incumbent government; the latter requirement ensures that once we map these SPEs to a strategy profile σ∗ of the entire


game GAME[ξ], this profile will be Markovian. In what follows, we prove that for any G ∈ G, (a) if σ∗G is played, then there is no transition if φ(G) = G and there is a transition to φ(G) otherwise; and (b) actions in profile σ∗ are best responses if continuation payoffs are taken from profile σ∗ rather than assumed to be given by (17). These two results will complete the proof of part 1.

We start with part (a). Take any government G and consider the SPE of the truncated game σ∗G. First, consider the subgame where some alternative H has won the primaries and challenges the incumbent government G. Clearly, proposal H will be accepted if and only if Γ_{φ(H)} > Γ_G. This implies, in particular, from the construction of mapping φ, that if φ(G) = G, then no alternative H may be accepted. Second, consider the subgame where nominations have been made and the players are voting according to protocol ξ^A_G. We prove that if φ(G) ∈ A, then φ(G) wins the primaries regardless of ξ (and subsequently wins against G, as φ(φ(G)) = φ(G) and Γ_{φ(G)} > Γ_G). This is proved by backward induction: assuming that φ(G) has number q in the protocol, let us show that if it makes its way to the jth round, where q ≤ j ≤ |A|, then it will win in this round. The base is evident: if φ(G) wins in the last round, the players will get v(G, φ(G)) = (1/(1 − β)) w(φ(G)) (we drop the subscript for the player to refer to w and v as vectors of payoffs), whereas if it loses, they get v(G, H) for some H ≠ φ(G). Clearly, voting for φ(G) is better for a majority of the population, and thus φ(G) wins the primaries and defeats G. The inductive step is proven similarly; hence, in the subgame that starts from the qth round, φ(G) defeats the incumbent government. Because this holds irrespective of what happens in previous rounds, this concludes the second step as well. Third, consider the stage where nominations are made, and suppose, to obtain a contradiction, that φ(G) is not proposed.
Then, in equilibrium, players get a payoff vector v(G, H), where H ≠ φ(G). But then, clearly, any member of φ(G) has a profitable deviation, which is to nominate φ(G) instead of, or in addition to, what he or she nominates in profile σ∗G. Because in an SPE there should be no profitable deviations, this completes the proof of part (a).

Part (b) is straightforward. Suppose that the incumbent government is G. If some alternative H defeats government G, then from part (a), the payoffs that players get starting from the next period are given by (1/(1 − β)) wi(H) if φ(H) = H, and by wi(H) + (β/(1 − β)) wi(φ(H)) otherwise; in either case, the payoff is exactly equal to vi(G, H).
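The H = G branch of (17) can be checked numerically: it is the value of receiving wi(G) in every sheltered period, with destabilization probability r per period, followed by one final period of G and then φ(G) forever. Below is a small sketch (function names are ours) comparing the closed form against a direct solution of the corresponding recursion S = wG + β((1 − r)S + rU) with U = wG + β wφ/(1 − β).

```python
# Numerical check of the H = G branch of the continuation value in (17):
# in a sheltered period government G pays w_G and destabilizes with
# probability r; once unstable, G pays w_G one last time and phi(G)
# rules forever. Solving the recursion reproduces the closed form.

def v_closed_form(w_G, w_phi, beta, r):
    return ((1 + r * beta) / (1 - beta * (1 - r)) * w_G
            + r * beta**2 / ((1 - beta) * (1 - beta * (1 - r))) * w_phi)

def v_from_recursion(w_G, w_phi, beta, r):
    # U: value once the situation turns unstable and phi(G) takes over
    U = w_G + beta * w_phi / (1 - beta)
    # S solves S = w_G + beta * ((1 - r) * S + r * U)
    return (w_G + beta * r * U) / (1 - beta * (1 - r))

print(abs(v_closed_form(2.0, 5.0, 0.95, 0.1)
          - v_from_recursion(2.0, 5.0, 0.95, 0.1)) < 1e-9)  # True
```

Setting wG = wφ reduces both expressions to wG/(1 − β), as expected for a permanent payoff stream.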


If no alternative defeats government G, then vt+1 = s (the situation becomes sheltered), and after that, government G stays in power until the situation becomes unstable, and government φ(G) is in power in all periods thereafter; this again gives the payoff ((1 + rβ)/(1 − β(1 − r))) wi(G) + (rβ²/((1 − β)(1 − β(1 − r)))) wi(φ(G)). This implies that the continuation payoffs are indeed given by vi(G, H), which means that if profile σ∗ is played in the entire game, no player has a profitable deviation. This proves part 1.

Part 2. Suppose σ∗ is an acyclic MPE. Take any government G = Gt at some period t in some node on or off the equilibrium path. Define a binary relation → on the set G as follows: G → H if and only if either G = H and G has a positive probability of staying in power when Gt = G and vt = u, or G ≠ H and Gt+1 = H with positive probability if Gt = G and vt = u. Define another binary relation →∗ on G as follows: G →∗ H if and only if there exists a sequence (perhaps empty) of distinct governments H1, . . . , Hq such that G → H1 → H2 → · · · → Hq = H and H → H. In other words, G →∗ H if there is an on-equilibrium path that involves a sequence of transitions from G to H and stabilization of the political situation at H. Now, because σ∗ is an acyclic equilibrium, there is no sequence that contains at least two different governments H1, . . . , Hq such that H1 → H2 → · · · → Hq → H1. Suppose that for at least one G ∈ G, the set {H ∈ G : G →∗ H} contains at least two elements. From acyclicity it is easy to derive the existence of a government G with the following properties: {H ∈ G : G →∗ H} contains at least two elements, but for any element H of this set, {H′ ∈ G : H →∗ H′} is a singleton. Consider the restriction of profile σ∗ to the part of the game where government G is in power, and call it σ∗G.
The way we picked G implies that some government may defeat G with a positive probability, and for any such government H the subsequent evolution prescribed by profile σ∗ does not exhibit any uncertainty, and the political situation will stabilize at the unique government H′ ≠ G (but perhaps H′ = H) such that H →∗ H′, in no more than |G| − 2 steps. Given our assumption (16) and the assumption that r is small, this implies that no player is indifferent between two terminal nodes of this period that ultimately lead to two different governments H1 and H2, or between one where G stays and one where it is overthrown. But players act sequentially, one at a time, which means that the last player to act on the equilibrium path, when it is still possible to get different outcomes, must mix, and


therefore be indifferent. This contradiction proves that for any G, the government H such that G →∗ H is well defined. Denote this government by ψ(G). To complete the proof, we must show that ψ(G) = φ(G) for all G. Suppose not; then, because Γ_{ψ(G)} > Γ_G (otherwise G would not be defeated, as players would prefer to stay with G), we must have that Γ_{φ(G)} > Γ_{ψ(G)}. This implies that if some alternative H such that H →∗ φ(G) is nominated, it must win the primaries; this is easily shown by backward induction. If no such alternative is nominated, then, because there is a player who prefers φ(G) to ψ(G) (any member of φ(G) does), that player would be better off deviating and nominating φ(G). Such a deviation is not possible in equilibrium, so ψ(G) = φ(G) for all G. By construction of the mapping ψ, this implies that there are no transitions if G = φ(G), and one or more transitions ultimately leading to government φ(G) otherwise. This completes the proof.

The most important result of this theorem is that any acyclic MPE leads to equilibrium transitions given by the same mapping φ, defined in (6), that characterizes political equilibria as defined in Definition 1. This result thus provides further justification for the notion of political equilibrium used in the paper. The hypothesis that r is sufficiently small ensures that sheltered political situations are sufficiently "stable," so that if the government passes a "no-confidence" vote, it stays in power for a nontrivial amount of time. Such a requirement is important for the existence of an MPE in pure strategies and thus for our characterization of equilibria. It underpins the second requirement in part 2 of Definition 1. Example 7, which is presented next, illustrates the potential for nonexistence of pure-strategy MPEs without this assumption.

EXAMPLE 7. Suppose that the society consists of five individuals 1, 2, 3, 4, 5 (n = 5). Suppose each government consists of two members, so k = 2.
Suppose also that l = 1 and m = 3, and that competences are given by Γ_{{i,j}} = 30 − min{i, j} − 5 max{i, j}. Moreover, assume that all individuals care a lot about being in the government, and about competence if they are not in the government; however, if an individual compares the utility of being a member of two different governments, he or she is almost indifferent. In this environment, there are two fixed points of the mapping φ: {1, 2} and {3, 4}.
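A quick tabulation of the stated competence formula shows that all ten two-member governments have distinct competences (consistent with Assumption 3) and that {1, 2} is the most competent. This sketch is for illustration only; the fixed points of φ also depend on the payoff and voting structure, which the table does not capture.

```python
# Competence table for Example 7, using the stated formula
# Gamma_{i,j} = 30 - min{i,j} - 5*max{i,j} over all two-member
# governments drawn from the five individuals.
from itertools import combinations

gamma = {frozenset(g): 30 - min(g) - 5 * max(g)
         for g in combinations(range(1, 6), 2)}

# Rank governments from most to least competent.
ranking = sorted(gamma, key=gamma.get, reverse=True)
for g in ranking:
    print(sorted(g), gamma[g])
```

The printed ranking begins with [1, 2] (competence 19) and ends with [4, 5] (competence 1), with {3, 4} (competence 7) in the middle of the list.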


Let us show that there is no MPE in pure strategies if vt = u for all t (so that the incumbent government is contested in each period). Suppose that there is such an equilibrium for some protocol ξ. One can easily see that no alternative may win if the incumbent government is {1, 2}: indeed, if in equilibrium there is a transition to some G′ ≠ {1, 2}, then in the last vote, when {1, 2} is challenged by G′, both 1 and 2 would be better off rejecting the alternative and thereby postponing the transition to the government (or chain of governments) that they like less. It is also not hard to show that any of the governments that include 1 or 2 (i.e., {1, 3}, {1, 4}, {1, 5}, {2, 3}, {2, 4}, and {2, 5}) lose the contest for power to {1, 2} in equilibrium. Indeed, if {1, 2} is included in the primaries, it must be the winner (intuitively, this happens because {1, 2} is the Condorcet winner in simple majority voting). Given that, it must always be included in the primaries, for otherwise individual 1 would have a profitable deviation and nominate {1, 2}. We can now conclude that government {3, 4} is stable: any government that includes 1 or 2 will immediately lead to {1, 2}, which is undesirable for both 3 and 4, whereas {3, 5} and {4, 5} are worse than {3, 4} for 3 and 4 as well; therefore, if there is some transition in equilibrium, then 3 and 4 are better off staying at {3, 4} for an extra period, which is a profitable deviation. We now consider the governments {3, 5} and {4, 5}. First, we rule out the possibility that from {3, 5} the individuals move to {4, 5} and vice versa. Indeed, if this were the case, then in the last vote when the government was {3, 5} and the alternative was {4, 5}, individuals 1, 2, 3, and 5 would be better off blocking this transition (i.e., postponing it for one period). Hence, either one of the governments {3, 5} and {4, 5} is stable, or one of them leads to {3, 4} in one step or to {1, 2} in two steps.
We consider these three possibilities for the government {3, 5} and arrive at a contradiction; the case of {4, 5} may be considered similarly and also leads to a contradiction. It is trivial to see that a transition to {1, 2} (in one or two steps) cannot be an equilibrium. If this were the case, then in the last vote, individuals 3 and 5 would block this transition, because they are better off staying in {3, 5} for one more period (even if the intermediate step to {1, 2} is a government that includes either 3 or 5). This is a profitable deviation that cannot happen in an equilibrium. It is also trivial to check


that {3, 5} cannot be stable. Indeed, if this were the case, then if alternative {3, 4} won the primaries, it would be accepted, as individuals 1, 2, 3, and 4 would support it. At the same time, any alternative that would lead to {1, 2} would not be accepted, and neither would alternative {4, 5}, unless it led to {3, 4}. Because of that, alternative {3, 4} would make its way through the primaries if nominated, for it is better than {3, 5} for a simple majority of individuals. But then {3, 4} must be nominated, for, say, individual 4 is better off if it is, because he or she prefers {3, 4} to {3, 5}. Consequently, if {3, 5} were stable, we would get a contradiction, because we proved that in this case {3, 4} must be nominated, win the primaries, and take over the incumbent government {3, 5}. The remaining case to consider is where the individuals transition from {3, 5} to {3, 4}. Note that in this case, alternative {1, 2} would be accepted if it won the primaries: indeed, individuals 1 and 2 prefer {1, 2} over {3, 4} for obvious reasons, but individual 5 is also better off if {1, 2} is accepted, even though rejecting it would grant him an extra period of staying in power (as the discount factor β is close to 1). Similarly, any alternative that would lead to {1, 2} in the next period must also be accepted in the last vote. This implies, however, that the alternative ({1, 2} or some other one that leads to {1, 2}) must necessarily win the primaries if nominated (by the previous discussion, {4, 5} may not be a stable government, and hence the only choice the individuals make is whether to move ultimately to {3, 4} or to {1, 2}, of which they prefer the latter). This, in turn, means that {1, 2} must be nominated, for otherwise, say, individual 1 would be better off doing so. Hence, we have arrived at a contradiction in all possible cases, which proves that no MPE in pure strategies exists for any protocol ξ. Thus, the proof that neither cyclic nor acyclic pure-strategy MPEs exist is complete.

E.
Cycles, Acyclicity, and Order-Independent Equilibria

The acyclicity requirement in Theorem 4 (similar to the requirement of acyclic political equilibrium in Theorem 1) is not redundant. We next provide an example of a cyclic MPE.

EXAMPLE 8. Consider a society consisting of five individuals (n = 5). The only feasible governments are {1}, {2}, {3}, {4}, and {5}. Suppose that there is "perfect democracy," that is, lG = l = 0 for all G ∈ G, and that voting takes the form of simple majority


rule, that is, mG = m = 3 for all G. Suppose also that the competences of the different feasible governments are given by Γ_{{i}} = 5 − i, so {1} is the best government. Assume also that stage payoffs are given as in Example 2; in particular, wi(G) = Γ_G + 100·I_{i∈G}. These utilities imply that each individual receives a high value from being part of the government relative to the utility he or she receives from government competence. Finally, we define the protocols ξ^A_G as follows. If G = {1}, then ξ^{G\{G}}_G = ξ^{{{2},{3},{4},{5}}}_{{1}} = ({3}, {4}, {5}, {2}), and ξ^A_{{1}} for A ≠ {{2}, {3}, {4}, {5}} is obtained from ξ^{{{2},{3},{4},{5}}}_{{1}} = ({3}, {4}, {5}, {2}) by dropping governments that are not in A: for example, ξ^{{{2},{3},{5}}}_{{1}} = ({3}, {5}, {2}). For the other governments, we define ξ^{{{1},{3},{4},{5}}}_{{2}} = ({4}, {5}, {1}, {3}), ξ^{{{1},{2},{4},{5}}}_{{3}} = ({5}, {1}, {2}, {4}), ξ^{{{1},{2},{3},{5}}}_{{4}} = ({1}, {2}, {3}, {5}), and ξ^{{{1},{2},{3},{4}}}_{{5}} = ({2}, {3}, {4}, {1}); for other A, we again define ξ^A_G by dropping the governments absent in A.

Then there exists an equilibrium where the governments follow a cycle of the form {5} → {4} → {3} → {2} → {1} → {5} → · · · . To verify this claim, consider the following nomination strategies by the individuals. If the government is {1}, two individuals nominate {2} and the other three nominate {5}; if it is {2}, two individuals nominate {3} and three nominate {1}; if it is {3}, two nominate {4} and three nominate {2}; if it is {4}, two nominate {5} and three nominate {3}; and if it is {5}, two nominate {1} and three nominate {4}. Let us next turn to voting strategies. Here we appeal to Lemma 1 from Acemoglu, Egorov, and Sonin (2008), which shows that in this class of games it is sufficient to focus on strategies in which individuals always vote for the alternative yielding the highest payoff for them at each stage. In equilibrium, any alternative government that wins the primaries, on or off the equilibrium path, subsequently wins against the incumbent government. In particular, in such an equilibrium, supporting the incumbent government breaks a cycle, but only one person (the member of the incumbent government) is in favor of doing so. We next show that if only one individual deviates in the nomination stage, the next government in the cycle still wins in the primaries. Suppose that the current government is {3} (other cases are treated similarly). Then by


construction, governments {2} and {4} are necessarily nominated, and {1} or {5} may or may not be nominated. If the last vote in the primaries is between {2} and {4}, then {2} wins: indeed, all individuals know that both alternatives can take over the incumbent government, but {2} is preferred by individuals 1, 2, and 5 (because they want to become government members earlier rather than later). If, however, the last stage involves voting between {4} on the one hand and either {1} or {5} on the other, then {4} wins for a similar reason. Now, if either {1} or {5} is nominated, then in the first vote it faces {2}. All individuals know that accepting {2} will ultimately lead to a transition to {2}, whereas supporting {1} or {5} will lead to {4}. Because of that, at least three individuals (1, 2, and 5) will support {2}. This proves that {2} will win against the incumbent government {3}, provided that {2} and {4} participate in the primaries, which is necessarily the case if no more than one individual deviates. This, in turn, implies that the nomination strategies are also optimal in the sense that there is no profitable one-shot deviation for any individual. We can easily verify that this holds for the other incumbent governments as well. We have thus proved that the strategies we constructed form an SPE; because they are also Markovian, they form an MPE as well. Along the equilibrium path, the governments follow the cycle {5} → {4} → {3} → {2} → {1} → {5} → · · · . We can similarly construct a cycle that moves in the other direction, {1} → {2} → {3} → {4} → {5} → {1} → · · · (though this would require different protocols). Hence, for some protocols, cyclic equilibria are possible. Intuitively, a cycle enables different individuals who will not be part of the limiting (stable) government to enjoy the benefits of being in power. This example, and the intuition we suggest, also highlight that even when there is a cyclic equilibrium, an acyclic equilibrium still exists.
(This is clear from the statement in Theorem 1, and also from Theorem 5.) Example 8 also makes it clear that cyclic equilibria are somewhat artificial and less robust. Moreover, as emphasized in Theorems 1 and 4, acyclic equilibria have an intuitive and economically meaningful structure. In the text, we showed how certain natural restrictions rule out cyclic political equilibria (Theorem 2). Here we take a complementary approach and show that the refinement of MPE introduced

POLITICAL SELECTION AND BAD GOVERNMENTS


above, order-independent equilibrium, is also sufficient to rule out cyclic equilibria (even without the conditions in Theorem 2). This is established in Theorem 5 below, which also shows that under order-independent MPE, multi-step transitions, which are possible under general MPE as the next example shows, are ruled out as well.

EXAMPLE 9. Take the setup of Example 8, with the exception that l{1} = 1 (so that the consent of individual 1 is needed to change the government when the government is {1}). It is easy to check that the strategy profile constructed in Example 8 is an MPE in this case as well. However, this strategy profile will lead to different equilibrium transitions. Indeed, if the government is {1}, individual 1 will vote against any alternative that wins the primaries; thus alternative {5} will not be accepted in equilibrium, and government {1} will persist. Hence, in equilibrium, the transitions are {5} → {4} → {3} → {2} → {1}.

We now establish that order-independent equilibria always exist, are always acyclic, and lead to rapid (one-step) equilibrium transitions. As such, this theorem is a strong complement to Theorem 2 in the text, though its proof requires a slightly stronger version of Assumption 3, which we now introduce.

ASSUMPTION 3′. For any i ∈ I and any sequence of feasible governments H1, H2, . . . , Hq ∈ G that includes at least two different ones, we have

wi(H1) ≠ (wi(H2) + · · · + wi(Hq))/(q − 1).

Recall that Assumption 3 imposed that no two feasible governments have exactly the same competence. Assumption 3′ strengthens this and requires that the competence of any government not be the average of the competences of other feasible governments. Like Assumption 3, Assumption 3′ is satisfied "generically," in the sense that if it were not satisfied for a society, any small perturbation of competence levels would restore it.

THEOREM 5. Consider the game described above. 
Suppose that Assumptions 1, 2, and 3′ hold, and let φ : G → G be the political equilibrium defined by (6). Then there exists ε > 0 such that


QUARTERLY JOURNAL OF ECONOMICS

for any β and r satisfying β > 1 − ε and r/(1 − β) < ε, and any protocol ξ ∈ X :
1. There exists an order-independent MPE in pure strategies σ∗.
2. Any order-independent MPE in pure strategies σ∗ is acyclic.
3. In any order-independent MPE σ∗, we have that, if φ(G0) = G0, then there are no transitions and government Gt = G0 for each t; if φ(G0) ≠ G0, then there is a transition from G0 to φ(G0) in period t = 0, and there are no more transitions: Gt = φ(G0) for all t ≥ 1.
4. In any order-independent MPE σ∗, the payoff of each individual i ∈ I is given by ui0 = wi(G0) + (β/(1 − β)) wi(φ(G0)).
Proof of Theorem 5. See Online Appendix B.
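The closed form in part 4 is just the discounted sum along a path with a single period-0 transition. A quick numerical sanity check, using made-up stage payoffs (not values from the paper):

```python
# Illustrative check of u_i0 = w_i(G0) + (beta/(1-beta)) * w_i(phi(G0)).
# The stage payoffs below are hypothetical, chosen only for the check.
beta = 0.99
w_G0, w_phi = 30.0, 80.0   # w_i(G0) and w_i(phi(G0)), hypothetical

closed_form = w_G0 + beta / (1 - beta) * w_phi

# Direct discounted sum: w_i(G0) in period 0, then w_i(phi(G0)) forever.
direct = w_G0 + sum(beta ** t * w_phi for t in range(1, 5000))

assert abs(closed_form - direct) < 1e-6
```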

F. Stochastic Markov Perfect Equilibria

We next characterize the structure of (order-independent) stochastic MPEs (that is, order-independent MPEs in the presence of stochastic shocks) and establish the equivalence between order-independent (or acyclic) MPE and our notion of (acyclic stochastic) political equilibrium. Once again, the most important conclusion from this theorem is that MPEs of the dynamic game discussed here under stochastic shocks lead to the same behavior as our notion of stochastic political equilibrium introduced in Definition 2.

THEOREM 6. Consider the above-described stochastic environment. Suppose that Assumptions 1, 2, 3′, and 4 hold. Let φt : G → G be the political equilibrium defined by (6) for Gt. Then there exists ε > 0 such that for any β and r satisfying β > 1 − ε, r/(1 − β) < ε, and δ < ε, and for any protocol ξ ∈ X , we have the following results:
1. There exists an order-independent MPE in pure strategies.
2. Suppose that between periods t1 and t2 there are no shocks. Then in any order-independent MPE in pure strategies, the following results hold: if φt1 (Gt1) = Gt1, then there are no transitions between t1 and t2; if φt1 (Gt1) ≠ Gt1, then alternative φt1 (Gt1) is accepted during the first period of instability (after t1).


Proof of Theorem 6. See Online Appendix B.


G. Additional Examples

The next example (Example 10) shows that in stochastic environments, even though the likelihood of the best government coming to power is higher under more democratic institutions, the expected competence of stable governments may be lower.

EXAMPLE 10. Suppose n = 9, k = 4, l = l1 = 3, m = 5. Let the individuals be denoted 1, 2, . . . , 9, with decreasing ability. Namely, suppose that the abilities of individuals 1, . . . , 8 are given by γi = 2^(8−i), and γ9 = −10^6. Then the 14 stable governments, in order of decreasing competence, are

{1, 2, 3, 4}, {1, 2, 5, 6}, {1, 2, 7, 8}, {1, 3, 5, 7}, {1, 3, 6, 8}, {1, 4, 5, 8}, {1, 4, 6, 7},
{2, 3, 5, 8}, {2, 3, 6, 7}, {2, 4, 5, 7}, {2, 4, 6, 8}, {3, 4, 5, 6}, {3, 4, 7, 8}, {5, 6, 7, 8}.

(Note that this would be the list of stable governments for any decreasing sequence {γi}, i = 1, . . . , 9, except that, say, {1, 3, 6, 8} may become less competent than {1, 4, 5, 8}.) Now consider the same parameters, but take l = l2 = 2. Then there are three stable governments: {1, 2, 3, 4}, {1, 5, 6, 7}, and {2, 5, 8, 9}. For a random initial government, the probability that individual 9 will be part of the stable government that evolves is 9/126 = 1/14: of the (9 choose 4) = 126 feasible governments, there are 9 governments that lead to {2, 5, 8, 9}, which are {2, 5, 8, 9}, {2, 6, 8, 9}, {2, 7, 8, 9}, {3, 5, 8, 9}, {3, 6, 8, 9}, {3, 7, 8, 9}, {4, 5, 8, 9}, {4, 6, 8, 9}, and {4, 7, 8, 9}. Clearly, the expected competence of the stable government for l2 = 2 is negative, whereas for l1 = 3 it is positive, as no stable government then includes the least competent individual 9.

Next, we provide an example (Example 11) of a cyclic political equilibrium.

EXAMPLE 11. There are n = 19 players and three feasible governments: A = {1, 2, 3, 4, 5, 6, 7}, B = {7, 8, 9, 10, 11, 12, 13}, C = {13, 14, 15, 16, 17, 18, 19} (so k¯ = 7). The discount factor


is sufficiently close to 1, say, β > 0.999. The institutional parameters of these governments (lG and mG) and players' utilities from them are given in the following table:

G      lG   mG
{A}     0   10
{B}     0   11
{C}     0   12

Player:  1   2   3   4   5   6   7   8   9  10  11  12  13  14  15  16  17  18  19
wi(A):  90  90  90  90  90  90  90  60  60  30  30  30  30  30  30  30  30  30  45
wi(B):  60  20  20  20  20  20  80  80  80  80  80  80  80  20  20  20  20  20  20
wi(C):  10  10  10  10  10  10  10  10  10  10  10  10  70  70  70  70  70  70  70

We claim that φ, given by φ(A) = C, φ(B) = A, φ(C) = B, is a (cyclic) political equilibrium. Let us first check condition (i) of Definition 1. The set of players with Vi(C) > Vi(A) is {10, 11, 12, 13, 14, 15, 16, 17, 18, 19} (as β is close to 1, the simplest way to check this condition for player i is to verify whether wi(A) is greater or less than the average of wi(A), wi(B), wi(C); the case where these are equal deserves more detailed study, and is critical for this example). These ten players form a winning coalition in A. The set of players with Vi(A) > Vi(B) is {2, 3, 4, 5, 6, 14, 15, 16, 17, 18, 19}; these eleven players form a winning coalition in B. The set of players with Vi(B) > Vi(C) is {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12}; these twelve players form a winning coalition in C. Let us now check condition (ii) of Definition 1. Suppose the current government is A; then the only H we need to consider is B. Indeed, if H = C, then Vi(H) > Vi(φ(A)) = Vi(C) is impossible for any player, and if H = A, then Vi(H) > Vi(φ(A)) cannot hold for a winning coalition of players, as the opposite inequality Vi(φ(A)) > Vi(A) holds for a winning coalition (condition (i)), and any two winning coalitions intersect in this example. But for H = B, condition (ii) of Definition 1 is also satisfied, as Vi(B) > wi(A)/(1 − β) holds only for players from the set {10, 11, 12, 13, 14, 15, 16, 17, 18}, which is not a winning coalition, as it has only nine players (here we used the fact that player 19 has Vi(A) > Vi(B) but not Vi(B) > wi(A)/(1 − β) for β close to 1; this case is borderline, as 45 is the average of 20 and 70). If the current government is B, then, as before, only government H = C needs to be considered. But Vi(C) > Vi(A) holds for ten players only, and this is not a winning coalition in B. Finally, if the current government is C, then again, only the case H = A needs to be checked. But Vi(A) > Vi(B) holds for only eleven players, and this is not a winning coalition in C. 
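These coalition counts can be verified numerically. A minimal sketch, taking β = 0.9995 (any β sufficiently close to 1 gives the same counts) and reading the payoffs from the table above; along the cycle, continuation values solve a three-equation recursion:

```python
# Numerical check of the cyclic equilibrium phi(A) = C, phi(B) = A, phi(C) = B.
# Along the cycle, V(A) = w(A) + beta*V(C), V(C) = w(C) + beta*V(B),
# V(B) = w(B) + beta*V(A), so V(A) = (w(A) + beta*w(C) + beta^2*w(B))/(1 - beta^3).
beta = 0.9995
wA = [90, 90, 90, 90, 90, 90, 90, 60, 60, 30, 30, 30, 30, 30, 30, 30, 30, 30, 45]
wB = [60, 20, 20, 20, 20, 20, 80, 80, 80, 80, 80, 80, 80, 20, 20, 20, 20, 20, 20]
wC = [10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 70, 70, 70, 70, 70, 70, 70]

def values(first, second, third):
    # Discounted value of the periodic path first, second, third, first, ...
    return [(a + beta * b + beta ** 2 * c) / (1 - beta ** 3)
            for a, b, c in zip(first, second, third)]

VA = values(wA, wC, wB)   # from A the path is A, C, B, A, ...
VB = values(wB, wA, wC)
VC = values(wC, wB, wA)

# Condition (i): supporters of each prescribed transition form winning
# coalitions (m_A = 10, m_B = 11, m_C = 12; l_G = 0 for every G).
print(sum(vc > va for vc, va in zip(VC, VA)))   # 10 players with V(C) > V(A)
print(sum(va > vb for va, vb in zip(VA, VB)))   # 11 players with V(A) > V(B)
print(sum(vb > vc for vb, vc in zip(VB, VC)))   # 12 players with V(B) > V(C)

# Key step of condition (ii) in state A: players with V(B) > w(A)/(1 - beta)
# number only nine, short of the ten needed to win in A.
print(sum(vb > a / (1 - beta) for vb, a in zip(VB, wA)))   # 9
```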
So both conditions of Definition 1 are satisfied, and thus φ is a cyclic political equilibrium.
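Returning to Example 10, its counts can likewise be verified by direct enumeration. The sketch below assumes a reduced form of stability that the example boils down to (a government is stable iff it shares at most l − 1 members with every more competent stable government; with m = 5 and n = 9 the remaining voting constraints are slack); this is a simplification for these parameters, not the general definition:

```python
from itertools import combinations

# Abilities from Example 10: gamma_i = 2**(8-i) for i = 1..8, gamma_9 = -10**6.
gamma = {i: 2 ** (8 - i) for i in range(1, 9)}
gamma[9] = -10 ** 6

govs = sorted(combinations(range(1, 10), 4),
              key=lambda g: -sum(gamma[i] for i in g))  # decreasing competence

def stable_set(l):
    # Greedy scan in decreasing competence: keep a government iff it shares
    # at most l-1 members with every more competent government kept so far.
    stable = []
    for g in govs:
        if all(len(set(g) & set(s)) < l for s in stable):
            stable.append(g)
    return stable

s3 = stable_set(3)
s2 = stable_set(2)
print(len(s3), len(s2))   # 14 3

# With l = 2, an initial government moves to the most competent stable
# government sharing at least l members with it; count those ending at {2,5,8,9}.
def limit(g, stable, l):
    return next(s for s in stable if len(set(g) & set(s)) >= l)

bad = sum(1 for g in govs if limit(g, s2, 2) == (2, 5, 8, 9))
print(bad, len(govs))     # 9 126
```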


The results in the paper are obtained for β close to 1 (and, in the case with shocks, for r small). If β is close to 0, then an MPE in pure strategies always exists (in the stochastic case, it does for any r). In fact, it is straightforward to show that the equilibrium mapping φ0 takes the following form (again, assuming that governments are enumerated from the best one to the worst one): for all q ≥ 1, define Nq = { j : 1 ≤ j < q, |G j ∩ Gq| ≥ l}, and then, as in the case with β close to 1, define φ0 by

φ0(Gq) = Gq if Nq = ∅, and φ0(Gq) = Gmin{ j∈Nq } if Nq ≠ ∅.

Intuitively, β = 0, or close to 0, corresponds to myopic players, and the equilibrium mapping simply picks the best government that includes at least l members of the current government. Consequently, there is no longer any requirement related to the stability of the new government φ(G), which resulted from dynamic considerations. We now demonstrate (Example 12) that for intermediate values of β the situation is significantly different (and the same argument applies when r is not close to 0): there might not exist an MPE in pure strategies.

EXAMPLE 12. Set β = 1/2, and suppose n = 5. Assume k = 2 and l = 1. That is, feasible governments consist of two players, and a transition requires the consent of a simple majority of individuals, which must include a member of the incumbent government. For simplicity, restrict the set of feasible governments, G, to four elements: {1, 2}, {1, 3}, {2, 4}, {3, 4}. Preferences of players over these governments are defined in the table below:

Player:  w({1, 2})  w({1, 3})  w({2, 4})  w({3, 4})
1:           90         70         10          1
2:           90         10         70          1
3:           25         40         15         35
4:           20         15         80         25
5:           50         40         10          1

In this example, 5 is a “dummy player” who always prefers more competent governments because he has no opportunity to become a member of the government. Players 1 and 2 have well-behaved preferences, so governments {1, 3} and {2, 4}


would lead to {1, 2} in one step. Both 1 and 2 value their own membership in the government much more than the quality of governments they are not members of, so transitions from {1, 3} and {2, 4} to these governments will not happen in equilibrium. Player 3 also prefers to stay in {3, 4} forever rather than moving to {1, 2} via {1, 3}. In contrast, player 4 would prefer to transit to {1, 2} through {2, 4} rather than staying in {3, 4} forever. The majority, however, prefers the latter transition to the former one, and these considerations together lead to nonexistence. To obtain a contradiction, suppose that there exists an MPE in pure strategies that implements some transition rule φ. First, we prove that φ({1, 2}) = φ({1, 3}) = {1, 2}. Then we will show that φ({3, 4}) cannot be any of the feasible governments. It is straightforward that φ({1, 2}) = {1, 2}, as otherwise 1 and 2 would block the last transition, as they prefer to stay in {1, 2} for one more period. Now let us take government {1, 3} and prove that φ({1, 3}) = {1, 2}. To show this, notice first that a transition to {2, 4} or {3, 4} would never be approved at the last voting, no matter what the equilibrium transition from {1, 3} is. Indeed, a transition to {2, 4} will ultimately be blocked by 1, 3, 5, for each of whom a single period in {2, 4} is worse than staying in {1, 3} for one period, no matter what the subsequent path is; this follows from evaluating discounted utilities with β = 1/2. A transition to {3, 4} will be blocked by 1, 2, 5 for similar reasons. But then it is easy to see that {1, 3} cannot be stable either. Indeed, if alternative {1, 2} is proposed for the primaries, then the ultimate choice that players make is between staying in {1, 3} and transiting to {1, 2}, as transiting to either of the other two governments is not an option, since such a transition would be blocked at the last stage. 
But a transition to {1, 2} is preferred by all players except 3; hence, it will eventually be the outcome of the primaries and will defeat {1, 3} in the election stage. Anticipating this, if {1, 2} is not in the primaries in equilibrium, player 1 is better off proposing {1, 2}, and this is a profitable deviation. This proves that φ({1, 3}) = {1, 2}. Our next step is to prove that φ({2, 4}) = {1, 2}. A transition to {3, 4} will be blocked at the last stage regardless of the equilibrium transition rule, because both 2 and 4 are worse off, and the consent of one incumbent is needed. The same is true about a transition to {1, 3}. Hence, {2, 4} is either stable or must


transition to {1, 2}; an argument similar to the one before shows that {2, 4} cannot be stable, as then alternative {1, 2} would win the primaries and be implemented, and therefore some player will propose it for the primaries. This shows that φ({2, 4}) = {1, 2}. Finally, consider state {3, 4}. It is easy to see that φ({3, 4}) cannot equal {1, 2}. Moreover, an immediate transition to {1, 2} will not be accepted at the last voting stage regardless of the equilibrium transition, because both 3 and 4 would prefer to stay in {3, 4} for an extra period (indeed, even if in the next period they expect to transition to the state they like least, such as {1, 3} for player 4, staying in {3, 4} is still preferred, as they know that the next transition would be to {1, 2} anyway). Consequently, there are three possibilities. Consider first the case where {3, 4} is stable. This means that an offer to move to {1, 3} would be blocked by 3 and 4 (both get a lower expected utility from that than from staying in {3, 4}). Hence, the only alternative that may be accepted is {2, 4}, and it will actually be accepted if proposed for primaries, as it gives a higher discounted utility to players 1, 2, 4, 5 than staying in {3, 4}. The same reasoning as before suggests that it will then be nominated for primaries, which in turn means that {3, 4} cannot be stable. Consider the possibility that φ({3, 4}) = {2, 4}. If this is the case, then alternative {1, 3} would be accepted if it makes its way to the final voting. Indeed, for players 1, 3, 5 (and perhaps even player 2, depending on the value of r), transiting to {1, 3} is preferred to staying in {3, 4}, even if eventually a transition to {2, 4} will happen. 
Hence, if some player nominates {1, 3}, then players will compare transiting to {1, 3}, transiting to {2, 4}, and staying in {3, 4}, and here {1, 3} is the Condorcet winner: it beats staying in {3, 4}, as we just proved, and it beats transiting to {2, 4} because 1, 3, 5 prefer transiting to {1, 2} via {1, 3} rather than via {2, 4}. Consequently, {1, 3} will be implemented if nominated, and hence some player, say, 1, will be better off nominating it. Consequently, φ({3, 4}) = {2, 4} is impossible. The last possibility is that φ({3, 4}) = {1, 3}. But then, given β = 1/2, both 3 and 4 are better off blocking this transition to {1, 3} at the final voting in order to stay in {3, 4} for at least one more period (and perhaps more, depending on r). This


shows that a transition to {1, 3} cannot happen in equilibrium. This final contradiction shows that there is no MPE in pure strategies in this example.

MASSACHUSETTS INSTITUTE OF TECHNOLOGY AND CANADIAN INSTITUTE FOR ADVANCED RESEARCH
KELLOGG SCHOOL OF MANAGEMENT
NEW ECONOMIC SCHOOL

REFERENCES

Acemoglu, Daron, "Oligarchic versus Democratic Societies," Journal of the European Economic Association, 6 (2008), 1–44.
Acemoglu, Daron, Philippe Aghion, and Fabrizio Zilibotti, "Distance to Frontier, Selection, and Economic Growth," Journal of the European Economic Association, 4 (2006), 37–74.
Acemoglu, Daron, Georgy Egorov, and Konstantin Sonin, "Dynamics and Stability in Constitutions, Coalitions, and Clubs," NBER Working Paper No. 14239, 2008.
Acemoglu, Daron, and James Robinson, "Why Did the West Extend the Franchise? Democracy, Inequality, and Growth in Historical Perspective," Quarterly Journal of Economics, 115 (2000), 1167–1199.
——, Economic Origins of Dictatorship and Democracy (New York: Cambridge University Press, 2006).
——, "Persistence of Power, Elites and Institutions," American Economic Review, 98 (2008), 267–293.
Acemoglu, Daron, James Robinson, and Thierry Verdier, "Kleptocracy and Divide-and-Rule," Journal of the European Economic Association, 2 (2004), 162–192.
Acemoglu, Daron, Davide Ticchi, and Andrea Vindigni, "Emergence and Persistence of Inefficient States," Journal of the European Economic Association, forthcoming, 2010.
Aghion, Philippe, Alberto Alesina, and Francesco Trebbi, "Democracy, Technology and Growth," in Institutions and Economic Performance, Elhanan Helpman, ed. (Cambridge, MA: Harvard University Press, 2009).
Alfoneh, Ali, "Ahmadinejad vs. the Technocrats," American Enterprise Institute Online (http://www.aei.org/outlook/27966, 2008).
Baker, Peter, and Susan Glasser, Kremlin Rising: Vladimir Putin's Russia and the End of Revolution, updated ed. (Dulles, VA: Potomac Books, 2007).
Banks, Jeffrey S., and Rangarajan K. Sundaram, "Optimal Retention in Principal/Agent Models," Journal of Economic Theory, 82 (1998), 293–323.
Barberà, Salvador, and Matthew Jackson, "Choosing How to Choose: Self-Stable Majority Rules and Constitutions," Quarterly Journal of Economics, 119 (2004), 1011–1048.
Barberà, Salvador, Michael Maschler, and Jonathan Shalev, "Voting for Voters: A Model of the Electoral Evolution," Games and Economic Behavior, 37 (2001), 40–78.
Barro, Robert, "The Control of Politicians: An Economic Model," Public Choice, 14 (1973), 19–42.
——, "Democracy and Growth," Journal of Economic Growth, 1 (1996), 1–27.
Besley, Timothy, "Political Selection," Journal of Economic Perspectives, 19 (2005), 43–60.
Besley, Timothy, and Anne Case, "Incumbent Behavior: Vote Seeking, Tax Setting and Yardstick Competition," American Economic Review, 85 (1995), 25–45.
Besley, Timothy, and Stephen Coate, "An Economic Model of Representative Democracy," Quarterly Journal of Economics, 112 (1997), 85–114.
——, "Sources of Inefficiency in a Representative Democracy: A Dynamic Analysis," American Economic Review, 88 (1998), 139–156.


Besley, Timothy, and Masayuki Kudamatsu, "Making Autocracy Work," in Institutions and Economic Performance, Elhanan Helpman, ed. (Cambridge, MA: Harvard University Press, 2009).
Brooker, Paul, Non-Democratic Regimes: Theory, Government, and Politics (New York: St. Martin's Press, 2000).
Bueno de Mesquita, Bruce, Alastair Smith, Randolph D. Siverson, and James D. Morrow, The Logic of Political Survival (Cambridge, MA: MIT Press, 2003).
Caselli, Francesco, and Massimo Morelli, "Bad Politicians," Journal of Public Economics, 88 (2004), 759–782.
Cox, Gary, and Jonathan Katz, "Why Did the Incumbency Advantage in U.S. House Elections Grow?" American Journal of Political Science, 40 (1996), 478–497.
Diermeier, Daniel, Michael Keane, and Antonio Merlo, "A Political Economy Model of Congressional Careers," American Economic Review, 95 (2005), 347–373.
Egorov, Georgy, and Konstantin Sonin, "Dictators and Their Viziers: Endogenizing the Loyalty–Competence Trade-off," Journal of the European Economic Association, forthcoming, 2010.
Ferejohn, John, "Incumbent Performance and Electoral Control," Public Choice, 50 (1986), 5–25.
Jones, Benjamin, and Benjamin Olken, "Do Leaders Matter? National Leadership and Growth since World War II," Quarterly Journal of Economics, 120 (2005), 835–864.
Lagunoff, Roger, "Markov Equilibrium in Models of Dynamic Endogenous Political Institutions," Georgetown University, Mimeo, 2006.
Mattozzi, Andrea, and Antonio Merlo, "Political Careers or Career Politicians?" Journal of Public Economics, 92 (2008), 597–608.
McKelvey, Richard, and Raymond Riezman, "Seniority in Legislatures," American Political Science Review, 86 (1992), 951–965.
Menashri, David, Post-Revolutionary Politics in Iran: Religion, Society and Power (London: Frank Cass Publishers, 2001).
Meredith, Martin, Mugabe: Power, Plunder and the Struggle for Zimbabwe (New York: Perseus Books, 2007).
Messner, Matthias, and Mattias Polborn, "Voting on Majority Rules," Review of Economic Studies, 71 (2004), 115–132.
Minier, Jenny, "Democracy and Growth: Alternative Approaches," Journal of Economic Growth, 3 (1998), 241–266.
Moldovanu, Benny, and Eyal Winter, "Order Independent Equilibria," Games and Economic Behavior, 9 (1995), 21–34.
Niskanen, William, Bureaucracy and Representative Government (New York: Aldine-Atherton, 1971).
Osborne, Martin, and Al Slivinski, "A Model of Political Competition with Citizen-Candidates," Quarterly Journal of Economics, 111 (1996), 65–96.
Padro-i-Miquel, Gerard, "The Control of Politicians in Divided Societies: The Politics of Fear," Review of Economic Studies, 74 (2007), 1259–1274.
Persson, Torsten, Gerard Roland, and Guido Tabellini, "Separation of Powers and Political Accountability," Quarterly Journal of Economics, 112 (1997), 1163–1202.
Przeworski, Adam, and Fernando Limongi, "Modernization: Theories and Facts," World Politics, 49 (1997), 155–183.
Roberts, Kevin, "Dynamic Voting in Clubs," London School of Economics, Mimeo, 1999.
Shleifer, Andrei, and Robert Vishny, "Corruption," Quarterly Journal of Economics, 108 (1993), 599–617.
Tsebelis, George, Veto Players: How Political Institutions Work (Princeton, NJ: Princeton University Press, 2002).

THE DEVELOPING WORLD IS POORER THAN WE THOUGHT, BUT NO LESS SUCCESSFUL IN THE FIGHT AGAINST POVERTY∗

SHAOHUA CHEN AND MARTIN RAVALLION

A new data set on national poverty lines is combined with new price data and almost 700 household surveys to estimate absolute poverty measures for the developing world. We find that 25% of the population lived in poverty in 2005, as judged by what "poverty" typically means in the world's poorest countries. This is higher than past estimates. Substantial overall progress is still indicated—the corresponding poverty rate was 52% in 1981—but progress was very uneven across regions. The trends over time and regional profile are robust to various changes in methodology, though precise counts are more sensitive.

I. INTRODUCTION

When the extent of poverty in a given country is assessed, a common (real) poverty line is typically used for all citizens within that country, such that two people with the same standard of living—measured in terms of current purchasing power over commodities—are treated the same way in that both are either poor or not poor. Similarly, for the purpose of measuring poverty in the world as a whole, a common standard is typically applied across countries. This assumes that a person's poverty status depends on his or her own command over commodities, and not on where he or she lives independently of that.1 In choosing a poverty line for a given country one naturally looks for a line that is considered appropriate for that country, while acknowledging that rich countries tend to have higher real

∗ A great many colleagues at the World Bank helped us in obtaining the necessary data for this paper and answered our many questions. An important acknowledgement goes to the staff of over 100 governmental statistics offices who collected the primary household and price survey data. Our thanks go to Prem Sangraula, Yan Bai, Xiaoyang Li, and Qinghua Zhao for their invaluable help in setting up the data sets we use here. The Bank's Development Data Group helped us with our many questions concerning the 2005 ICP and other data issues; we are particularly grateful to Yuri Dikhanov and Olivier Dupriez. We have also benefited from the comments of Francois Bourguignon, Gaurav Datt, Angus Deaton, Massoud Karshenas, Aart Kraay, Peter Lanjouw, Rinku Murgai, Ana Revenga, Luis Servén, Merrell Tuck, Dominique van de Walle, Kavita Watsa, and the journal's editors, Robert Barro and Larry Katz, and anonymous referees. We are especially grateful to Angus Deaton, whose comments prompted us to provide a more complete explanation of why we obtain a higher global poverty count with the new data. 
These are our views and should not be attributed to the World Bank or any affiliated organization. [email protected]; [email protected]
1. For further discussion of this assumption, see Ravallion (2008b) and Ravallion and Chen (2010).
© 2010 by the President and Fellows of Harvard College and the Massachusetts Institute of

Technology. The Quarterly Journal of Economics, November 2010



poverty lines than poor ones. (Goods that are luxuries in rural India, say, are considered absolute necessities in the United States.) There must, however, be some lower bound, because the cost of a nutritionally adequate diet (and even of social needs) cannot fall to zero. Focusing on that lower bound for the purpose of measuring poverty in the world as a whole gives the resulting poverty measure a salience in characterizing “extreme poverty,” though higher lines are also needed to obtain a complete picture of the distribution of levels of living. This reasoning led Ravallion, Datt, and van de Walle (RDV) (1991)—in background research for the 1990 World Development Report (World Bank 1990)—to propose two international lines: the lower one was the predicted line for the poorest country and the higher one was a more typical line amongst low-income countries. The latter became known as the “$1-a-day” line. In 2004, about one in five people in the developing world—close to one billion people—were poor by this standard (Chen and Ravallion 2007). This paper reports on the most extensive revision yet of the World Bank’s estimates of poverty measures for the developing world.2 In the light of a great deal of new data, the paper estimates the global poverty count for 2005 and updates all past estimates back to 1981. New data from three sources make the need for this revision compelling. The first is the 2005 International Comparison Program (ICP). The price surveys done by the ICP have been the main data source for estimating PPPs, which serve the important role of locating the residents of each country in the “global” distribution. Prior to the present paper, our most recent global poverty measures had been anchored to the 1993 round of the ICP. A better funded round of the ICP in 2005, managed by the World Bank, took considerable effort to improve the price surveys, including developing clearer product descriptions. 
A concern about the 1993 and prior ICP rounds was a lack of clear standards in defining internationally comparable commodities. This is a serious concern in comparing the cost of living between poor countries and rich ones, given that there is likely to be an economic gradient in the quality of commodities consumed and (relatively homogeneous) “name brands” are less common in poor countries. Without strict standards in defining the products to be priced, there is a risk 2. By the “developing world” we mean all low- and middle-income countries— essentially the Part 2 member countries of the World Bank.


that one will underestimate the cost of living in poor countries by confusing quality differences with price differences. The new ICP data imply some dramatic revisions to past estimates, consistent with the view that the old ICP data had underestimated the cost of living in poor countries (World Bank 2008b). The second data source is a new compilation of poverty lines. The original "$1-a-day" line was based on a compilation of national lines for only 22 developing countries, mostly from academic studies in the 1980s. Although this was the best that could be done at the time, the sample was hardly representative of developing countries even in the 1980s. Since then, national poverty lines have been developed for many other countries. Based on a new compilation of national lines for 75 developing countries provided by Ravallion, Chen, and Sangraula (2009), this paper implements updated international poverty lines, in the spirit of the aim of the original $1-a-day line, namely to measure global poverty by the standards of the poorest countries. The third data source is the large number of new household surveys now available. We draw on 675 surveys, spanning 115 countries and 1979–2006. (In contrast, the original RDV estimates used 22 surveys, one per country; Chen and Ravallion [2004] used 450 surveys.) Each of our international poverty lines at PPP is converted to local currencies in 2005 and then converted to the prices prevailing at the time of the relevant household survey using the best available Consumer Price Index (CPI). (Equivalently, the survey data on household consumption or income for the survey year are expressed in the prices of the ICP base year, and then converted to PPP dollars.) Then the poverty rate is calculated from that survey. All intertemporal comparisons are real, as assessed using the country-specific CPI. We make estimates at three-year intervals over the years 1981–2005. 
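The line-conversion procedure just described can be sketched in a few lines; every number below is a hypothetical placeholder, not a value from the paper:

```python
# Sketch of the PPP-line conversion described above (hypothetical numbers).
ppp_2005 = 16.9        # assumed 2005 consumption PPP (local currency per $)
line_ppp = 1.25        # an international line in 2005 PPP dollars
cpi_2005, cpi_survey = 100.0, 118.3   # hypothetical CPI levels

# Step 1: PPP line -> local currency units at 2005 prices.
line_lcu_2005 = line_ppp * ppp_2005

# Step 2: reflate to the survey year with the country-specific CPI.
line_survey_year = line_lcu_2005 * cpi_survey / cpi_2005

# Step 3: headcount ratio from the survey distribution (hypothetical data,
# per-capita consumption in local currency per day).
consumption = [12.0, 18.5, 22.0, 30.1, 55.0, 80.2]
headcount = sum(c < line_survey_year for c in consumption) / len(consumption)
print(round(line_survey_year, 2), headcount)
```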
Interpolation/extrapolation methods are used to line up the survey-based estimates with these reference years, including 2005. We also present a new method of mixing survey data with national accounts (NAS) data to try to reduce survey-comparability problems. For this purpose, we treat the national accounts data on consumption as the data for predicting a Bayesian prior for the survey mean and the actual survey as the new information. Under log-normality with a common variance, the mixed posterior estimator is the geometric mean of the survey mean and its predicted value based on the NAS. These new data call for an upward revision of our past estimates of the extent of poverty in the world, judged by the


standards of the world’s poorest countries. The new PPPs imply that the cost of living in poor countries is higher than was thought, implying greater poverty at any given poverty line. Working against this effect, the new PPPs also imply a downward revision of the international value of the national poverty lines in the poorest countries. On top of this, we also find that an upward revision to the national poverty lines is called for, largely reflecting sample biases in the original data set used by RDV. The balance of these data revisions implies a higher count of global poverty by the standards of the world’s poorest countries. However, we find that the poverty profile across regions and the overall rate of progress against absolute poverty are fairly robust to these changes, and to other variations on our methodology. II. PURCHASING POWER PARITY EXCHANGE RATES International economic comparisons have long recognized that market exchange rates are deceptive, given that some commodities are not traded internationally; these include services but also many goods, including some food staples. Furthermore, there is likely to be a systematic effect, stemming from the fact that low real wages in developing countries entail that nontraded goods tend to be relatively cheap. In the literature, this is known as the “Balassa–Samuelson effect” (Balassa 1964; Samuelson 1964), which is the most widely accepted theoretical explanation for an empirical finding known as the “Penn effect”—that richer countries tend to have higher price indices, as given by the ratios of their PPPs to the market exchange rate.3 Thus GDP comparisons based on market exchange rates tend to understate the real incomes of developing countries. Similarly, market exchange rates overstate the extent of poverty in the world when judged relative to a given US$ poverty line. 
Global economic measurement, including poverty measurement, has relied instead on PPPs, which give conversion rates for a given currency with the aim of ensuring parity in terms of purchasing power over commodities, both internationally traded and nontraded. Here we only point to some salient features of the new PPPs relevant to measuring poverty in the developing world.4

3. The term "Penn effect" stems from the Penn World Tables (Summers and Heston 1991).
4. Broader discussions of PPP methodology can be found in Ackland, Dowrick, and Freyens (2007), World Bank (2008b), Deaton and Heston (2010), and Ravallion (2010).

We focus on the PPP for

POVERTY IN THE DEVELOPING WORLD


individual consumption, which we use later in constructing our global poverty measures.5 The 2005 ICP is the most complete and thorough assessment to date of how the cost of living varies across the world, with 146 countries participating.6 The world was divided into six regions (Africa, Asia–Pacific, Commonwealth of Independent States, South America, Western Asia, and Eurostat–OECD), with different product lists for each. The ICP collected primary data on the prices of 600–1,000 goods and services (depending on the region), grouped under 155 "basic headings" corresponding to the expenditure categories in the national accounts; 110 of these relate to household consumption. The price surveys covered a large sample of outlets in each country and were done by the government statistics offices in each country, under supervision from regional and World Bank authorities.

The price surveys for the 2005 ICP were done on a more scientific basis than those of prior rounds. Following the recommendations of the Ryten Report (United Nations 1998), stricter standards were used in defining internationally comparable qualities of the goods. Region-specific detailed product lists and descriptions were developed, involving extensive collaboration amongst the countries and the relevant regional ICP offices. Lacking such detailed product descriptions, the 1993 ICP likely priced lower qualities of goods in poor countries than would have been found in (say) the U.S. market.7 This is consistent with the findings of Ravallion, Chen, and Sangraula (RCS) (2009), which suggest that the 2005 data imply a sizable underestimation of the 1993 PPPs. Furthermore, the extent of this underestimation tends to be greater for poorer countries.
The regional PPP estimates were linked through a common set of global prices collected in 18 countries spanning the regions, giving what the ICP calls "ring comparisons." The design of these ring comparisons was also a marked improvement over past ICP rounds.8

5. This is the PPP for "individual consumption expenditure by households" in World Bank (2008b). It does not include imputed values of government services to households.
6. As compared to 117 in the 1993 ICP; the ICP started in 1968 with PPP estimates for just 10 countries, based on rather crude price surveys.
7. See Ahmad (2003) on the problems in the implementation of the 1993 ICP round.
8. The method of deriving the regional effects is described in Diewert (2008). Also see the discussion in Deaton and Heston (2010).


The World Bank uses a multilateral extension of Fisher price indices, known as the EKS method, rather than the Geary–Khamis (GK) method used by the Penn World Tables. The GK method overstates real incomes in poor countries (given that the international prices are quantity-weighted), imparting a downward bias to global poverty measures, as shown by Ackland, Dowrick, and Freyens (2007).9 There were other differences with past ICP rounds, though they were less relevant to poverty measurement.10 Changes in data and methodology are known to confound PPP comparisons across benchmark years (Dalgaard and Sørensen 2002; World Bank 2008a). It can also be argued that poverty comparisons over time for a given country should respect domestic prices.11 We follow standard practice in doing the PPP conversion only once, in 2005, for a given country; all estimates are then revised back in time consistently with the CPI for that country. We acknowledge, however, that the national distributions formed this way may well lose purchasing power comparability as one goes further back in time from the ICP benchmark year. Some dramatic revisions to past PPPs are implied by the 2005 ICP, not least for the two most populous developing countries, China and India—neither of which actually participated in the price surveys for the 1993 ICP.12 The 1993 consumption PPP used for China (estimated from non-ICP sources) was 1.42 yuan to the US$ in 1993, whereas the new estimate based on the 2005 ICP is 3.46 yuan (4.09 if one excludes government consumption). The corresponding price index level (US$ = 100) went from 25% in 1993 to 52% in 2005. So the Penn effect is still evident, but it has declined markedly relative to past estimates, with a new PPP at about half the market exchange rate rather than one-fourth. Adjusting solely for the differential inflation rates in the United States and China, one would have expected the 2005 PPP

9. Though this problem can be fixed; see Iklé (1972). In the 2005 ICP, the Africa region chose to use Iklé's version of the GK method (African Development Bank 2007).
10. New methods for measuring government compensation and housing were used. Adjustments were also made for the lower average productivity of public sector workers in developing countries (lowering the imputed value of the services derived from public administration, education, and health).
11. Nuxoll (1994) argues that the real growth rates measured in domestic prices better reflect the trade-offs facing decision makers at the country level, and thus have a firmer foundation in the economic theory of index numbers.
12. In India's case, the 1993 PPP was an extrapolation from the 1985 PPP based on CPIs, whereas in China's case the PPP was based on non-ICP sources and extrapolations using CPIs.


to be 1.80 yuan, not 3.46. Similarly, India’s 1993 consumption PPP was Rs 7.0, whereas the 2005 PPP is Rs 16, and the price level index went from 23% to 35%. If one updated the 1993 PPP for inflation one would have obtained a 2005 PPP of Rs 11 rather than Rs 16. Although there were many improvements in the 2005 ICP, the new PPPs still have some problems. Four concerns stand out in the present context. First, making the commodity bundles more comparable across countries (within a given region) invariably entails that some of the reference commodities are not typically consumed in certain countries, and prices are then drawn from untypical outlets such as specialist stores, probably at high prices. However, the expenditure weights are only available for the 115 basic headings (corresponding to the national accounts). So the prices for uncommonly consumed goods within a given basic heading may end up getting undue weight. This problem could be avoided by only pricing representative country-specific bundles, but this would reintroduce the quality bias discussed above, which has plagued past ICP rounds. Using region-specific bundles helps get around the problem, though it also arises in the ring comparisons used to compare price levels in different regions.13 Second, there is a problem of “urban bias” in the ICP surveys for some counties; the next section describes our methods of addressing this problem. Third, as was argued in RDV, the weights attached to different commodities in the conventional PPP rate may not be appropriate for the poor; Section VII examines the sensitivity of our results to the use of alternative “PPPs for the poor” available for a subset of countries from Deaton and Dupriez (2009). Fourth, the PPP is a national average. Just as the cost of living tends to be lower in poorer countries, one expects it to be lower in poorer regions within one country, especially in rural areas. 
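The inflation-adjustment check applied above to China and India can be sketched as follows. The CPI ratios in the example are hypothetical, chosen only so that China's 1.42 yuan PPP of 1993 updates to roughly the 1.80 figure cited above (they are not the actual Chinese or U.S. CPI series):

```python
def expected_ppp(ppp_base, cpi_ratio_country, cpi_ratio_us):
    """Base-year PPP updated purely for differential inflation.

    cpi_ratio_* = CPI(2005) / CPI(base year). Any gap between this
    figure and the new benchmark PPP reflects data and method
    revisions rather than inflation.
    """
    return ppp_base * cpi_ratio_country / cpi_ratio_us

# Hypothetical CPI ratios reproducing the paper's 1.42 -> ~1.80 example.
print(round(expected_ppp(1.42, 1.65, 1.30), 2))  # 1.8
```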
Ravallion, Chen, and Sangraula (2007) have allowed for urban–rural cost-of-living differences facing the poor, and provided an urban–rural breakdown of our prior global poverty measures using the 1993 PPP. We plan to update these estimates in future work.

13. The OECD and Eurostat have used controls for "representativeness" (based on the price survey), following Cuthbert and Cuthbert (1988). This has not been done for developing countries.

What do these revisions to past PPPs imply for measures of global extreme poverty? Given that the bulk of the PPPs have risen for developing countries, the poverty count will tend to rise at any given poverty line in PPP dollars. However, the story is more


complex, given that the same changes in the PPPs alter the (endogenous) international poverty line, which is anchored to the national poverty lines in the poorest countries in local currency units. Next we turn to the poverty lines, and then the household surveys, after which we will be able to put the various data together to see what they suggest about the extent of poverty in the world.

III. NATIONAL AND INTERNATIONAL POVERTY LINES

We use a range of international lines, representative of the national lines found in the world's poorest countries. For this purpose, RCS compiled a new set of national poverty lines for developing countries drawn from the World Bank's country-specific Poverty Assessments (PAs) and the Poverty Reduction Strategy Papers (PRSPs) done by the governments of the countries concerned. These documents provide a rich source of data on poverty at the country level, and almost all include estimates of national poverty lines. The RCS data set was compiled from the most recent PAs and PRSPs over the years 1988–2005. In the source documents, each poverty line is given in the prices for a specific survey year (for which the subsequent poverty measures are calculated). In most cases, the poverty line was also calculated from the same survey (though there are some exceptions, for which preexisting national poverty lines, calibrated to a prior survey, were updated using the consumer price index). About 80% of these reports used a version of the "cost of basic needs" method, in which the food component of the poverty line is the expenditure needed to purchase a food bundle specific to each country that yields a stipulated food energy requirement.14 To this is added an allowance for nonfood spending, which is typically anchored to the nonfood spending of people whose food spending, or sometimes total spending, is near the food poverty line. There are some notable differences between the old (RDV) and new (RCS) data sets on national poverty lines.
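The "cost of basic needs" construction just described can be sketched in a few lines. All names, prices, and quantities here are illustrative, not drawn from any country's actual poverty line:

```python
def cost_of_basic_needs_line(prices, food_bundle, nonfood_allowance):
    """Sketch of the "cost of basic needs" method (hypothetical inputs).

    food_bundle maps items to daily quantities chosen to meet the
    stipulated food-energy requirement; prices gives local prices per
    unit. The nonfood allowance is typically anchored to the nonfood
    spending of households whose food spending is near the food line.
    """
    food_cost = sum(prices[item] * qty for item, qty in food_bundle.items())
    return food_cost + nonfood_allowance

# Illustrative numbers only:
prices = {"coarse grain": 0.80, "vegetables": 1.00}  # $ per kg
bundle = {"coarse grain": 0.40, "vegetables": 0.20}  # kg per day
line = cost_of_basic_needs_line(prices, bundle, 0.30)  # $ per day
```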
The RDV data were for the 1980s (with a mean year of 1984), whereas the new and larger compilation in RCS is post-1990 (mean of 1999); in no case do the proximate sources overlap. The RCS data cover 75 developing countries, whereas the earlier data included only 22.

14. This method, and alternatives, are discussed in detail in Ravallion (1994, 2008c).

The


RDV data set used rural poverty lines when there was a choice, whereas the RCS data set estimated national average lines. And the RDV data set was unrepresentative of the poorest region, Sub-Saharan Africa (SSA), with only four countries from that region (Burundi, South Africa, Tanzania, and Zambia), whereas the RCS data set has a good spread across regions. The sample bias in the RDV data set was unavoidable at the time (1990), but it can now be corrected. Although there are similarities across countries in how poverty lines are set, there is considerable scope for discretion. National poverty lines must be considered socially relevant in the specific country.15 If a proposed poverty line is widely seen as too frugal by the standards of a society, then it will surely be rejected. Nor will a line that is too generous be easily accepted. The stipulated food-energy requirements are similar across countries, but the food bundles that yield a given nutritional intake can vary enormously (as in the share of calories from coarse starchy staples rather than more processed food grains, and the share from meat and fish). The nonfood components also vary. The judgments made in setting the various parameters of a poverty line are likely to reflect prevailing notions of what poverty means in each country. There must be a lower bound to the cost of the nutritional requirements for any given level of activity (with the basal metabolic rate defining an absolute lower bound). The cost of the (food and nonfood) goods needed for social needs must also be bounded below (as argued by Ravallion and Chen [2010]). The poverty lines found in many poor countries are certainly frugal. For example, the World Bank (1997) gives the average daily food bundle consumed by someone living in the neighborhood of India's national poverty line in 1993.
The daily food bundle comprised 400 g of coarse rice and wheat and 200 g of vegetables, pulses, and fruit, plus modest amounts of milk, eggs, edible oil, spices, and tea. After buying such a food bundle, one would have about $0.30 left (at 1993 PPP) for nonfood items. India's official line is frugal by international standards, even among low-income countries (Ravallion 2008a).

15. This is no less true of the poverty lines constructed for World Bank Poverty Assessments, which emerge out of close collaboration between the technical team (often including local statistical staff and academics) and the government of the country concerned.

To give another example, the daily food bundle used by Bidani and Ravallion (1993) to construct Indonesia's poverty line comprises 300 g of rice, 100 g of tubers, and amounts of vegetables,


FIGURE I
National Poverty Lines Plotted against Mean Consumption at 2005 PPP
Bold symbols are fitted values from a nonparametric regression.

fruits, and spices similar to those in the India example but also includes fish and meat (about 140 g in all per day). Such poverty lines are clearly too low to be acceptable in rich countries, where much higher overall living standards mean that higher standards are also used for identifying the poor. For example, the U.S. official poverty line in 2005 for a family of four was $13 per person per day (http://aspe.hhs.gov/poverty/05poverty.shtml). Similarly, we can expect middle-income countries to have higher poverty lines than low-income countries. The expected pattern in how national poverty lines vary is confirmed by Figure I, which plots the poverty lines compiled by RCS in 2005 PPP dollars against log household consumption per capita, also in 2005 PPP dollars, for the 74 countries with complete data. The figure gives a nonparametric regression of the national poverty lines against log mean consumption. Above a certain point, the poverty line rises with mean consumption. The overall elasticity of the poverty line to mean consumption is about 0.7. However, the slope is essentially zero among the poorest 20 or so countries, where absolute poverty clearly dominates. The gradient evident in Figure I is driven more by the nonfood component of


FIGURE II
Comparison of New and Old National Poverty Lines at 1993 PPP
Bold symbols are fitted values from a nonparametric regression.

the poverty lines (which accounts for about 60% of the overall elasticity) than the food component, although there is still an appreciable share attributable to the gradient in food poverty lines (RCS). To help see how this new compilation of national poverty lines compares to those used to set the original "$1-a-day" line, Figure II gives both the RCS and RDV lines evaluated at 1993 prices and converted to dollars using the 1993 PPPs; both sets of national poverty lines are plotted against consumption per capita at 1993 PPP. The relationship between the RCS national poverty lines and consumption per capita (at 1993 PPP) looks similar to Figure I, although the 1993 PPPs suggest a slightly steeper gradient amongst the poorest countries. But the more important observation from Figure II is that the RDV lines are lower at given mean consumption; the absolute gap diminishes as consumption falls, but still persists among the poorest countries. For the poorest fifteen countries ranked by consumption per capita at 1993 PPP, the mean poverty line in the RCS data set is $43.92 per month ($1.44 a day)16 versus $33.51 ($1.10 a day) using the old (RDV) series

16. Note that this is at 1993 PPP; $1.44 in 1993 prices represents $1.95 a day at 2005 U.S. prices.


for eight countries with consumption below the upper bound of consumption for those fifteen countries. The RCS sample is more recent, and possibly there has been some upward drift in national poverty lines over time, although that does not seem very likely given that few growing developing countries have seen an upward revision to their poverty lines, which can be politically difficult. (Upward revisions have a long cycle; for example, China and India are only now revising their official poverty lines upward, after they stood for 30–40 years.) The other differences between the two samples noted above may well be more important in explaining the upward shift seen in Figure II in moving from the RDV to the RCS sample. For example, there is some evidence that poverty lines for SSA tend to be higher than those for countries at similar mean consumption levels (RCS), and (as noted above) SSA was underrepresented in the original RDV data set of national poverty lines.17 We use five international poverty lines at 2005 PPP: (i) $1.00 a day, which is very close to India's national poverty line;18 (ii) $1.25, which is the mean poverty line for the poorest fifteen countries;19 (iii) $1.45, obtained by updating the 1993 $1.08 line used by Chen and Ravallion (2001, 2004, 2007) for inflation in the United States; (iv) $2.00, which is the median of the RCS sample of national poverty lines for developing and transition economies and is also approximately the line obtained by updating the $1.45 line at 1993 PPP for inflation in the United States; and (v) $2.50, twice the $1.25 line, which is also the median poverty line of all except the poorest fifteen countries in the RCS data set of national poverty lines.

17. The residuals in Figure I are about $0.44 per day higher for SSA on average, with a standard error of $0.27.
18. India's official poverty lines for 2004/2005 were Rs 17.71 and Rs 11.71 per day for urban and rural areas, respectively. Using our urban and rural PPPs for 2005, these represent $1.03 per day (Ravallion 2008a). An Expert Group constituted by the Planning Commission (2009) has recently recommended a higher rural poverty line, while retaining the prior official line for urban areas. The implied new national line is equivalent to $1.17 per day for 2005 when evaluated at our implicit urban and rural PPPs. Note that the Expert Group does not claim that the higher line reflects a "relative poverty" effect, but rather that it corrects for claimed biases in past price deflators.
19. The fifteen countries are mostly in SSA and comprise Malawi, Mali, Ethiopia, Sierra Leone, Niger, Uganda, Gambia, Rwanda, Guinea-Bissau, Tanzania, Tajikistan, Mozambique, Chad, Nepal, and Ghana. Their median poverty line is $1.27 per day. Note that this is a different set of reference countries from the one used by RDV. Deaton (2010) questions this change in the set of reference countries. However, it would be hard to justify keeping the reference group fixed over time, given what we now know about the bias in the original RDV sample of national lines.

The range from $1.00 to $1.45 is roughly the 95% confidence


interval for the mean poverty line for the poorest fifteen countries (RCS). To test the robustness of qualitative comparisons, we also estimate the cumulative distribution functions (CDFs) up to a maximum poverty line, which we set at the U.S. line of $13 per day.20 Although we present results for multiple poverty lines, we consider the $1.25 line the closest in spirit to the original idea of the "$1-a-day" line. The use of the poorest fifteen countries as the reference group has a strong rationale. The relationship between the national poverty lines and consumption per person can be modeled very well (in terms of goodness of fit) by a piecewise linear function that has zero slope up to some critical level of consumption and rises above that point. The econometric tests reported in RCS imply that national poverty lines tend to rise with consumption per person once it exceeds about $2 per day, which is very near the upper bound of the consumption levels found among these fifteen countries.21 Of course, there is still variance in the national poverty lines at any given mean, including among the poorest countries; RCS estimate the robust standard error of the $1.25 line to be $0.10 per day. We use the same PPPs to convert the international lines to local currency units (LCUs). Three countries were treated differently: China, India, and Indonesia. In all three we used separate urban and rural distributions. For China, the ICP survey was confined to 11 cities, and the evidence suggests that the cost of living is lower for the poor in rural areas (Chen and Ravallion 2010). We treat the ICP PPP as an urban PPP for China and use the ratio of urban to rural national poverty lines to derive the corresponding rural poverty line in local currency units. For India, the ICP included rural areas, but they were underrepresented. We derived urban and rural poverty lines consistent with both the urban–rural differential in the national poverty lines and the relevant

20. First-order dominance up to a poverty line of zmax implies that all standard (additively separable) poverty measures rank the distributions identically for all poverty lines up to zmax; see Atkinson (1987). (When CDFs intersect, unambiguous rankings may still be possible for a subset of poverty measures.)
21. RCS use a suitably constrained version of Hansen's (2000) method for estimating a piecewise linear ("threshold") model. (The constraint is that the slope of the lower linear segment must be zero and there is no potential discontinuity at the threshold.) This method gave an absolute poverty line of $1.23 (t = 6.36) and a threshold level of consumption (above which the poverty line rises linearly) very close to the $60 per month figure used to define the reference group. Ravallion and Chen (2010) use this piecewise linear function in measuring "weakly relative poverty" in developing countries.


features of the design of the ICP samples for India; further details can be found in Ravallion (2008a). For Indonesia, we converted the international poverty line to LCUs using the official consumption PPP from the 2005 ICP. We then unpack that poverty line to derive implicit urban and rural lines that are consistent with the ratio of the national urban-to-rural lines for Indonesia.

IV. HOUSEHOLD SURVEYS AND POVERTY MEASURES

We have estimated all poverty measures ourselves from the primary sample survey data, rather than relying on preexisting poverty or inequality measures of uncertain comparability. The primary data come in various forms, ranging from micro data (the most common) to specially designed grouped tabulations from the raw data, constructed following our guidelines.22 All our previous estimates have been updated to ensure internal consistency. We draw on 675 nationally representative surveys for 115 countries.23 Taking the most recent survey for each country, about 1.23 million households were interviewed in the surveys used for our 2005 estimate. The surveys were mostly done by governmental statistics offices as part of their routine operations. Not all available surveys were included; a survey was dropped if there were known to be serious problems of comparability with the rest of the data set.24

IV.A. Poverty Measures

Following past practice, poverty is assessed using household expenditure on consumption per capita or household income per capita as measured from the national sample surveys.25 Households are ranked by consumption (or income) per person.

22. In the latter case we use parametric Lorenz curves to fit the distributions. These provide a more flexible functional form than the log-normality assumption used by (inter alia) Bourguignon and Morrisson (2002) and Pinkovskiy and Sala-i-Martin (2009). Log-normality is a questionable approximation; the tests reported in Lopez and Servén (2006) reject log-normality of consumption, though it performs better for income. Note also that past papers in the literature have applied log-normality to distributional data for developing countries that had already been generated by our own parametric Lorenz curves, as provided in the World Bank's World Development Indicators. This overfitting makes the fit of the log-normal distribution to these secondary data look deceptively good.
23. A full listing is found in Chen and Ravallion (2009).
24. Also, we have not used surveys for 2006 or 2007 when we already have a survey for 2005—the latest year for which we provide estimates in this paper.
25. The use of a "per capita" normalization is standard in the literature on developing countries. This stems from the general presumption that there is rather little scope for economies of size in consumption for poor people. However, that assumption can be questioned; see Lanjouw and Ravallion (1995).


The distributions are weighted by household size and sample expansion factors. Thus our poverty counts give the number of people living in households with per capita consumption or income below the international poverty line. When there is a choice we use consumption rather than income, in the expectation that consumption is the better measure of current economic welfare.26 Although intertemporal credit and risk markets do not appear to work perfectly, even poor households have opportunities for saving and dissaving, which they can use to protect their living standards from income fluctuations, which can be particularly large in poor agrarian economies. A fall in income due to a crop failure in one year does not necessarily mean destitution. There is also the (long-standing) concern that measuring economic welfare by income entails double counting over time; saving (or investment) is counted initially in income and then again when one receives the returns from that saving. Consumption is also thought to be measured more accurately than income, especially in developing countries. Of the 675 surveys, 417 allow us to estimate the distribution of consumption; this is true of all the surveys used in the Middle East and North Africa (MENA), South Asia, and SSA, although income surveys are more common in Latin America. The measures of consumption (or income, when consumption is unavailable) in our survey data set are reasonably comprehensive, including both cash spending and imputed values for consumption from own production. But we acknowledge that even the best consumption data need not adequately reflect certain “nonmarket” dimensions of welfare, such as access to certain public services, or intrahousehold inequalities. 
Furthermore, with the expansion in government spending on basic education and health in developing countries, it can be argued that the omission of the imputed values for these services from survey-based consumption aggregates will understate the rate of poverty reduction. How much so is unclear, particularly in the light of mounting evidence from micro studies on absenteeism of public teachers and healthcare workers in a number of developing countries.27

26. See Ravallion (1994), Slesnick (1998), and Deaton and Zaidi (2002). Consumption may also be a better measure of long-term welfare, though this is less obvious (Chaudhuri and Ravallion 1994).
27. See Chaudhury et al. (2006). Based on such evidence, Deaton and Heston (2010, p. 44) remark that "To count the salaries of AWOL government employees as 'actual' benefits to consumers adds statistical insult to original injury."

However,


there have clearly been some benefits to poor people from higher public spending on these services. Our sensitivity tests in Section VII, in which we mix survey means with NAS consumption aggregates (which, in principle, should include the value of government services to households), will help address this concern. These and other limitations of consumption as a welfare metric also suggest that our poverty measures need to be supplemented by other data, such as on education attainments and infant and child mortality, to obtain a complete picture of how living standards are evolving. We use standard poverty measures for which the aggregate measure is the (population-weighted) sum of individual measures. In this paper we report three such poverty measures.28 The first measure is the headcount index, given by the percentage of the population living in households with consumption or income per person below the poverty line. We also give estimates of the number of poor, as obtained by applying the estimated headcount index to the population of each region under the assumption that the countries without surveys are a random subsample of the region. Our third measure is the poverty gap index, which is the mean distance below the poverty line as a proportion of the line, where the mean is taken over the whole population, counting the nonpoor as having zero poverty gaps. Having converted the international poverty line at PPP to local currency in 2005, we convert it to the prices prevailing at each survey date using the most appropriate available country-specific CPI.29 The weights in this index may or may not accord well with consumer budget shares at the poverty line. In periods of relative price shifts, this will bias our comparisons of the incidence of poverty over time, depending on the extent of (utility-compensated) substitution possibilities for people at the poverty line.
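The two individual-level measures just defined, along with the CPI step for putting the line into survey-year prices, can be sketched as follows. The function names and the sample data are illustrative, not the paper's actual code:

```python
def poverty_measures(consumption, weights, z):
    """Weighted headcount index (H) and poverty gap index (PG).

    consumption: per capita consumption (or income) by household;
    weights: household size times the sample expansion factor, so
    both measures are person-weighted; z: the poverty line in the
    same prices as consumption. The nonpoor get zero poverty gaps.
    """
    total = sum(weights)
    h = sum(w for y, w in zip(consumption, weights) if y < z) / total
    pg = sum(w * (z - y) / z for y, w in zip(consumption, weights) if y < z) / total
    return h, pg

def line_in_survey_prices(line_2005_lcu, cpi_survey_year, cpi_2005):
    """Convert the 2005 local-currency line to survey-year prices."""
    return line_2005_lcu * cpi_survey_year / cpi_2005

# Tiny hypothetical sample at a $1.25 line: H = 60%, PG = 20%.
h, pg = poverty_measures([0.75, 1.00, 2.00, 3.00], [2.0, 1.0, 1.0, 1.0], 1.25)
```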
In the aggregate, 90% of the population of the developing world is represented by surveys within two years of 2005.30

28. The website we have created to allow replication of these estimates, PovcalNet, provides a wider range of measures from the literature on poverty measurement.
29. Note that the same poverty line is generally used for urban and rural areas. There are three exceptions, China, India, and Indonesia, where we estimate poverty measures separately for urban and rural areas and use sector-specific CPIs.
30. Some countries have graduated from the set of developing countries; we apply the same definition over time to avoid selection bias. In this paper our definition is anchored to 2005.

Survey coverage by region varies from 74% of the population of the MENA


to 98% of the population of South Asia. Some countries have more surveys than others; of the 115 countries, 14 have only one survey, 17 have two, and 14 have three, whereas 70 have four or more over the period, of which 23 have 10 or more surveys. Naturally, the further back we go, the smaller the number of surveys, reflecting the expansion in household survey data collection for developing countries since the 1980s. Because the PPP conversion is only done in 2005, estimates may well become less reliable further back in time, depending on the quality of the national CPIs. Coverage also deteriorates in the last year or two of the series, given the lags in survey processing. We made the judgment that there were too few surveys to include years prior to 1981 or after 2005. The working paper version (Chen and Ravallion 2009) gives further details, including the number of surveys by year, the lags in survey availability, and the proportion of the population represented by surveys by year. Most regions are quite well covered from the latter half of the 1980s (East and South Asia being well covered from 1981 onward).31 Unsurprisingly, we have weak coverage in Eastern Europe and Central Asia (EECA) for the 1980s; many of these countries did not officially exist then, so we have to rely heavily on back projections. More worrying is the weak coverage for SSA in the 1980s; indeed, our estimates for the early 1980s rely heavily on projections based on distributions around 1990.

IV.B. Heterogeneity and Measurement Errors in Surveys

Survey instruments differ between countries, including how the questions are asked (such as recall periods), response rates, whether the surveys are used to measure consumption or income, and what gets included in the survey's aggregate for consumption or income. These differences are known to matter to the statistics calculated from surveys, including poverty and inequality measures.
QUARTERLY JOURNAL OF ECONOMICS

It is questionable whether survey instruments should be identical across countries; some adaptation to local circumstances may well make the results more comparable even though the surveys differ. Nonetheless, the heterogeneity is a concern. The literature on measuring global poverty and inequality has dealt with this concern in two ways. The first makes an effort to iron out obvious comparability problems using the micro data, either by reestimating the consumption/income aggregates or by the more radical step of dropping a survey. Beyond that, aggregation across surveys is expected to help reduce the problem, which is otherwise essentially ignored. This is the approach we have taken in the past, and for our benchmark estimates below; we call it the "survey-based method." The second approach rescales the survey means to be consistent with the national accounts (NAS) but assumes that the surveys get the relative distribution ("inequality") right. Thus all levels of consumption or income in the survey are multiplied by the ratio of the per capita NAS aggregate (consumption or GDP) to the survey mean.32 We can call this the "rescaling method."

The choice depends in part on the data and the application. The first method is far more data-intensive, as it requires the primary data; this rules it out for historical purposes (indeed, for estimates much before 1980). For example, Bourguignon and Morrisson (2002) had no choice but to use the rescaling method, given that they had to rely on secondary sources (notably prior inequality statistics) to estimate aggregate poverty and inequality measures back to 1820.

Arguments can also be made for and against each approach. Proponents of the rescaling method claim that it corrects for survey mismeasurement. In this view, NAS consumption is more accurate because it captures things that are often missing from surveys, such as imputed rents for owner-occupied housing and government-provided services to households. Although this is true in principle, compliance with the UN Statistical Division's System of National Accounts (SNA) is uneven across countries in practice. Most developing countries still have not fully implemented SNA guidelines, including those for estimating consumption, which is typically calculated residually at the commodity level. In this and other respects (including how output is measured), the NAS is of questionable reliability in many low-income countries.33 Given how consumption is estimated in practice in the NAS of most low-income countries, we would be loath to assume that it is more accurate than a well-designed survey.

31. China's survey data for the early 1980s are probably less reliable than in later years, as discussed in Chen and Ravallion (2004), where we also describe our methods of adjusting for certain comparability problems in the China data, including changes in valuation methods.
32. In one version of this method, Bhalla (2002) replaces the survey mean by consumption from the NAS. Instead, Bourguignon and Morrisson (2002), Sala-i-Martin (2006), and Pinkovskiy and Sala-i-Martin (2009) anchor their measures to GDP per capita rather than to consumption.
33. As Deaton and Heston (2010, p. 5) put it, "The national income accounts of many low-income countries remain very weak, with procedures that have sometimes not been updated for decades."
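The mechanics of the rescaling method are simple enough to sketch. The toy numbers below are our own construction, not the paper's data; the point is only that scaling every survey observation by the NAS-to-survey-mean ratio shifts the mean while leaving relative inequality unchanged:

```python
# Rescaling method (sketch, toy numbers): multiply each survey consumption
# level by the ratio of NAS consumption per capita to the survey mean.
# Relative positions (and hence inequality measures) are unchanged.
survey = [0.8, 1.1, 1.6, 2.5, 4.0]        # hypothetical consumption ($/day)
survey_mean = sum(survey) / len(survey)   # 2.0
nas_mean = 2.6                            # hypothetical NAS consumption per capita

rescaled = [c * nas_mean / survey_mean for c in survey]
print(round(sum(rescaled) / len(rescaled), 2))               # 2.6: mean now matches NAS
print(abs(rescaled[0] / rescaled[-1] - survey[0] / survey[-1]) < 1e-9)  # True: ratios preserved
```

Because every observation is scaled by the same factor, any scale-invariant inequality measure (a Gini, say) is identical before and after; only the level, and hence poverty against a fixed line, changes.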

Proponents of the survey-based method acknowledge that there are survey measurement errors but question the assumptions of the rescaling method: that the gaps between survey means and NAS aggregates are due solely to underestimation in the surveys, and that the measurement errors are distribution-neutral, such that the surveys get inequality right. The discrepancy between the two data sources reflects many factors, including differences in what is included.34 Selective compliance with the randomized assignment in a survey, and underreporting by those who do comply, also play a role. Survey statisticians do not generally take the view that nonsampling errors affect only the mean and not inequality. More plausibly, underestimation of the mean by surveys due to selective compliance comes with underestimation of inequality.35 For instance, high-income households might be less likely to participate because of the high opportunity cost of their time or concerns about intrusion in their affairs.36 Naturally, evidence on this is scarce, but in one study of compliance with the "long form" of the U.S. Census, Groves and Couper (1998, Chapter 5) found that higher socioeconomic status tended to be associated with lower compliance. Estimates by Korinek, Mistiaen, and Ravallion (2007) of the micro compliance function (the individual probability of participating in a survey as a function of own income) for the Current Population Survey in the United States suggest a steep economic gradient, with very high compliance rates for the poor, falling to barely 50% for the rich. Korinek, Mistiaen, and Ravallion (2006) examine the implications of selective compliance for inequality and poverty measurement and find little bias in the poverty measures but sizable underestimation of inequality in the United States. In other words, their results suggest that the surveys underestimate both the mean and inequality but get poverty roughly right; replacing the survey mean with consumption from the NAS would underestimate poverty.

This argument may be less compelling for some other sources of divergence between the survey mean and NAS consumption per person. Suppose, for example, that the surveys exclude imputed rent for owner-occupied housing (practices are uneven in how this is treated) and that this is a constant proportion of expenditure. Then the surveys get inequality right and the mean wrong. Similarly, the private consumption aggregate in the NAS should include government expenditures on services consumed by households, which are rarely valued in surveys. Of course, it is questionable whether these items can be treated as a constant proportion of expenditure.

The implications of measurement errors also depend on how the poverty line is set. Here it is important to note that the underlying national poverty lines were largely calibrated to the surveys, so measurement errors will be passed on to the poverty lines in a way that attenuates the bias in the final measure of poverty. By the most common methods of setting poverty lines, underestimation of nonfood spending in the surveys will lead to underestimation of the poverty line, which is anchored to the spending of sampled households living near the food poverty line (or with food-energy intakes near the recommended norms). Correcting for underestimation of nonfood spending in surveys would then require higher poverty lines. The poverty measures based on these lines will therefore be more robust to survey measurement errors than would be the case if the line were set independently of the surveys.

34. For example, NAS private consumption includes imputed rents for owner-occupied housing, imputed services from financial intermediaries, and the expenditures of nonprofit organizations; none of these are included in consumption aggregates from standard household surveys. Surveys, on the other hand, are probably better at picking up consumption from informal-sector activities. For further discussion, see Ravallion (2003) and Deaton (2005). In the specific case of India (with one of the largest gaps between the survey-based estimate of mean consumption and that from the NAS), see Central Statistical Organization (2008).
35. Although the qualitative implications for an inequality measure of even a monotonic income effect on compliance are theoretically ambiguous (Korinek, Mistiaen, and Ravallion 2006).
36. Groves and Couper (1998) provide a useful overview of the arguments and evidence on the factors influencing survey compliance.

IV.C. A Mixed Method

Arguably the more important concern here is the heterogeneity of the surveys, given that the level of the poverty line is always somewhat arbitrary. In an interesting variation on the rescaling method, Karshenas (2003) replaces the survey mean by its predicted value from a regression on NAS consumption per capita.
Karshenas thus uses a stable linear function of NAS consumption, with mean equal to the overall mean of the survey means. This assumes that national accounts consumption data are comparable across countries and ignores the country-specific information on levels in the surveys. As noted above, that is a questionable assumption. However, unlike other examples of rescaling methods, Karshenas assumes that the surveys are correct on average and focuses instead on the problem of survey comparability, for which purpose the poverty measures are anchored to the national accounts data.

Where we depart from the Karshenas method is that we do not ignore the country-specific survey means. When one has two less-than-ideal measures of roughly the same thing, it is natural to combine them. For virtually all developing countries, surveys are far less frequent than NAS data. Because one is measuring poverty at the survey date, the survey can be thought of as yielding the Bayesian posterior estimate, whereas NAS consumption provides the Bayesian prior. A result from Bayesian statistics then provides an interpretation of the mixing parameter, under the assumption that consumption is log-normally distributed with a common variance in the prior distribution and in the new survey data. That assumption is unlikely to hold; log-normality of consumption can be rejected statistically (Lopez and Servén 2006), and (as noted) it is unlikely that the prior based on the NAS would have the same relative distribution as the survey. However, this assumption does at least offer a clear conceptual foundation for a sensitivity test, given the likely heterogeneity in surveys. In particular, it can then be shown readily that if the prior is the expected value of the survey mean, conditional on national accounts consumption, and consumption is log-normally distributed with a common variance, then the posterior estimate is the geometric mean of the survey mean and its expected value.37 Over time, the relevant growth rate is the (arithmetic) mean of the growth rates from the two data sources.
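The geometric-mean result can be written out as a short derivation (our notation; a sketch under the stated log-normality and common-variance assumptions, not the paper's proof, which is in the working paper version):

```latex
% Let \bar{y} be the survey mean and \hat{y} = E(\bar{y} \mid \text{NAS})
% its regression prediction. With log consumption normal and a common
% variance \sigma^2 in the prior and in the survey data, the posterior
% mean of log consumption weights the two sources equally:
\[
  \log y^{*} \;=\; \tfrac{1}{2}\bigl(\log \bar{y} + \log \hat{y}\bigr)
  \quad\Longrightarrow\quad
  y^{*} \;=\; \bigl(\bar{y}\,\hat{y}\bigr)^{1/2},
\]
% the geometric mean of the survey mean and its expected value. Taking
% log-differences over time, the implied growth rate is the arithmetic
% mean of the growth rates from the two data sources:
\[
  \Delta \log y^{*} \;=\; \tfrac{1}{2}\bigl(\Delta \log \bar{y} + \Delta \log \hat{y}\bigr).
\]
```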
V. BENCHMARK ESTIMATES

We report aggregate results for nine "benchmark years," at three-year intervals over 1981–2005, for the regions of the developing world and (given their populations) China and India.38 Jointly with this paper, we have updated the PovcalNet website to provide public access to the underlying country-level data set, so that users can replicate these calculations and try different assumptions, including different poverty measures, poverty lines, and country groupings, and can derive estimates for individual countries. The PovcalNet site will also provide updates as new data come in.

37. The working paper version (Chen and Ravallion 2009) provides a proof.
38. Chen and Ravallion (2004) describe our interpolation and projection methods to deal with the fact that national survey years differ from our benchmark years.

TABLE I
HEADCOUNT INDICES OF POVERTY (% BELOW EACH LINE)

         1981   1984   1987   1990   1993   1996   1999   2002   2005
(a) Aggregate for developing world
$1.00    41.4   34.4   29.8   29.5   27.0   23.1   22.8   20.3   16.1
$1.25    51.8   46.6   41.8   41.6   39.1   34.4   33.7   30.6   25.2
$1.45    58.4   54.4   49.9   49.4   47.2   42.6   41.6   38.1   32.1
$2.00    69.2   67.4   64.2   63.2   61.5   58.2   57.1   53.3   47.0
$2.50    74.6   73.7   71.6   70.4   69.2   67.2   65.9   62.4   56.6
(b) Excluding China
$1.00    29.4   27.6   26.9   24.4   23.3   22.9   22.3   20.7   18.6
$1.25    39.8   38.3   37.5   35.0   34.1   33.8   33.1   31.3   28.2
$1.45    46.6   45.5   44.5   42.3   41.6   41.4   40.8   38.9   37.0
$2.00    58.6   58.1   57.2   55.6   55.6   55.9   55.6   54.0   50.3
$2.50    65.9   66.7   67.3   65.4   66.0   67.9   67.4   66.0   62.9

Note. The headcount index is the percentage of the relevant population living in households with consumption per person below the poverty line.

V.A. Aggregate Measures

Table I gives our new estimates for a range of lines from $1.00 to $2.50 a day in 2005 prices; Table II gives the corresponding counts of the number of poor. We calculate the global aggregates under the assumption that the countries without surveys have the poverty rates of their region. The following discussion focuses mainly on the $1.25 line, though we test the robustness of our qualitative poverty comparisons to that choice.

TABLE II
NUMBERS OF POOR (MILLIONS)

           1981     1984     1987     1990     1993     1996     1999     2002     2005
(a) Aggregate for developing world
$1.00   1,515.0  1,334.7  1,227.2  1,286.7  1,237.9  1,111.9  1,145.6  1,066.6    876.0
$1.25   1,896.2  1,808.2  1,720.0  1,813.4  1,794.9  1,656.2  1,696.2  1,603.1  1,376.7
$1.45   2,137.7  2,111.5  2,051.7  2,153.5  2,165.0  2,048.1  2,095.7  1,997.9  1,751.7
$2.00   2,535.1  2,615.4  2,639.7  2,755.9  2,821.4  2,802.1  2,872.1  2,795.7  2,561.5
$2.50   2,731.6  2,858.7  2,944.6  3,071.0  3,176.7  3,231.4  3,316.6  3,270.6  3,084.7
(b) Excluding China
$1.00     784.5    786.2    814.9    787.6    793.4    823.2    843.2    821.9    769.9
$1.25   1,061.1  1,088.3  1,134.3  1,130.2  1,162.3  1,213.4  1,249.5  1,240.0  1,169.0
$1.45   1,244.0  1,293.2  1,348.9  1,365.3  1,418.9  1,488.1  1,541.7  1,543.5  1,535.2
$2.00   1,563.0  1,652.1  1,732.7  1,795.1  1,895.2  2,009.9  2,101.9  2,140.8  2,087.9
$2.50   1,759.5  1,895.4  2,037.6  2,110.2  2,250.4  2,439.2  2,546.4  2,615.6  2,611.0

We find that the percentage of the population of the developing world living below $1.25 per day was halved over the 25-year period, falling from 52% to 25% (Table I). (Expressed as a proportion of the population of the world, the decline is from 42% to 21%; this assumes that there is nobody living below $1.25 per day in the developed countries.39) The number of poor fell by slightly over 500 million, from 1.9 billion to 1.4 billion, over 1981–2005 (Table II). The trend rate of decline in the $1.25-a-day poverty rate over 1981–2005 was about one percentage point per year; when the poverty rate is regressed on time, the estimated trend is −0.99 points per year, with a standard error of 0.06 (R² = .97). This is slightly higher than the trend we had obtained using the 1993 PPPs, which was −0.83 points per year (standard error = 0.11). When this trend is simply projected forward to 2015, the estimated headcount index for that year is 16.6% (standard error of 1.5%). Given that the 1990 poverty rate was 41.6%, the new estimates indicate that the developing world as a whole is on track to achieve the Millennium Development Goal (MDG) of halving the 1990 poverty rate by 2015. The one-point-per-year rate of decline also holds if one focuses on the period since 1990 (not just because this is the base year for the MDG but also recalling that the data for the 1980s are weaker). The $1.25 poverty rate fell ten percentage points over the 1980s (from 52% to 42%) and a further 17 points between 1990 and 2005.

It is notable that 2002–2005 shows a larger (absolute and proportionate) drop in the poverty rate than other periods. Given that lags in survey data availability mean that our 2005 estimate is more heavily dependent on nonsurvey data (notably the extrapolations based on NAS consumption growth rates), there is a concern that this sharper decline over 2002–2005 might be exaggerated. However, that does not seem likely. The bulk of the decline is in fact driven by countries for which survey data are available close to 2005. The region for which nonsurvey data have played the biggest role for 2005 is SSA. If instead we assume that there was in fact no decline in the poverty rate over 2002–2005 in SSA, then the total headcount index (for all developing countries) for the $1.25 line in 2005 is 26.2%, still a sizable decline relative to 2002.

China's success against absolute poverty has clearly played a major role in this overall progress. The lower panels of Tables I and II repeat the calculations excluding China.

39. The population of the developing world in 2005 was 5,453 million, representing 84.4% of the world's total population; in 1981, it was 3,663 million, or 81.3% of the total.
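The reported trend and 2015 projection can be reproduced from the Table I series. This is a sketch under the assumption that the paper's trend is a plain OLS regression of the headcount index on time, which does recover the stated −0.99 and 16.6%:

```python
# OLS trend of the $1.25-a-day headcount index (Table I, panel (a))
# on time, and a linear projection to 2015.
years = list(range(1981, 2006, 3))  # the nine benchmark years
rates = [51.8, 46.6, 41.8, 41.6, 39.1, 34.4, 33.7, 30.6, 25.2]

n = len(years)
xbar = sum(years) / n
ybar = sum(rates) / n
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(years, rates))
         / sum((x - xbar) ** 2 for x in years))
proj_2015 = ybar + slope * (2015 - xbar)

print(round(slope, 2))      # -0.99 (percentage points per year)
print(round(proj_2015, 1))  # 16.6 (% headcount index projected for 2015)
```

The same calculation on the panel (b) series (excluding China) gives the weaker trend discussed next.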
Excluding China, the $1.25-a-day poverty rate falls from 40% to 28% over 1981–2005, with a rate of decline that is less than half the trend including China; the regression estimate of the trend falls to −0.43 percentage points per year (standard error of 0.03; R² = .96), which is almost identical to the rate of decline for the non-China developing world that we had obtained using the 1993 PPPs (a trend of −0.44 points per year, standard error = 0.01). Based on our new estimates, the projected value for 2015 is 25.1% (standard error = 0.8%), which is well over half the 1990 value of 35%. So the developing world outside China is not on track to reach the MDG for poverty reduction.

Our estimates suggest less progress (in absolute and proportionate terms) in getting above the $2-per-day line than the $1.25 line. The poverty rate by this higher standard fell from 70% in 1981 to 47% in 2005 (Table I). The trend is about 0.8 percentage points per year (a regression coefficient on time of −0.84; standard error = 0.08); excluding China, the trend is only 0.3 points per year (a regression coefficient of −0.26; standard error = 0.05). This has not been sufficient to bring down the number of people living below $2 per day, which was about 2.5 billion in both 1981 and 2005 (Table II). Thus the number of people living between $1.25 and $2 a day has risen sharply over these 25 years, from about 600 million to 1.2 billion. This marked "bunching up" of people just above the $1.25 line suggests that the poverty rate according to that line could rise sharply with aggregate economic contraction (including real contraction due to higher prices).

The qualitative conclusions that poverty measures have fallen over 1981–2005 and 1990–2005 are robust to the choice of poverty line over a wide range (and robust to the choice of poverty measure within a broad class of measures). Figure III gives the cumulative distribution functions up to $13 per day, which is the official poverty line per person for a family of four in the United States in 2005. First-order dominance is indicated. In 2005, 95.7% of the population of the developing world lived below the U.S. poverty line; 25 years earlier it was 96.7%.

FIGURE III
Cumulative Distributions for the Developing World

V.B. Regional Differences

Table III gives the estimates over 1981–2005 for four lines: $1.00, $1.25, $2.00, and $2.50 a day. There have been notable changes in regional poverty rankings over this period. Looking back to 1981, East Asia had the highest incidence of poverty, with 78% of the population living below $1.25 per day and 93% below the $2 line. South Asia had the next highest poverty rate, followed by SSA, LAC, MENA, and lastly EECA. By 2005, SSA had swapped places with East Asia, where the $1.25 headcount index had fallen to 17%, with South Asia staying in second place. EECA had overtaken MENA.

The regional rankings are not robust to the choice of poverty line, and two changes are notable. First, at lower lines (under $2 per day) SSA has the highest incidence of poverty, but this switches to South Asia at higher lines. (Intuitively, this difference reflects the higher inequality found in Africa than in South Asia.) Second, MENA's poverty rate exceeds LAC's at $2 or higher, but the ranking reverses at lower lines.

The composition of world poverty has changed noticeably over time. The number of poor has fallen sharply in East Asia but risen elsewhere. For East Asia, the MDG of halving the 1990 "$1-per-day" poverty rate by 2015 was already reached a little after 2002. Again, China's progress against absolute poverty was a key factor; looking back to 1981, China's incidence of poverty (measured by the percentage below $1.25 per day) was roughly twice that for the rest of the developing world; by the mid-1990s, the Chinese poverty rate had fallen well below average. There were over 600 million fewer people living under $1.25 per day in China in 2005 than 25 years earlier. Progress was uneven over time, with setbacks in some periods (the late 1980s) and more rapid progress in others (the early 1980s and mid-1990s). Ravallion and Chen (2007) identify a number of factors (including policies) that account for this uneven progress against poverty over time (and space) in China.
Over 1981–2005, the $1.25 poverty rate in South Asia fell from almost 60% to 40%, which was not sufficient to bring down the number of poor (Table IV). If the trend over this period in South Asia were to continue until 2015, the poverty rate would fall to 32.5% (standard error = 1.2%), which is more than half its 1990 value. So South Asia is not on track to attain the MDG without a higher trend rate of poverty reduction. Note, however, that this conclusion is not robust to the choice of poverty line.
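The 32.5% figure can be reproduced from Table III's South Asia series for the $1.25 line, assuming the projection is a plain OLS of the headcount index on time (our code; the assumption matches the reported value):

```python
# Trend projection of South Asia's $1.25-a-day headcount index
# (Table III, panel (b)) to 2015 by OLS on time.
years = list(range(1981, 2006, 3))
south_asia = [59.4, 55.6, 54.2, 51.7, 46.9, 47.1, 44.1, 43.8, 40.3]

n = len(years)
xbar = sum(years) / n
ybar = sum(south_asia) / n
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(years, south_asia))
         / sum((x - xbar) ** 2 for x in years))
proj_2015 = ybar + slope * (2015 - xbar)

print(round(proj_2015, 1))  # 32.5, more than half the 1990 rate of 51.7
```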

TABLE III
REGIONAL BREAKDOWN OF HEADCOUNT INDEX FOR INTERNATIONAL POVERTY LINES OF $1.00–$2.50 A DAY OVER 1981–2005

Region                            1981   1984   1987   1990   1993   1996   1999   2002   2005
(a) % living below $1.00 a day
East Asia and Pacific             66.8   49.9   38.9   39.1   35.4   23.4   23.5   17.8    9.3
  Of which China                  73.5   52.9   38.0   44.0   37.7   23.7   24.1   19.1    8.1
Eastern Europe and Central Asia    0.7    0.6    0.5    0.9    2.1    2.5    3.1    2.7    2.2
Latin America and Caribbean        7.7    9.2    8.9    6.6    6.0    7.3    7.4    7.7    5.6
Middle East and North Africa       3.3    2.4    2.3    1.7    1.5    1.6    1.7    1.4    1.6
South Asia                        41.9   38.0   36.6   34.0   29.3   29.1   26.9   26.5   23.7
  Of which India                  42.1   37.6   35.7   33.3   31.1   28.6   27.0   26.3   24.3
Sub-Saharan Africa                42.6   45.2   44.1   47.5   46.4   47.6   47.0   43.8   39.9
Total                             41.4   34.4   29.8   29.5   27.0   23.1   22.8   20.3   16.1
(b) % living below $1.25 a day
East Asia and Pacific             77.7   65.5   54.2   54.7   50.8   36.0   35.5   27.6   16.8
  Of which China                  84.0   69.4   54.0   60.2   53.7   36.4   35.6   28.4   15.9
Eastern Europe and Central Asia    1.7    1.3    1.1    2.0    4.3    4.6    5.1    4.6    3.7
Latin America and Caribbean       11.5   13.4   12.6    9.8    9.1   10.8   10.8   11.0    8.2
Middle East and North Africa       7.9    6.1    5.7    4.3    4.1    4.1    4.2    3.6    3.6
South Asia                        59.4   55.6   54.2   51.7   46.9   47.1   44.1   43.8   40.3
  Of which India                  59.8   55.5   53.6   51.3   49.4   46.6   44.8   43.9   41.6
Sub-Saharan Africa                53.7   56.2   54.8   57.9   57.1   58.7   58.2   55.1   50.9
Total                             51.8   46.6   41.8   41.6   39.1   34.4   33.7   30.6   25.2
(c) % living below $2.00 a day
East Asia and Pacific             92.6   88.5   81.6   79.8   75.8   64.1   61.8   51.9   38.7
  Of which China                  97.8   92.9   83.7   84.6   78.6   65.1   61.4   51.2   36.3
Eastern Europe and Central Asia    8.3    6.5    5.6    6.9   10.3   11.9   14.3   12.0    8.9
Latin America and Caribbean       22.5   25.3   23.3   19.7   19.3   21.8   21.4   21.7   16.6
Middle East and North Africa      26.7   23.1   22.7   19.7   19.8   20.2   19.0   17.6   16.9
South Asia                        86.5   84.8   83.9   82.7   79.7   79.9   77.2   77.1   73.9
  Of which India                  86.6   84.8   83.8   82.6   81.7   79.8   78.4   77.5   75.6
Sub-Saharan Africa                74.0   75.7   74.2   76.2   76.0   77.9   77.6   75.6   73.0
Total                             69.2   67.4   64.2   63.2   61.5   58.2   57.1   53.3   47.0
(d) % living below $2.50 a day
East Asia and Pacific             95.4   93.5   89.7   87.3   83.7   74.9   71.7   62.6   50.7
  Of which China                  99.4   97.4   92.4   91.6   86.5   76.4   71.7   61.6   49.5
Eastern Europe and Central Asia   15.2   12.5   11.2   12.0   15.1   18.3   21.4   17.8   12.9
Latin America and Caribbean       29.2   32.4   29.6   26.0   25.9   28.8   28.0   28.4   22.1
Middle East and North Africa      39.0   34.8   34.6   31.2   31.4   32.5   30.8   29.5   28.4
South Asia                        92.6   91.5   90.8   90.3   88.6   88.5   86.7   86.5   84.4
  Of which India                  92.5   91.5   90.8   90.2   89.9   88.7   87.6   86.9   85.7
Sub-Saharan Africa                81.0   82.3   81.0   82.5   82.5   84.2   83.8   82.5   80.5
Total                             74.6   73.7   71.6   70.4   69.2   67.2   65.9   62.4   56.6

TABLE IV
REGIONAL BREAKDOWN OF NUMBER OF POOR (MILLIONS) FOR INTERNATIONAL POVERTY LINES OF $1.00–$2.50 A DAY OVER 1981–2005

Region                             1981     1984     1987     1990     1993     1996     1999     2002     2005
(a) Number living below $1.00 a day
East Asia and Pacific             921.7    721.8    590.2    623.4    588.7    404.9    420.8    326.8    175.6
  Of which China                  730.4    548.5    412.4    499.1    444.4    288.7    302.4    244.7    106.1
Eastern Europe and Central Asia     3.0      2.4      2.1      4.1     10.1     11.7     14.4     12.6     10.2
Latin America and Caribbean        28.0     35.8     36.9     29.0     27.6     35.6     37.8     40.7     30.7
Middle East and North Africa        5.6      4.6      4.7      3.8      3.7      4.1      4.7      3.9      4.7
South Asia                        387.3    374.3    384.4    381.2    348.8    368.0    359.5    372.5    350.5
  Of which India                  296.1    282.2    285.3    282.5    280.1    271.3    270.1    276.1    266.5
Sub-Saharan Africa                169.4    195.9    209.0    245.2    259.0    287.6    308.4    310.1    304.2
Total                           1,515.0  1,334.7  1,227.2  1,286.7  1,237.9  1,111.9  1,145.6  1,066.6    876.0
(b) Number living below $1.25 a day
East Asia and Pacific           1,071.5    947.3    822.4    873.3    845.3    622.3    635.1    506.8    316.2
  Of which China                  835.1    719.9    585.7    683.2    632.7    442.8    446.7    363.2    207.7
Eastern Europe and Central Asia     7.1      5.7      4.8      9.1     20.1     21.8     24.3     21.7     17.3
Latin America and Caribbean        42.0     52.3     52.3     42.9     41.8     52.2     54.8     58.4     46.1
Middle East and North Africa       13.7     11.6     11.9      9.7      9.8     10.6     11.5     10.3     11.0
South Asia                        548.3    547.6    569.1    579.2    559.4    594.4    588.9    615.9    595.6
  Of which India                  420.5    416.0    428.0    435.5    444.3    441.8    447.2    460.5    455.8
Sub-Saharan Africa                213.7    243.8    259.6    299.1    318.5    355.0    381.6    390.0    390.6
Total                           1,896.2  1,808.2  1,720.0  1,813.4  1,794.9  1,656.2  1,696.2  1,603.1  1,376.7
(c) Number living below $2.00 a day
East Asia and Pacific           1,277.7  1,280.2  1,238.5  1,273.7  1,262.1  1,108.1  1,104.9    954.1    728.7
  Of which China                  972.1    963.3    907.1    960.8    926.3    792.2    770.2    654.9    473.7
Eastern Europe and Central Asia    35.0     28.4     25.1     31.9     48.6     56.2     67.6     56.8     41.9
Latin America and Caribbean        82.3     98.8     96.3     86.3     88.9    105.7    108.5    114.6     91.3
Middle East and North Africa       46.3     43.9     47.1     44.4     48.0     52.2     51.9     50.9     51.5
South Asia                        799.5    835.9    881.5    926.0    950.0  1,008.8  1,030.8  1,083.7  1,091.5
  Of which India                  608.9    635.6    669.0    701.6    735.0    757.1    782.8    813.1    827.7
Sub-Saharan Africa                294.2    328.3    351.3    393.6    423.8    471.1    508.5    535.6    556.7
Total                           2,535.1  2,615.4  2,639.7  2,755.9  2,821.4  2,802.1  2,872.1  2,795.7  2,561.5
(d) Number living below $2.50 a day
East Asia and Pacific           1,315.8  1,352.8  1,361.9  1,393.7  1,393.7  1,293.9  1,282.8  1,150.5    955.2
  Of which China                  987.5  1,009.8  1,001.7  1,040.4  1,019.0    930.2    899.2    788.8    645.6
Eastern Europe and Central Asia    64.3     54.4     50.2     55.7     71.0     86.4    101.2     84.0     61.0
Latin America and Caribbean       106.9    126.3    122.6    113.9    119.5    139.5    142.1    150.5    121.8
Middle East and North Africa       67.6     66.1     71.8     70.3     75.9     83.8     84.2     85.2     86.7
South Asia                        855.0    902.1    954.6  1,011.0  1,056.1  1,118.5  1,156.8  1,216.3  1,246.2
  Of which India                  650.3    686.1    725.0    766.5    808.8    841.1    875.2    911.4    938.0
Sub-Saharan Africa                322.0    356.9    383.5    426.4    460.6    509.4    549.5    584.0    613.7
Total                           2,731.6  2,858.7  2,944.6  3,071.0  3,176.7  3,231.4  3,316.6  3,270.6  3,084.7

If instead we use a lower line of $1.00 per day at 2005 prices, then the poverty rate would fall to 15.7% (standard error = 1.3%) by 2015, which is less than half the 1990 value of 34.0%. Not surprisingly (given its population weight), the same observations hold for India, which is not on track to attain the MDG using the $1.25 line but is on track using the $1.00 line, which is also closer to the national poverty line in India.40

The extent of the "bunching up" that has occurred between $1.25 and $2 per day is particularly striking in both East and South Asia, where we find a total of about 900 million people living between these two lines, roughly equally split between the two sides of Asia. Although this points again to the vulnerability of the poor, by the same token it also suggests that substantial further impacts on poverty can be expected from economic growth, provided that it does not come with substantially higher inequality.

We find a declining trend in LAC's poverty rate, but not enough to reduce the number of poor over the 1981–2005 period as a whole, though with more encouraging signs of progress since 1999. MENA has experienced a fairly steady decline in the poverty rate, though (again) not sufficient to avoid a rising count of poor in that region. We find generally rising poverty in EECA using the lower lines ($1.00 and $1.25 a day), though very few people in EECA are poor by these standards. The $2.50-a-day line is more representative of the poverty lines found in the relatively poorer countries of EECA. By this standard, the poverty rate in EECA has shown little clear trend over time in either direction, though there are encouraging signs of a decline in poverty since the late 1990s. The paucity of survey data for EECA in the 1980s should also be recalled; our estimates for those years are heavily based on extrapolations, which do not allow for any changes in distribution.
One would expect that distribution was better from the point of view of the poor in EECA in the 1980s, in which case poverty would have been even lower than we estimate, and the increase over time even larger.

The incidence of poverty in SSA fell only slightly over the period as a whole, from 54% of the population living under $1.25 a day in 1981 to 51% in 2005. The number of poor by our new $1.25-a-day standard has almost doubled in SSA over 1981–2005, from 214 million to over 390 million. The share of the world's poor by this measure living in Africa has risen from 11% in 1981 to 28% in 2005. The trend increase in SSA's share of poverty is 0.67 percentage points per year (standard error = 0.04 points), implying that one-third of the world's poor will live in this region by 2015 (more precisely, the projected share for that year is 33.7%, with a standard error of 0.8%). However, there are signs of progress against poverty in SSA since the mid-1990s. The $1.25-a-day poverty rate for SSA peaked at 59% in 1996 and fell steadily thereafter, though not enough to bring down the count of poor given population growth. The decline is proportionately larger the lower the poverty line; for the $1-a-day line, the poverty rate in 2005 is 16% lower than its 1996 value.

40. The corresponding poverty rates for the $1.00 line in India are 42.1% (1981), 37.6%, 35.7%, 33.3%, 31.1%, 28.6%, 27.0%, 26.3%, and 24.3% (2005).

V.C. Poverty Gaps

Table V gives the PG indices for the $1.25 and $2.00 lines. The aggregate PG for 2005 is 7.6% for the $1.25 line and 18.6% for the $2 line. The GDP per capita of the developing world was $11.30 per day in 2005 (at 2005 PPP), so the aggregate poverty gap for the $1.25 line represents 0.84% of GDP, and that for the $2 line 3.29%. World (including the OECD countries) GDP per capita was $24.58 per day, implying that the global aggregate PG was 0.33% of global GDP using the $1.25 line and 1.28% using $2.41

Comparing Tables III and V, it can be seen that the regional rankings in terms of the poverty gap index are similar to those for the headcount index, and the changes over time follow similar patterns. The PG measures magnify the interregional differences seen in the headcount indices. The most striking feature of the results in Table V is the depth of poverty in Africa, with a $1.25-per-day poverty gap index of almost 21%, roughly twice that for the next poorest region by this measure (South Asia).
For the $1.25 line, Africa's aggregate poverty gap represents 3.2% of the region's GDP; for the $2 line, it is 9.0%.42 Table VI gives the mean consumption of the poor.43 For 2005, those living below the $1.25-a-day line had a mean consumption of $0.87 a day.

41. This assumes that nobody lives below our international poverty line in the OECD countries. Under this assumption, the aggregate poverty gap as a percentage of global GDP is PG · (z/ȳ) · (N/N_W), where PG is the poverty gap index (in %), z is the poverty line, ȳ is global GDP per capita, N is the population of the developing world, and N_W is world population.
42. The GDP per capita of SSA in 2005, at 2005 PPP, was $8.13 per day.
43. The mean consumption of the poor is (1 − PG/H)z, where PG is the poverty gap index, H is the headcount index, and z is the poverty line.
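The GDP-share arithmetic (the formula in footnote 41) and the mean-consumption formula (footnote 43) can be checked directly; the inputs below are the figures quoted in the text and in Tables I and V:

```python
# Aggregate poverty gap (PG) as a share of GDP: PG * (z / ybar) * (N / N_W),
# and mean consumption of the poor: (1 - PG/H) * z.

def pg_share_of_gdp(pg_pct, z, gdp_per_day, pop_share=1.0):
    # pop_share < 1 expresses the gap relative to *global* GDP.
    return pg_pct * (z / gdp_per_day) * pop_share

def mean_consumption_of_poor(pg_pct, h_pct, z):
    return (1 - pg_pct / h_pct) * z

# Developing world, 2005: GDP per capita $11.30/day; PG = 7.6% ($1.25), 18.6% ($2).
print(round(pg_share_of_gdp(7.6, 1.25, 11.30), 2))          # 0.84 (% of GDP)
print(round(pg_share_of_gdp(18.6, 2.00, 11.30), 2))         # 3.29
# Relative to world GDP ($24.58/day), developing world = 84.4% of world population:
print(round(pg_share_of_gdp(7.6, 1.25, 24.58, 0.844), 2))   # 0.33
print(round(pg_share_of_gdp(18.6, 2.00, 24.58, 0.844), 2))  # 1.28
# Mean consumption of those below $1.25 in 2005 (H = 25.2, PG = 7.6):
print(round(mean_consumption_of_poor(7.6, 25.2, 1.25), 2))  # 0.87 ($ per day)
```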

TABLE V
POVERTY GAP INDEX (×100) BY REGION OVER 1981–2005

                                   1981  1984  1987  1990  1993  1996  1999  2002  2005
(a) $1.25
East Asia and Pacific              35.5  24.2  18.8  18.2  16.4  10.5  10.7   8.0   4.0
  Of which China                   39.3  25.6  18.5  20.7  17.6  10.7  11.1   8.7   4.0
Eastern Europe and Central Asia     0.4   0.3   0.3   0.6   1.6   1.7   1.6   1.3   1.1
Latin America and Caribbean         4.0   4.7   4.7   3.6   3.3   3.9   4.2   4.2   3.2
Middle East and North Africa        1.6   1.3   1.2   0.9   0.8   0.8   0.8   0.7   0.8
South Asia                         19.6  17.5  16.4  15.2  12.9  12.6  11.7  11.5  10.3
  Of which India                   19.6  17.2  15.8  14.6  13.6  12.4  11.7  11.4  10.5
Sub-Saharan Africa                 22.9  24.6  24.3  26.6  25.6  25.9  25.7  23.5  21.1
Total                              21.3  16.8  14.5  14.2  12.9  11.0  10.9   9.6   7.6
(b) $2.00
East Asia and Pacific              54.7  44.9  38.0  37.4  34.8  25.9  25.5  20.2  13.0
  Of which China                   59.3  47.3  38.2  40.9  36.6  26.3  25.6  20.6  12.2
Eastern Europe and Central Asia     1.9   1.5   1.3   2.0   3.7   4.1   4.5   3.8   3.0
Latin America and Caribbean         8.9  10.2   9.7   7.8   7.4   8.6   8.6   8.7   6.7
Middle East and North Africa        7.4   6.1   5.9   4.8   4.8   4.8   4.6   4.1   4.0
South Asia                         40.7  38.4  37.2  35.7  32.8  32.7  31.0  30.8  28.7
  Of which India                   40.8  38.2  36.7  35.3  34.1  32.4  31.3  30.8  29.5
Sub-Saharan Africa                 38.8  40.6  39.8  42.2  41.4  42.3  42.1  39.7  37.0
Total                              36.5  32.5  29.5  29.1  27.5  24.7  24.3  22.1  18.6

Note. The poverty gap index is the mean distance below the poverty line as a proportion of the line, where the mean is taken over the whole population, counting the nonpoor as having zero poverty gaps.

POVERTY IN THE DEVELOPING WORLD    1609

TABLE VI
MEAN CONSUMPTION OF THE POOR ($ PER DAY) BY REGION OVER 1981–2005

                                   1981  1984  1987  1990  1993  1996  1999  2002  2005
(a) $1.25
East Asia and Pacific              0.68  0.79  0.81  0.83  0.85  0.88  0.87  0.89  0.95
  Of which China                   0.67  0.79  0.82  0.82  0.84  0.88  0.86  0.87  0.94
Eastern Europe and Central Asia    0.97  0.95  0.95  0.84  0.79  0.78  0.86  0.91  0.89
Latin America and Caribbean        0.82  0.82  0.78  0.79  0.80  0.79  0.77  0.78  0.77
Middle East and North Africa       0.99  0.99  0.99  0.99  1.01  1.01  1.00  1.01  0.98
South Asia                         0.84  0.86  0.87  0.88  0.91  0.91  0.92  0.92  0.93
  Of which India                   0.84  0.86  0.88  0.89  0.91  0.92  0.92  0.93  0.93
Sub-Saharan Africa                 0.72  0.70  0.70  0.68  0.69  0.70  0.70  0.72  0.73
Total                              0.74  0.80  0.82  0.82  0.84  0.85  0.85  0.86  0.87
(b) $2.00
East Asia and Pacific              0.80  0.97  1.06  1.05  1.07  1.18  1.17  1.20  1.31
  Of which China                   0.79  0.98  1.09  1.03  1.07  1.19  1.17  1.19  1.33
Eastern Europe and Central Asia    1.55  1.56  1.57  1.51  1.38  1.37  1.31  1.29  1.25
Latin America and Caribbean        1.22  1.22  1.20  1.23  1.21  1.21  1.21  1.24  1.26
Middle East and North Africa       1.44  1.47  1.46  1.48  1.49  1.50  1.48  1.50  1.50
South Asia                         1.05  1.10  1.11  1.14  1.19  1.18  1.20  1.20  1.22
  Of which India                   1.06  1.10  1.12  1.15  1.17  1.19  1.20  1.21  1.22
Sub-Saharan Africa                 0.98  0.94  0.94  0.91  0.92  0.89  0.92  0.95  0.99
Total                              0.94  1.03  1.08  1.08  1.11  1.14  1.15  1.16  1.21

1610    QUARTERLY JOURNAL OF ECONOMICS

FIGURE IV Poverty Rates Using Old and New Poverty Lines

of $0.87 (about 3.5% of global GDP per capita). The overall mean consumption of the poor tended to rise over time, from $0.74 per day in 1981 to $0.87 in 2005 by the $1.25 line, and from $0.94 to $1.21 for the $2 line. Over time, poverty has become shallower in the world as a whole.

The mean consumption of Africa's poor not only is lower than that for other regions, but also has shown very little increase over time (Table VI). The same persistence in the depth of poverty is evident in MENA and LAC, though the poor have slightly higher average levels of living in both regions. The mean consumption of EECA's poor has actually fallen since the 1990s, even though the overall poverty rate was falling.

VI. COMPARISONS WITH PAST ESTIMATES

Both the $1.25 and $1.45 lines indicate a substantially higher poverty count in 2005 than obtained using our old $1.08 line in 1993 prices; Figure IV compares the poverty rates estimated using the latter line with those obtained using either the $1.00- or $1.25-a-day lines at 2005 PPP. Focusing on the $1.25 line, we find that 25% of the developing world's population in 2005 is poor, versus 17% using the old line at 1993 PPP—representing an extra 400 million people living in poverty. (As can be seen in Figure IV, the series for $1.00 a day at 2005 PPP tracks closely that for $1.08 at 1993 PPP.)


It is notable that the conclusion that the global poverty count is higher than previously estimated is also confirmed if one does not update the nominal value of the 1993 poverty line (ignoring inflation in the United States). Using the $1.08 line, one obtains an aggregate poverty rate of 19% (1,026 million people) for 2005. The 2005 line that gives the same headcount index for 2005 as the $1.08 line at 1993 PPP turns out to be lower, at $1.03 a day. Although the adjustment for U.S. inflation clearly gives a poverty line for 2005 that is "too high," the 2005 line must presumably exceed the 1993 nominal line to have comparable purchasing power. So, as long as it is agreed that $1 in 1993 international prices is worth more than $1 at 2005 prices, the qualitative result that the new ICP round implies a higher global poverty count is robust.44

To help understand why we get a higher poverty count for a given year, it is instructive to decompose the total change into its various components. Recall that there are three ways in which the data have been revised: new PPPs, new national poverty lines, and new surveys. The last effect turns out to be small: when we use the new survey database for 2005 to estimate the poverty rate based on the 1993 PPPs and the old $1.08 line, we get a headcount index of 17.6% (957.4 million people) instead of 17.2%. So we will focus on the effect of the other two aspects of the data, by evaluating everything using the new survey database.

Let z_t^n denote the new ("n") vector of national poverty lines (from RCS), evaluated at the PPPs for the ICP of round t, and let z_t^o be the corresponding vector of old ("o") poverty lines for the 1980s (from RDV). The international lines are f(z_05^n) = $1.25 a day in 2005 prices and f(z_93^o) = $1.08 a day in 1993 prices, or $1.45 in 2005 prices adjusting for U.S. inflation. Next let y_t be a vector giving the distribution of consumption for the developing world in 2005 evaluated using ICP round t.
Let P(z_t^k, y_t) (k = o, n; t = 93, 05) be the poverty measure for 2005 (subsuming the function f). So P(z_93^o, y_93) = 18% and P(z_05^n, y_05) = 25%. Now consider the following exact decomposition:

(1)  P(z_05^n, y_05) − P(z_93^o, y_93) = A + B + C,

where A ≡ P(z_05^n, y_05) − P(z_93^n, y_05) is the partial effect of the PPP change via the international poverty line, holding the distribution

44. Deaton (2010) questions this claim by comparing an international line of $0.92 a day in 2005 prices with the old $1.08 line in 1993 prices. Yet the former line must have a lower real value.


and national poverty lines constant at their new values; B ≡ P(z_93^n, y_05) − P(z_93^n, y_93) is the partial effect of the change in distribution due to the change in PPPs; and C ≡ P(z_93^n, y_93) − P(z_93^o, y_93) is the partial effect of updating the data set on national poverty lines. Note that A + B is the total effect (on both the poverty lines and the distribution) of the PPP revisions, holding the data on national poverty lines constant at their new values.

There are two counterfactual terms in the decomposition in (1), namely P(z_93^n, y_05) and P(z_93^n, y_93). To evaluate these terms we need to use the $1.44-a-day line at 1993 PPP (Section III), rather than the $1.08 line, which was based on the old RDV compilation of poverty lines. In applying this line to the 2005 distribution we need to update for U.S. inflation, giving z_93^n = $1.95 in 2005 prices. We then obtain P(z_93^n, y_05) = 46% and P(z_93^n, y_93) = 29%. Comparing these, it can be seen that, holding the 1993 international poverty line constant (in real terms in the United States), the change in the PPPs added 17 percentage points to the poverty rate; this results from the higher cost of living in developing countries implied by the 2005 ICP results. (If instead one makes the comparison using the RDV data set on national poverty lines, one obtains P(z_93^o, y_05) − P(z_93^o, y_93) = 14%.)

We find that the partial effect of the PPP revisions via the international poverty line is to bring the headcount index down substantially from 46% to 25% (A = −21%). But there is a large and almost offsetting upward effect of the change in distribution (B = 17%). On balance, the net effect of the change in the PPPs is to bring the poverty rate down from 29% to 25% (A + B = −4%).
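A reader can verify the decomposition numerically with the headcount indices reported in the text (rounded to whole percentage points, as in the paper):

```python
# Numerical check of decomposition (1). All values in %, from the text.
P_new_new = 25  # P(z_05^n, y_05): new lines and distribution at 2005 PPP
P_old_new = 46  # P(z_93^n, y_05): new lines at 1993 PPP, 2005-PPP distribution
P_old_old = 29  # P(z_93^n, y_93): new lines and distribution at 1993 PPP
P_base = 18     # P(z_93^o, y_93): old $1.08 line at 1993 PPP

A = P_new_new - P_old_new  # PPP effect via the international line
B = P_old_new - P_old_old  # PPP effect via the distribution
C = P_old_old - P_base     # effect of the new national poverty lines

# The decomposition is exact: A + B + C equals the total change of +7 points.
assert (A, B, C) == (-21, 17, 11)
assert A + B + C == P_new_new - P_base
```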
The fact that the PPP revisions on their own bring down the overall poverty count relative to a fixed set of national lines is not surprising, given that the poverty line is set at the mean of lines for the poorest countries and that the proportionate revisions to the PPPs tend to be greater for poorer countries. It can be shown that if the international poverty line is that of the poorest country, which also has the largest upward revision to its PPP, then the aggregate poverty rate will automatically fall, given that the national poverty lines are fixed in local currency units. The working paper version provides a proof of this claim (Chen and Ravallion 2009). Working against this downward effect of the new PPPs, there is an upward adjustment to the poverty count coming from the new data on national poverty lines, which (as we have seen in Figure II) tend to be higher for the poorest countries than those used by RDV for the 1980s. The updating of the data on


national poverty lines moved the global poverty rate from 18% to 29% (C = 11%).

VII. SENSITIVITY TO OTHER METHODOLOGICAL CHOICES

We have already seen how much impact the choice of poverty line has, though we have also noted that the qualitative comparisons over time are robust to the choice of line. In this section we consider sensitivity to two further aspects of our methodology: the first is our use of the PPP for aggregate household consumption, and the second is our reliance on surveys for measuring average living standards.

VII.A. Alternative PPPs

The benchmark analysis has relied solely on the individual consumption PPPs ("P3s") from the ICP. One deficiency of these PPPs is that they are designed for national accounting purposes, not poverty measurement. Deaton and Dupriez (DD) (2009) have estimated "PPPs for the poor" (P4s) for a subset of countries with the required data.45 Constructing P4s requires reweighting the prices to accord with the consumption patterns of those living near the poverty line. Notice that there is a simultaneity in this problem, in that one cannot do the reweighting until one knows the poverty line, which requires the reweighted PPPs. Deaton and Dupriez (2009) implement an iterative solution to derive internally consistent P4s.46 They do this for three price index methods, namely the country product dummy (CPD) method and both Fisher and Törnqvist versions of the EKS method used by the ICP.

The Deaton–Dupriez P4s cannot be calculated for all countries and they cannot cover the same consumption space as the P3s from the ICP. The limitation on country coverage stems from the fact that P4s require suitable household surveys, namely micro data from consumption expenditure surveys that can be mapped

45. The Asian Development Bank (2008) has taken the further step of implementing special price surveys for Asian countries to collect prices on qualities of selected items explicitly lower than those identified in the standard ICP.
Using lower-quality goods essentially entails lowering the poverty line. In terms of the impact on the poverty counts for Asia in 2005, the ADB's method is equivalent to using a poverty line of about $1.20 a day by our methods. (This calculation is based on a log-linear interpolation between the relevant poverty lines.)
46. In general there is no guarantee that there is a unique solution for this method, although DD provide a seemingly plausible restriction on the Engel curves that ensures uniqueness. They also use an exact, one-step solution for the Törnqvist index under a specific parametric Engel curve.


FIGURE V Aggregate Poverty Rates over Time for Alternative PPPs

into the ICP "basic heading" categories for prices; the DD P4s are available for sixty countries, which is about half of our sample. The sixty-country sample is clearly not representative of the developing world as a whole, and coverage is especially weak in some regions, notably EECA, where the population share covered by surveys in the sixty-country sample is only 8%, whereas overall coverage is 79%. As we will see, the sixty-country sample is poorer, in terms of the aggregate (population-weighted) poverty count.

Also, some of the 110 basic headings for consumption in the ICP were dropped by DD in calculating their P4s. These included expenditures made on behalf of households by governments and nongovernmental organizations (such as on education and health care). Given that such expenditures are not typically included in household surveys, they cannot be included in DD's P4s. DD also preferred to exclude housing rentals from their calculations, on the grounds that they were hard to measure and that different practices for imputing rentals for owner-occupied housing had been used by the official ICP in different countries. There are other (seemingly more minor) differences between how DD calculated their P4s and the methods used by the ICP. Using the P4s at the country level kindly provided by Deaton and Dupriez, we have recalculated our global poverty measures.
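The simultaneity noted above (the reweighted PPP needs the poverty line, and the line needs the PPP) is a fixed-point problem. The following is only a schematic sketch of such an iteration, not Deaton and Dupriez's actual estimator; all three function arguments are hypothetical stand-ins:

```python
def solve_p4(line0, budget_shares_near, ppp_from_shares, line_from_ppp,
             tol=1e-6, max_iter=100):
    """Iterate line -> near-poor budget shares -> reweighted PPP -> line
    until the poverty line stabilizes (a fixed point)."""
    line = line0
    for _ in range(max_iter):
        shares = budget_shares_near(line)  # spending weights of households near the line
        ppp = ppp_from_shares(shares)      # price index reweighted to those households
        new_line = line_from_ppp(ppp)      # line converted with the new index
        if abs(new_line - line) < tol:
            return new_line, ppp
        line = new_line
    raise RuntimeError("no fixed point found")

# Toy illustration with made-up functional forms (a contraction, so it converges):
line, ppp = solve_p4(1.0,
                     lambda l: l / (1 + l),  # toy budget share of the near-poor
                     lambda s: 1 + 0.5 * s,  # toy reweighted price index
                     lambda p: p)            # line proportional to the index
```

With these toy inputs the iteration settles at line ≈ 1.28; DD's restriction on the Engel curves (footnote 46) plays the role of guaranteeing that such a fixed point is unique.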

TABLE VII
AGGREGATE POVERTY RATE AND REGIONAL PROFILE FOR 2005 UNDER ALTERNATIVE PPPS

                                     (1)     (2)     (3)     (4)     (5)        (6)         (7)
PPP                                  P3      P3      P3      P3      P4: CPD    P4: Fisher  P4: Törnqvist
No. countries for poverty measures   115     115     60      60      60         60          60
No. countries for poverty line       15      15      14      14      14         14          14
Poverty line (per month)             $38.00  $33.34  $37.41  $32.88  Rs.576.86  Rs.557.00   Rs.547.83
No. basic headings                   110     102     110     102     102        102         102

Headcount index (% of population)
East Asia and Pacific                16.8    13.4    16.1    12.9    13.2       12.7        12.5
Eastern Europe and Central Asia       3.7    3.75     7.1     8.1     4.8        4.1         4.6
Latin America and Caribbean           8.2     7.1     8.4     7.6     8.8        8.5         7.9
Middle East and North Africa          3.6     2.2     8.7     4.9     4.2        4.2         4.2
South Asia                           40.3    35.1    39.1    34.0    37.9       35.6        34.0
Sub-Saharan Africa                   50.9    50.4    49.9    49.6    50.4       50.9        49.8
Total (for sampled countries)        25.2    22.4    28.3    25.1    26.7       25.7        24.9

Note. The Deaton–Dupriez P4 calculations are only possible for about half the countries in the full sample, given that consumption expenditure surveys are required. Also, one country drops out of the reference group for calculating the poverty line. The poverty lines for the column (5)–(7) P4s are in Deaton–Dupriez "world rupees."


In all cases we recalculate the international poverty line under the new PPPs, as well as (of course) the poverty measures. Table VII gives the results by region for 2005, whereas Figure V plots the estimates by year. In both cases we give our benchmark estimates for the official ICP PPP for individual consumption using all 110 basic headings for consumption, and results for the 102 basic headings, comprising those that can be matched to surveys less the extra few categories that DD chose not to include. Because the sixty countries used by DD did not include one of the fifteen countries in our reference group, the poverty line is recalculated for fourteen countries, giving a line of $1.23 a day ($37.41 per month). With the help of the World Bank's ICP team, we also recalculated the official P3s for consumption using the set of basic headings chosen by DD.

Column (1) reproduces the estimates from Table III, whereas column (2) gives the corresponding estimates for the full sample of countries using P3s calibrated to the 102 basic headings used by DD. Columns (3) and (4) give the results corresponding to columns (1) and (2) using the sixty-country subsample used by DD. Columns (5)–(7) give our estimates of the poverty measures using the P4s from DD, for each of their three methods. We give (population-weighted) aggregate results for the sample countries.47

It can be seen from Table VII that the switch from 110 to 102 basic headings reduces the aggregate poverty measures by about three percentage points, whereas switching from the 115-country sample to the 60-country sample has the opposite effect, adding three points. The pure effect of switching from P3 to P4 is indicated by comparing column (4) with columns (5)–(7). This change has only a small impact using the EKS method (for either the Fisher or Törnqvist indices), though it has a slightly larger effect using the CPD method.
On balance, the aggregate poverty count turns out to be quite similar between the P4s and our main estimates using standard P3s on the full sample. If one assumes that the countries without household surveys have the regional average poverty rates, then the Fisher P4 gives a count of 1,402 million for the number of poor, whereas the CPD and Törnqvist P4s give counts of 1,454 and 1,359 million, respectively, as compared to 1,377 million using

47. Note that this is a slightly different aggregation method from our earlier results, which assumed that the sample was representative at the regional level. That is clearly not plausible for the sixty-country sample used by DD. We have recalculated the aggregates for the 115-country sample on the same basis as for the 60-country sample.


standard P3s. The regional profile is also fairly robust, the main difference being lower poverty rates in EECA using P4s, although the poor representation of EECA countries in the sixty-country sample used by DD is clearly playing a role here. The reduction in coverage of consumption items makes a bigger difference, with a higher poverty count in the aggregate (28% for these sixty countries using the standard PPP, versus 25% using the PPP excluding housing), due mainly to higher poverty rates in East and South Asia when all 110 basic headings for consumption are included. The trends are also similar (Figure V). This is not surprising given that, when the usual practice of doing the PPP conversion at only the benchmark year and then using national data sources over time is followed, the real growth rates and distributions at the country level are unaffected.

VII.B. Mixing National Accounts and Surveys

Next we test sensitivity to using instead the geometric mean of the survey mean and its expected value given NAS consumption; as noted in Section IV, this can be given a Bayesian interpretation under certain assumptions. Table VIII gives the estimates implied by the geometric mean; in all other respects we follow the benchmark methodology. The expected value was formed by a separate regression at each reference year; a very good fit was obtained using a log-log specification (adding squared and cubed values of the log of NAS consumption per capita did little or nothing to increase the adjusted R²).

In the aggregate for most years, and in most regions, the level of poverty is lower using the mixed method than the survey-means-only method. In the aggregate, the 2005 poverty rate is 18.6% (1,017 million people) using the geometric mean versus 25.2% (1,374 million) using unadjusted survey means. Nonetheless, the mixed method still gives a higher poverty rate for 2005 than implied by the 1993 PPPs. Using the $2.00 line, the 2005 poverty rate falls from 47.0% to 41.0%.
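A minimal sketch of this mixed estimator, under our reading of the description above (function and variable names are ours; the paper runs a separate regression at each reference year):

```python
import numpy as np

def mixed_means(survey_means, nas_consumption):
    """Replace each survey mean with the geometric mean of (i) the survey
    mean and (ii) its prediction from a log-log regression on national
    accounts (NAS) consumption per capita."""
    x, y = np.log(nas_consumption), np.log(survey_means)
    slope, intercept = np.polyfit(x, y, 1)     # log-log fit
    predicted = np.exp(intercept + slope * x)  # E[survey mean | NAS consumption]
    return np.sqrt(survey_means * predicted)   # geometric mean of the two
```

When a country's survey mean sits exactly on the fitted line, the estimator leaves it unchanged; the further the survey mean is from its NAS-based prediction, the more it is pulled toward that prediction.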
Figure VI compares the aggregate headcount indices for $1.25 a day between the benchmark and mixed method. The trend rate of poverty reduction is almost identical between the two, at about 1% per year. (Using the mixed method, the OLS trend is −0.98% per year, with a standard error of 0.04%, versus −0.99% with a standard error of 0.06% using only the survey means.) The linear projection to 2015 implies a poverty rate of 9.95% (s.e. = 1.02%), less than one-third of its 1990 value.
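The projection logic can be reproduced from the tabulated mixed-method series (the $1.25-a-day totals in Table VIII); small discrepancies from the 9.95% in the text reflect rounding of the published series:

```python
import numpy as np

# Mixed-method $1.25-a-day headcount totals (Table VIII), 1981-2005.
years = np.arange(1981, 2006, 3)
rates = np.array([43.6, 39.6, 36.6, 35.3, 31.7, 27.2, 26.5, 23.7, 18.6])

slope, intercept = np.polyfit(years, rates, 1)  # OLS trend, % points per year
proj_2015 = intercept + slope * 2015            # linear projection to 2015
# slope comes out at about -0.98 points per year, and proj_2015 at about
# 9.9%, under one-third of the 1990 rate of 35.3%.
```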

TABLE VIII
HEADCOUNT INDEX USING MIXED METHOD (%)

                                 1981  1984  1987  1990  1993  1996  1999  2002  2005
(a) $1.25 a day
East Asia and Pacific            67.1  57.4  49.4  48.5  40.6  28.4  26.5  20.3  12.1
  Of which China                 73.0  62.3  51.9  55.5  45.0  30.6  29.0  22.4  12.1
Europe and Central Asia           1.9   1.7   1.7   2.6   4.8   6.1   5.7   3.8   3.1
Latin America and Caribbean      13.9  16.3  16.7  18.0  15.0  15.8  14.0  15.3   9.8
Middle East and North Africa      7.6   6.5   6.4   5.0   5.0   5.4   5.4   4.4   4.4
South Asia                       42.7  39.3  39.0  33.6  30.4  28.1  28.1  26.2  21.6
  Of which India                 42.3  38.7  38.0  32.2  30.4  26.4  26.4  25.1  20.3
Sub-Saharan Africa               51.9  54.0  53.7  55.6  55.9  56.5  56.9  55.3  51.0
Total                            43.6  39.6  36.6  35.3  31.7  27.2  26.5  23.7  18.6
(b) $2.00 a day
East Asia and Pacific            89.8  86.0  81.4  78.6  73.0  59.6  56.2  46.6  34.0
  Of which China                 95.4  91.6  85.7  85.4  78.4  63.0  58.8  48.8  33.9
Europe and Central Asia           6.9   6.4   5.8   7.4  11.8  14.7  14.8  10.8   8.2
Latin America and Caribbean      26.5  30.3  29.2  31.8  28.2  29.3  26.4  28.7  19.6
Middle East and North Africa     26.7  24.4  24.0  20.7  20.5  20.7  19.8  17.5  15.8
South Asia                       77.4  75.0  74.7  70.1  67.9  65.4  64.5  62.6  56.8
  Of which India                 77.0  74.6  74.2  69.3  68.4  64.2  63.9  62.4  57.0
Sub-Saharan Africa               73.1  74.8  74.2  75.2  75.2  76.6  76.9  76.0  73.4
Total                            66.0  64.4  62.5  60.7  58.4  53.7  52.2  48.2  41.0
The mixed method gives a higher poverty rate for LAC and MENA and makes a negligible difference for SSA. Other regions see a lower poverty rate. The $1.25-a-day rate for East Asia in 2005 falls from 17% to 12%. The largest change is for South Asia, where by 2005 the poverty rate for India falls to about 20% using the mixed method, versus 42% using the unadjusted survey means; the proportionate gap was considerably lower in 1981 (42% using the mixed method versus 60% using the survey mean alone). India accounts for a large share of the discrepancy in poverty levels between the benchmark and the mixed method, reflecting both the country's population weight and the large gap that has emerged in recent times between the survey-based and national accounts consumption aggregates for India. Figure VI also gives the complete series for $1.25 a day excluding India; it can be seen that the gap between the two methods narrows over time. If we focus on the poverty rates for the developing


FIGURE VI Aggregate Poverty Rates over Time for Benchmark and Mixed Method

world excluding India, then the difference between the mixed method and the benchmark narrows considerably, from 21.1% (918 million people) to 18.2% (794 million) in 2005. (The $2.00 poverty rates are 39.8% and 36.7%, respectively.) In 2005, about two-thirds of the drop in the count of the number of people living under $1.25 a day in moving from the benchmark to the mixed method is due to India.
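The "about two-thirds" figure follows directly from the counts quoted in the text (millions of people under $1.25 a day in 2005):

```python
benchmark_total, mixed_total = 1374, 1017      # counts with India included
benchmark_ex_india, mixed_ex_india = 918, 794  # counts excluding India

total_drop = benchmark_total - mixed_total           # 357 million fewer poor
drop_ex_india = benchmark_ex_india - mixed_ex_india  # 124 million excluding India
india_share = (total_drop - drop_ex_india) / total_drop
# india_share is about 0.65, i.e., roughly two-thirds of the total drop.
```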

VIII. CONCLUSIONS

Global poverty measurement combines data from virtually all branches of the statistical system. The measures reported here bring together national poverty lines, household surveys, census data, national accounts, and both national and international price data. Inevitably there are comparability and consistency problems when combining data from such diverse sources. Price indices for cross-country comparisons do not always accord well with those used for intertemporal comparisons within countries. In some countries, the surveys give a different picture of average living standards than the national accounts, and the methods used in both surveys and national accounts differ across countries.


However, thanks to the efforts and support of governmental statistics offices and international agencies, and improved technologies, the available data on the three key ingredients in international poverty measurement—national poverty lines, representative samples of household consumption expenditures (or incomes), and data on prices—have improved greatly since global poverty monitoring began. The expansion of country-level poverty assessments since the early 1990s has greatly increased the data available on national poverty lines. Side by side with this, the country coverage of credible household survey data, suitable for measuring poverty, has improved markedly, the frequency of data has increased, public access to these data has improved, and the lags in data availability have been reduced appreciably. And with the substantial global effort that went into the 2005 International Comparison Program, we are also in a better position to ensure that the poverty lines used in different countries have similar purchasing power, so that two people living in different countries but with the same real standard of living are treated the same way. The results of the 2005 ICP imply a higher cost of living in developing countries than past ICP data have indicated; the "Penn effect" is still evident, but it has been overstated.

We have combined the new data on prices from the 2005 ICP and household surveys with a new compilation of national poverty lines, which updates (by fifteen years on average) the old national lines used to set the original $1-a-day line. Importantly, the new compilation of national lines is more representative of developing countries, given that the sample size is larger and it corrects the sample biases in the old data set. The pure effect of the PPP revisions is to bring the poverty count down, but this is outweighed by the higher level of the national poverty lines in the poorest countries, as used to determine the international line.
Our new calculations using the 2005 ICP and the new international poverty line of $1.25 a day imply that 25% of the population of the developing world, 1.4 billion people, were poor in 2005, which is 400 million more for that year than implied by our old international poverty line based on national lines for the 1980s and the 1993 ICP. In China alone, which had not previously participated officially in the ICP, the new PPP implies that an extra 10% of the population is living below our international poverty line. But the impact is not confined to China; there are upward revisions to our past estimates for all regions. The higher global count is in no small measure the result of correcting the sample


bias in the original compilation of national poverty lines used to set the old "$1-a-day" line. Although there are a number of data and methodological issues that caution against comparisons across different sets of PPPs, it is notable that our poverty count for 2005 is quite robust to using alternative PPPs anchored to the consumption patterns of those living near the poverty line.

Of course, different methods of determining the international poverty line give different poverty counts. If we use a line of $1.00 a day at 2005 PPP (almost exactly India's official poverty line), then we get a poverty rate of 16%—slightly under 900 million people—whereas if we use the median poverty line for all developing countries in our poverty-line sample, namely $2.00 a day, then the poverty rate rises to 50%, slightly more than two billion people.

As a further sensitivity test, we have proposed a simple Bayesian method of mixing the data on consumption from the national accounts with that from survey means, whereby the survey mean is replaced by the geometric mean of the survey mean and its predicted value based on prior national accounts data. This is justified only under potentially strong assumptions, notably that consumption is identically log-normally distributed between the (national-accounts-based) prior and the surveys. These assumptions can be questioned, but they do at least provide a clear basis for an alternative hybrid estimator. This gives a lower poverty count for 2005, namely 19% living below $1.25 a day rather than 25%. A large share of this gap—two-thirds of the drop in the count of the number of poor in switching to the mixed method—is due to India's (unusually large) discrepancy between consumption measured in the national accounts and that measured by surveys.

Although the new data suggest that the developing world is poorer than we thought, it has been no less successful in reducing the incidence of absolute poverty since the early 1980s.
Indeed, the overall rate of progress against poverty is fairly similar to past estimates and robust to our various changes in methodology. The trend rate of global poverty reduction of 1% per year turns out to be slightly higher than we had estimated previously, due mainly to the higher weight on China's remarkable pace of poverty reduction. The trend is even higher if we use our Bayesian mixed method. The developing world as a whole is clearly still on track to attaining the first Millennium Development Goal of halving the 1990 "extreme poverty" rate by 2015. China attained the


MDG early in the millennium, almost 15 years ahead of the target date. However, the developing world outside China will not attain the MDG without a higher rate of poverty reduction than we have seen over 1981–2005. The persistently high incidence and depth of poverty in SSA are particularly notable. There are encouraging signs of progress in this region since the late 1990s, although lags in survey data availability and problems of comparability and coverage leave us unsure about how robust this will prove to be.

DEVELOPMENT RESEARCH GROUP, WORLD BANK
DEVELOPMENT RESEARCH GROUP, WORLD BANK

REFERENCES

Ackland, Robert, Steve Dowrick, and Benoit Freyens, "Measuring Global Poverty: Why PPP Methods Matter," Australian National University, Canberra, Mimeo, 2007.
African Development Bank, Comparative Consumption and Price Levels in African Countries (Tunis, Tunisia: African Development Bank, 2007).
Ahmad, Sultan, "Purchasing Power Parity for International Comparison of Poverty: Sources and Methods," Development Data Group, World Bank, 2003.
Asian Development Bank, Comparing Poverty Across Countries: The Role of Purchasing Power Parities (Manila, the Philippines: Asian Development Bank, 2008).
Atkinson, Anthony B., "On the Measurement of Poverty," Econometrica, 55 (1987), 749–764.
Balassa, Bela, "The Purchasing Power Parity Doctrine: A Reappraisal," Journal of Political Economy, 72 (1964), 584–596.
Bhalla, Surjit, Imagine There's No Country: Poverty, Inequality and Growth in the Era of Globalization (Washington, DC: Institute for International Economics, 2002).
Bidani, Benu, and Martin Ravallion, "A New Regional Poverty Profile for Indonesia," Bulletin of Indonesian Economic Studies, 29 (1993), 37–68.
Bourguignon, François, and Christian Morrisson, "Inequality among World Citizens: 1820–1992," American Economic Review, 92 (2002), 727–744.
Central Statistical Organization, Report of the Group for Examining Discrepancy in PFCE Estimates from NSSO Consumer Expenditure Data and Estimates Compiled by National Accounts Division (New Delhi: Central Statistical Organization, Ministry of Statistics, Government of India, 2008).
Chaudhuri, Shubham, and Martin Ravallion, "How Well Do Static Indicators Identify the Chronically Poor?" Journal of Public Economics, 53 (1994), 367–394.
Chaudhury, Nazmul, Jeffrey Hammer, Michael Kremer, Karthik Muralidharan, and F. Halsey Rogers, "Missing in Action: Teacher and Health Worker Absence in Developing Countries," Journal of Economic Perspectives, 20 (2006), 91–116.
Chen, Shaohua, and Martin Ravallion, "How Did the World's Poor Fare in the 1990s?" Review of Income and Wealth, 47 (2001), 283–300.
——, "How Have the World's Poorest Fared since the Early 1980s?" World Bank Research Observer, 19 (2004), 141–170.
——, "Absolute Poverty Measures for the Developing World, 1981–2004," Proceedings of the National Academy of Sciences, 104 (2007), 16,757–16,762.
——, "The Developing World Is Poorer Than We Thought, But No Less Successful in the Fight against Poverty," World Bank Policy Research Working Paper 4703, 2009.

1624

QUARTERLY JOURNAL OF ECONOMICS

——, “China Is Poorer Than We Thought, But No Less Successful in the Fight against Poverty,” in Debates on the Measurement of Poverty, Sudhir Anand, Paul Segal, and Joseph Stiglitz, eds. (Oxford, UK: Oxford University Press, 2010). Cuthbert, James R., and Margaret Cuthbert, “On Aggregation Methods of Purchasing Power Parities,” Working Paper 56, Department of Economics and Statistics, OECD, Paris, 1988. Dalgaard, Esben, and Henrik Sørensen, “Consistency between PPP Benchmarks and National Price and Volume Indices,” Paper Presented at the 27th General Conference of the International Association for Research on Income and Wealth, Sweden, 2002. Deaton, Angus, “Measuring Poverty in a Growing World (or Measuring Growth in a Poor World),” Review of Economics and Statistics, 87 (2005), 353–378. ——, “Price Indexes, Inequality, and the Measurement of World Poverty,” Presidential Address to the American Economics Association, Atlanta, GA, 2010. Deaton, Angus, and Olivier Dupriez, “Global Poverty and Global Price Indices,” Development Data Group, World Bank, Mimeo, 2009. Deaton, Angus, and Alan Heston, “Understanding PPPs and PPP-Based National Accounts,” American Economic Journal: Macroeconomics, forthcoming, 2010. Deaton, Angus, and Salman Zaidi, “Guidelines for Constructing Consumption Aggregates for Welfare Analysis,” World Bank Living Standards Measurement Study Working Paper 135, 2002. Diewert, Erwin, “New Methodology for Linking Regional PPPs,” International Comparison Program Bulletin, 5 (2008), 10–14. Groves, Robert E., and Mick P. Couper, Nonresponse in Household Interview Surveys (New York: Wiley, 1998). Hansen, Bruce E., “Sample Splitting and Threshold Estimation,” Econometrica, 68 (2000), 575–603. Ikl´e, Doris, “A New Approach to the Index Number Problem,” Quarterly Journal of Economics, 86 (1972), 188–211. Karshenas, Massoud, “Global Poverty: National Accounts Based versus Survey Based Estimates,” Development and Change, 34 (2003), 683–712. 
Korinek, Anton, Johan Mistiaen, and Martin Ravallion, “Survey Nonresponse and the Distribution of Income,” Journal of Economic Inequality, 4 (2006), 33–55. ——, “An Econometric Method of Correcting for Unit Nonresponse Bias in Surveys,” Journal of Econometrics, 136 (2007), 213–235. Lanjouw, Peter, and Martin Ravallion, “Poverty and Household Size,” Economic Journal, 105 (1995), 1415–1435. Lopez, Humberto, and Luis Serv´en, “A Normal Relationship? Poverty, Growth and Inequality,” World Bank Policy Research Working Paper 3814, 2006. Nuxoll, Daniel A., “Differences in Relative Prices and International Differences in Growth Rates,” American Economic Review, 84 (1994), 1423–1436. Pinkovskiy, Maxim, and Xavier Sala-i-Martin, “Parametric Estimations of the World Distribution of Income,” NBER Working Paper 15433, 2009. Planning Commission, Report of the Expert Group to Review the Methodology for Estimation of Poverty (New Delhi: Planning Commission, Government of India, 2009). Ravallion, Martin, Poverty Comparisons (Chur, Switzerland: Harwood Academic Press, 1994). ——, “Measuring Aggregate Economic Welfare in Developing Countries: How Well Do National Accounts and Surveys Agree?” Review of Economics and Statistics, 85 (2003), 645–652. ——, “A Global Perspective on Poverty in India,” Economic and Political Weekly, 43 (2008a), 31–37. ——, “On the Welfarist Rationale for Relative Poverty Lines,” in Social Welfare, Moral Philosophy and Development: Essays in Honour of Amartya Sen’s Seventy Fifth Birthday, Kaushik Basu and Ravi Kanbur, eds. (Oxford: Oxford University Press, 2008b). ——, “Poverty Lines,” in The New Palgrave Dictionary of Economics, Larry Blume and Steven Durlauf, eds. (London: Palgrave Macmillan, 2008c). ——, “Understanding PPPs and PPP-Based National Accounts: A Comment,” American Economic Journal: Macroeconomics, forthcoming, 2010.

POVERTY IN THE DEVELOPING WORLD

1625

Ravallion, Martin, and Shaohua Chen, “China’s (Uneven) Progress against Poverty,” Journal of Development Economics, 82 (2007), 1–42. ——, “Weakly Relative Poverty,” Review of Economics and Statistics, in press, 2010. Ravallion, Martin, Shaohua Chen, and Prem Sangraula, “New Evidence on the Urbanization of Global Poverty,” Population and Development Review, 33 (2007), 667–702. ——, “Dollar a Day Revisited,” World Bank Economic Review, 23 (2009), 163–184. Ravallion, Martin, Gaurav Datt, and Dominique van de Walle, “Quantifying Absolute Poverty in the Developing World,” Review of Income and Wealth, 37 (1991), 345–361. Sala-i-Martin, Xavier, “The World Distribution of Income: Falling Poverty and . . . Convergence. Period,” Quarterly Journal of Economics, 121 (2006), 351–397. Samuelson, Paul, “Theoretical Notes on Trade Problems,” Review of Economics and Statistics, 46 (1964), 145–154. Slesnick, Daniel, “Empirical Approaches to Measuring Welfare,” Journal of Economic Literature, 36 (1998), 2108–2165. Summers, Robert, and Alan Heston, “The Penn World Table (Mark 5): An Extended Set of International Comparisons, 1950–1988,” Quarterly Journal of Economics, 106 (1991), 327–368. United Nations, Evaluation of the International Comparison Programme (New York: UN Statistics Division, 1998). World Bank, World Development Report: Poverty (New York: Oxford University Press, 1990). ——, India: Achievements and Challenges in Reducing Poverty, Report No. 16483IN, 1997. ——, Comparisons of New 2005 PPPs with Previous Estimates (Revised Appendix G to World Bank [2008b]) (www.worldbank.org/data/icp, 2008a). ——, Global Purchasing Power Parities and Real Expenditures. 2005 International Comparison Program (www.worldbank.org/data/icp, 2008b).

POST-1500 POPULATION FLOWS AND THE LONG-RUN DETERMINANTS OF ECONOMIC GROWTH AND INEQUALITY∗

LOUIS PUTTERMAN AND DAVID N. WEIL

We construct a matrix showing the share of the year 2000 population in every country that is descended from people in different source countries in the year 1500. Using the matrix to adjust indicators of early development so that they reflect the history of a population’s ancestors rather than the history of the place they live today greatly improves the ability of those indicators to predict current GDP. The variance of the early development history of a country’s inhabitants is a good predictor for current inequality, with ethnic groups originating in regions having longer histories of organized states tending to be at the upper end of a country’s income distribution.

I. INTRODUCTION

Economists studying income differences among countries have been increasingly drawn to examine the influence of long-term historical factors. Although the theories underlying these analyses vary, the general finding is that things that were happening 500 or more years ago matter for economic outcomes today. Hibbs and Olsson (2004) and Olsson and Hibbs (2005), for example, find that geographic factors that predict the timing of the Neolithic revolution in a region also predict income and the quality of institutions in 1997. Comin, Easterly, and Gong (2006, 2010) show that the state of technology in a country 500, 2,000, or even 3,000 years ago has predictive power for the level of output today. Bockstette, Chanda, and Putterman (2002) find that an index of the presence of state-level political institutions from year 1 to 1950 has positive correlations, significant at the 1% level, with both 1995 income and 1960–1995 income growth. And Galor and Moav (2007) provide empirical evidence for a link from the timing of the transition to agriculture to current variations in life expectancy.
∗ We thank Charles Jones, Oded Galor, and seminar participants at Ben Gurion University, Brown University, the University of Haifa, Hebrew University of Jerusalem, the NBER Summer Institute, the Stockholm School of Economics, the CEGE annual conference at the University of California at Davis, Tel Aviv University, and University College London for helpful comments. We also thank Federico Droller, Bryce Millett, Momotazur Rahman, Isabel Tecu, Ishani Tewari, Yaheng Wang, and Joshua Wilde for valuable research assistance. Louis [email protected]; David [email protected]
© 2010 by the President and Fellows of Harvard College and the Massachusetts Institute of

Technology. The Quarterly Journal of Economics, November 2010


Examining this sort of historical data immediately raises a problem, however: the further back into the past one looks, the more the economic history of a given place tends to diverge from the economic history of the people who currently live there. For example, the territory that is now the United States was inhabited in 1500 largely by hunting, fishing, and horticultural communities with pre-iron technology, organized into relatively small, pre-state political units.1 In contrast, a large fraction of the current U.S. population is descended from people who in 1500 lived in settled agricultural societies with advanced metallurgy, organized into large states. The example of the United States also makes it clear that, because of migration, the long-historical background of the people living in a given country can be quite heterogeneous. This observation, combined with the finding that the long history of a country’s residents affects the average level of income, naturally raises the question of whether heterogeneity in the background of a country’s residents is a determinant of income inequality within the country. Previous attempts to deal with the impact of migration in modifying the influence of long-term historical factors have been somewhat ad hoc. Hibbs and Olsson, for example, acknowledge the need to account for the movement of peoples and their technologies, but do so only by treating four non-European countries (Australia, Canada, New Zealand, and the United States) as if they were in Europe. Comin, Easterly, and Gong (2006) similarly add dummy variables to their regression model for countries with “major” European migration (the four mentioned above) and “minor” European migration (mostly in Latin America).2 In other cases, variables meant to measure other things may in fact be proxying for migration. For example, the measure of the origin of a country’s legal systems examined by La Porta et al. (1998) may be proxying for the origins of countries’ people. 
This is also true of Hall and Jones’s (1999) proportion speaking European languages measure. The apparent effect of institutions that were either brought along by European settlers or imposed by nonsettling colonial powers, as found in Acemoglu, Johnson, and Robinson
1. Anthropologists subscribing to cultural evolutionary models speak of political institutions evolving from the band to the tribe to the chiefdom and finally the state (see, for instance, Johnson and Earle [1987]). There were no pre-Columbian states north of the Rio Grande, according to such schema.
2. Comin, Easterly, and Gong use this technique in their 2006 working paper. In the 2010 version of the paper, they adjust for migration using Version 1.0 of our migration matrix.


(2001, 2002), may be proxying for population shifts themselves, despite their attempt (discussed below) to control for the European-descended population share. In this paper we pursue the issue of migration’s role in shaping the current economic landscape in a much more systematic fashion than previous literature. (Throughout the paper, we use the term “migration” to refer to any movement of population across current national borders, although we are cognizant that these movements included transport of slaves and forced relocation as well as voluntary migration.) We construct a matrix detailing the year-1500 origins of the current population of almost every country in the world. In addition to the quantity and timing of migration, the matrix also reflects differential population growth rates among native and immigrant population groups. The matrix can be used as a tool to adjust historical data to reflect the status in the year 1500 of the ancestors of a country’s current population. That is, we can convert any measure applicable to countries into a measure applicable to the ancestors of the people who now live in each country. We use this technique to examine how early development impacted current income and inequality. The most thorough previous work along these lines is in the papers by Acemoglu, Johnson, and Robinson (AJR) mentioned above, where they calculate the share of the population that is of European descent for 1900 and 1975. There are a number of conceptual and operational differences between our approach and theirs. Our estimates break down ancestor populations much more finely than “European” and “non-European.” This distinction is important both in the Americas, where there is great variation in the fraction of the population descended from Amerindians vs.
Africans, and also in other regions, where important nonnative populations are not descended from Europeans (consider the large Chinese-descended populations in Singapore and Malaysia, or Indian-descended populations in South Africa, Malaysia, and Fiji). Even when we use our matrix to construct a measure of the European population fraction, there are considerable differences between our data and AJR’s. They use as their measure of the European population the fraction of people who are “white,” whereas we also include an estimate of the fraction of European ancestors among mestizo populations. In Mexico, for example, AJR estimate the European population in 1975 to be 15%, even though (in their data) there is an additional 55% of the


population that is mestizo. Our estimate of the European share of ancestors for today’s Mexicans is 30%. The AJR estimates are primarily based on data in McEvedy and Jones (1978), which sometimes apply to whole regions, and occasionally involve extrapolation from as far in the past as 1800. Our data are based on a broader selection of more recent sources, including genetic analyses, encyclopedias, government reports, and compilations by religious groups, which are summarized in Appendix I and Online Appendix B.3 The correlation between our measure of the European fraction and the AJR measure is 0.89.4 The rest of this paper is structured as follows. Section II describes the construction of our migration matrix and then uses the matrix to lay out some of the important facts regarding the population movements that have reshaped genetic and cultural landscapes in the world since 1500. We find that a significant minority of the world’s countries have populations mainly descended from the people of other continents and that these countries themselves are quite heterogeneous. In Section III, we apply our migration matrix to analyze the determinants of current income. Using several measures of early development, we show that adjusting the data to reflect where people’s ancestors came from improves the ability of measures of early social and technological development to predict current levels of income. The positive effect of ancestry-adjusted early development on current income is robust to the inclusion of a variety of controls for geography, climate, and current language. We also examine the effect on current income of heterogeneity in early development. We find that, holding constant the average level of early development, heterogeneity in early development raises current income, a finding that might indicate spillovers of growth-promoting traits among national origin groups. In Section IV, we turn to the issue of inequality. 
We show that heterogeneity in the early development of a country’s ancestors predicts current income inequality and that this effect 3. Appendix I briefly describes our sources and methods. Online Appendix B provides further details, including summaries of the factors behind the estimate for each row. The entire matrix and all Appendices can be downloaded at http://www.econ.brown.edu/fac/Louis Putterman/. 4. The largest differences occur in the Americas. For example, for the five Central American countries of El Salvador, Nicaragua, Panama, Costa Rica, and Honduras, AJR use a uniform value of 20% European; our estimates range from 45% in Panama to 60% in Costa Rica. The largest outlier in the other direction is Trinidad and Tobago, which they list as 40% European and which is only 7% in our measure. Here they seem to have erroneously counted all non-Africans as European, despite the presence of a large Asian population.


is robust to the inclusion of several other measures of the heterogeneity of the current population. We also show that ethnic groups originating in regions with higher levels of early development tend to be placed higher in a recipient country’s income distribution. Section V concludes.
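The ancestry adjustment at the heart of the paper — converting a country-level measure into a measure applicable to the ancestors of a country's current population — is, mechanically, a weighted average over a migration-matrix row. A minimal sketch in Python; the matrix rows and the early-development scores below are hypothetical illustrations, not the paper's data:

```python
# Hypothetical migration-matrix rows: the fraction of each country's
# year-2000 population descended from people living in each source
# country in 1500. Entries are illustrative only; each row sums to one.
migration = {
    "CountryA": {"CountryA": 0.40, "CountryB": 0.50, "CountryC": 0.10},
    "CountryB": {"CountryB": 1.00},
}

# Hypothetical country-level early-development scores (e.g., a state-history index).
early_dev = {"CountryA": 0.2, "CountryB": 0.9, "CountryC": 0.5}

def ancestry_adjusted(country, migration, measure):
    """Weighted average of `measure` over the ancestry shares of `country`."""
    return sum(share * measure[src] for src, share in migration[country].items())

adjusted = {c: ancestry_adjusted(c, migration, early_dev) for c in migration}
# CountryA: 0.4*0.2 + 0.5*0.9 + 0.1*0.5 = 0.58; CountryB: 0.9
```

The same function works for any country-level indicator: plugging in a technology or state-history variable yields the "ancestry-adjusted" version of that variable used in the paper's regressions.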

II. LARGE-SCALE POPULATION MOVEMENTS SINCE 1500

We use the year 1500 as a rough starting point for the era of European colonization of the other continents. It is well known that most contemporary residents of countries such as Australia and the United States are not descendants of their territory’s inhabitants circa 1500 but of people who arrived subsequently from Europe, Africa, and other regions. But exactly what proportions of the ancestors of today’s inhabitants of each country derive from what regions and from the territories of which present-day countries has not been systematically studied. Accordingly, we examined a wide array of secondary compilations to form the best available estimates of where the ancestors of the long-term residents of today’s countries were living in 1500. Generally, these estimates have to work back from information presented in terms of ethnic groupings in modern populations. For example, sources roughly agree on the proportion of Mexico’s population considered to be mestizo, that is, to have both Spanish and indigenous ancestors, on the proportion having exclusively Spanish ancestors, on the proportion exclusively indigenous, and on the proportion descended from migrants from other countries. There is similar agreement about the proportion of Haitians descended from African slaves, the proportion of people of (East) Indian origin in Guyana, the proportion of “mixed” and “Asian” people in South Africa, and so on. A crucial and challenging piece of our methodology is the attribution, with proper weights, of mixed populations such as mestizos and mulattoes to their original source countries. Saying, for example, that Mexican mestizos are descended from Spanish immigrants and native Mexicans gives no information about the shares of these different groups in their ancestry. Socially constructed descriptions of race and ethnicity may differ from the mathematical contributions to individuals’ ancestry in which we are interested.
Contributions from particular groups may be suppressed, exaggerated, or simply forgotten.


For these reasons, whenever possible we have used genetic evidence as the basis for dividing the ancestry of modern mixed groups that account for large fractions of their country’s population.5 The starting point for this analysis is differences in the frequencies with which different alleles (alternative DNA sequences at a fixed position on a chromosome) appear in ancestor populations from different parts of the world. Comparing the allele frequency in a modern population with the frequency in source populations, one can derive an estimate of the percentage contribution for each source. Early studies in this literature used blood group frequencies in modern populations to estimate ancestry. More recent studies use allele frequencies for multiple genes. In selecting among studies, we favored those based on larger samples with well-identified source populations as well as those done in more recent years using modern techniques.6 The genetic studies we consulted were sometimes of specific groups (such as mestizos) and sometimes of the population as a whole, unconditional on race or ethnicity. In the former case, we applied the genetic evidence to divide up ancestry in the particular mixed group, and multiplied by that group’s representation in the overall population.7 Examination of this genetic evidence produced a number of surprises regarding the ancestry of New World populations. For example, the usual historical narrative is that many native populations in the Caribbean, such as the Arawak who occupied the island of Hispaniola (present-day Haiti and the Dominican Republic), died out during the early decades of colonial rule due to disease and the effects of enslavement. However, genetic evidence
5. By “large,” we mean 30% or greater. In addition, we incorporated findings from genetic studies on U.S.
African-Americans and on Puerto Ricans and Costa Ricans of primarily Spanish descent, for whom modern genetic studies indicate appreciable admixture (with Europeans and Amerindians, respectively) since 1500. 6. We focus on autosomal DNA, which is not sex-linked, in preference to information on either the Y chromosome, which indicates descent along the male line, or mitochondrial DNA, which indicates descent along the female line. However, evidence from sex-linked genes can provide a useful check on our historical understanding. For example, among many mixed populations in the Caribbean, Native American characteristics are far more common in mitochondrial DNA than on Y chromosomes, indicating that native men were largely unable to breed, whereas native women produced children with European and African men. 7. We used genetic evidence in our analyses of Belize, Bolivia, Brazil, Cape Verde, Chile, Colombia, Costa Rica, Cuba, the Dominican Republic, Ecuador, Guatemala, Mexico, Nicaragua, Paraguay, Peru, Puerto Rico, the United States, and Venezuela. We also searched for genetic data for other countries for which our conventional sources list large mixed-ancestry populations, but were unsuccessful in finding anything in the cases of El Salvador, Honduras, and Panama. See Section II.4 of Main Appendix 1.1 of Online Appendix B as well as the individual country entries in the regional Appendices for details.


suggests that of the ancestors of current residents of the Dominican Republic alive in 1500, 3.6% were local Amerindians. In the case of Costa Rica, 86.5% of residents describe themselves as being of Spanish origin, but genetic evidence (unconditional on ethnicity or race) shows Costa Ricans’ ancestry (apart from a small Chinese minority) to be 61% Spanish, 30% Amerindian, and 9% African. A final example: the genetic data we examined show a significant contribution of Africans (10%) to the ancestry of the mestizos who make up 60% of Mexico’s population. In cases where genetic evidence on the ancestry of mixed groups was not available, we relied on textual accounts and/or generalizations from countries with similar histories for which genetic data were available. Genetic information can distinguish only between broad ancestry groups, such as Africans, Native Americans, and Europeans. Beyond this genetic information, other sources were brought to bear to help in the decomposition of mixed categories. For example, we use an archive on the slave trade to estimate the proportion of slaves in a given region who originated from parts of Africa identifiable with certain present-day countries. We apply estimates of where the world’s Ashkenazi Jews and Gypsies lived in 1500 to map people with these ethnic identifications to specific countries of today. Similarly, in some countries such as the United States and Canada, national censuses contain information on the breakdown by specific country of ancestry. Using these methods, we constructed a matrix of migration since 1500. The matrix has 165 rows, each for a present-day country, and 172 columns (the same 165 countries plus seven other source countries with current populations of less than one half million). Its entries are the proportion of long-term residents’ ancestors estimated to have lived in each source country in 1500. Each row sums to one.
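The matrix structure just described can be represented sparsely, one row per present-day country, and the row-sum-to-one property checked directly. A minimal sketch: the Malaysia row uses the five entries reported in the text, while the full matrix has 165 rows and 172 columns.

```python
# Sparse migration matrix: for each present-day country, the fraction of its
# long-term residents' ancestors living in each source country in 1500.
migration_matrix = {
    # Malaysia row as reported in the text.
    "Malaysia": {"Malaysia": 0.60, "China": 0.26, "India": 0.075,
                 "Indonesia": 0.04, "Philippines": 0.025},
    # A diagonal entry of 1.0: essentially all ancestors were local.
    "China": {"China": 1.0},
}

def check_rows_sum_to_one(matrix, tol=1e-9):
    """Every row of a valid migration matrix must sum to one."""
    for country, row in matrix.items():
        total = sum(row.values())
        assert abs(total - 1.0) < tol, f"{country} row sums to {total}"

check_rows_sum_to_one(migration_matrix)

# Principal diagonal: share of ancestors who lived in the country itself.
diagonal = {c: migration_matrix[c].get(c, 0.0) for c in migration_matrix}
```

The sparse dictionary representation mirrors the matrix's actual structure: most rows have only a handful of nonzero entries.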
To give an example, the row for Malaysia has five nonzero entries, corresponding to the five source countries for the current Malaysian population: Malaysia (0.60), China (0.26), India (0.075), Indonesia (0.04) and the Philippines (0.025). Throughout our analysis, we take a “fractional” view of ancestry and descent. Thus matrix entries measure the fraction of a country’s ancestry attributable to different source countries, without distinguishing between whether descendants from those source countries have mixed together or remained ethnically pure (although we did use this information in constructing the matrix). Similarly, when we calculate the number of descendants from a source country we add up people based on the fraction of their ancestry attributable to the source country. The principal diagonal of the matrix provides a quick indication of differences in the degree to which countries are now populated by the ancestors of their historical populations. The diagonal entries for China and Ethiopia (with shares below one-half percent being ignored) are 1.0, whereas the corresponding entries for Jamaica, Haiti, and Mauritius are 0.0 and that of Fiji is close to 0.5. In some cases, the diagonal entry may give a misleading impression without further analysis; for example, the diagonal entry for Botswana is 0.31 because only 31% of Botswanans’ ancestors are estimated to have lived in present-day Botswana in 1500, but another 67% were Africans who migrated to Botswana from what is now neighboring South Africa in the seventeenth and eighteenth centuries.

FIGURE I
(a) Distribution of Countries by Proportion of Ancestors from Own or Immediate Neighboring Country; (b) Distribution of World Population by Proportion of Ancestors from Own or Immediate Neighboring Country

Figures Ia and Ib are histograms of the proportions of countries and people, respectively, falling into decile bands with respect to the proportion of the current people’s ancestors residing


in the same or an immediate neighboring country in 1500.8 The figures show bimodal distributions, with 9.7% of countries having 0% to 10% indigenous or near-indigenous ancestry and 70.3% of countries having 90% to 100% such ancestry. Altogether, 80.9% of the world’s people (excluding those in the smallest countries, which are not covered) live in countries that are more than 90% indigenous in population, whereas 10.0% live in countries that are less than 30% indigenous, with the rest (dominated by Central America, the Andes, and Malaysia) falling in between. The compositions of nonindigenous populations are also of interest. The populations of Australia, New Zealand, and Canada are overwhelmingly of European origin, whereas Central American and Andean countries have both large Amerindian and substantial European-descended populations, and Caribbean countries and Brazil have substantial African-descended populations. Guyana, Fiji, Malaysia, and Singapore are among the countries with substantial minorities descended from South Asians, whereas Malaysia and Singapore also have large Chinese-descended populations.9 We illustrate differences both in the proportions of people of nonlocal descent and in the composition of those people by means of Figure II. Country shading indicates the proportion of the population not descended from residents of the same or immediate neighboring countries. Pie charts, drawn for thirteen macro-regions, show the average proportions descended from European migrants, from migrants (or slaves) from Africa, and from migrants from other regions, as well as the proportion descended from people of the same region.10 In terms
8. We define an immediate neighbor as sharing a land boundary or being separated by less than 24 miles of water. Data are from the Correlates of War Project (2000).
9.
The populations of Hong Kong and Taiwan are also overwhelmingly descended from Chinese who came to their territories after 1500, giving those entities 97.1% and 98% ancestry from what is now China, according to the matrix. 10. Regions were defined with the aim of keeping their number small enough for purposes of display and grouping countries with similar population profiles. The Caribbean includes Cuba, the Dominican Republic, Haiti, Jamaica, Puerto Rico, and Trinidad and Tobago. Europe is inclusive of the Russian Republic. North Africa and West and Central Asia includes all African and Asian countries bordering the Mediterranean, including Turkey, the traditional Middle East, Afghanistan, and former Soviet republics in the Caucasus and Central Asia. South Asia includes Pakistan, India, Bangladesh, Sri Lanka, Nepal, and Bhutan. East Asia includes Mongolia, China, Hong Kong, North and South Korea, Japan, and Taiwan. Southeast Asia includes the remainder of Asia plus New Guinea and Fiji. Note that for calculation of the pie chart shares, ancestors are assumed to be from “the same region” if they are from countries in the regions thus indicated. This assumption means that Europeans are left out of the “European migrant” category of the pie charts if they live in Europe, even if they have migrated within the continent, and likewise for sub-Saharan Africans in SSA.


FIGURE II Regional Ethnic Origins


of territory, about half the world’s land mass (excluding Greenland and Antarctica), comprising almost all of Africa, Europe, and Asia, is in countries with almost entirely indigenous populations (shown in black), whereas about one-third has less than 20% indigenous inhabitants, and the remainder, dominated by Central America, the Andes, and Malaysia, falls somewhere in between. The heterogeneity of regions in the Americas and Australia/New Zealand is highlighted by the pie charts, showing strong European dominance in Australia/New Zealand, the United States, Canada, and eastern South America, stronger indigenous presence in the Andes, and strong African representation in the Caribbean. We consider the effects of this heterogeneity in Section IV. Although we are mostly interested in using the migration matrix to better understand the determinants of long-run economic performance in countries as presently populated, the versatility of the data can be illustrated by using them to calculate the number of descendants of populations that lived five centuries ago and to see how they have fared. Given data on country populations in 2000, the matrix will tell the total number of people today who are descended from each 1500 source country, and where on the globe they are to be found. For instance, using 2000 population figures from the Penn World Tables 6.2, we find that there were 32.9 million descendants of 1500’s Irish alive at the turn of the millennium, of whom 11.3% lived in Ireland itself, 77.2% in the United States, 5.0% in Australia, and 4.1% in Canada. Combining the information in the matrix with population data for the years 1500 and 2000 yields a number of interesting insights. Because population data for 1500 are very noisy, particularly at the country level, we confine our analysis to looking at 11 large regions.11 The first two columns of Table I list the estimated population of each region in 1500 and 2000. 
The third column shows the increase in total population over the 500-year period. The primary determinant of this increase in density is the level of economic development in 1500. Europe, East Asia, and South Asia, which were highly developed, had the smallest increases in density. The United States and Canada, Australia and New Zealand, and the Caribbean, which were relatively lightly populated, lacked urban centers and were still home to many
11. Data are from McEvedy and Jones (1978). The regions are the same as those in Figure II, except that the three parts of South America are collapsed into a single region.

TABLE I
CURRENT POPULATION AND DESCENDANTS, BY REGION

Columns: (1) population in 1500 (millions); (2) population in 2000 (millions); (3) population growth factor; (4) descendants per person of 1500; (5) fraction of current population descended from the region's 1500 ancestors; (6) fraction of descendants of the 1500 population that live in the same region; (7) number of descendants living outside the region (millions).

Region                                  (1)      (2)     (3)    (4)    (5)    (6)     (7)
U.S. and Canada                        1.12      315     281   9.14  .0325   1.00    0.00
Mexico and Central America             5.80      137    23.6   16.8   .602   .846    15.0
The Caribbean                         0.186     34.4     185   17.8  .0367   .381    2.05
South America                          7.65      349    45.6   10.5   .227   .988   0.927
Europe                                 77.7      680    8.76   16.0   .975   .535     578
North Africa/West and Central Asia     35.5      530    14.9   14.6   .939   .958    22.0
South Asia                              103    1,320    12.8   12.9   .999   .990    13.2
East Asia                               132    1,490    11.3   11.6   1.00   .976    36.7
Southeast Asia                         18.7      555    29.7   28.5   .946   .988    6.50
Australia and New Zealand             0.200     22.9     114   3.68  .0322   1.00    0.00
Sub-Saharan Africa                     38.3      656    17.1   19.5   .981   .862     103
1638 QUARTERLY JOURNAL OF ECONOMICS

POST-1500 POPULATION FLOWS AND LONG-RUN GROWTH


preagricultural societies in 1500, had the largest increases.12 The next four columns of the table use the matrix to track the relationship between ancestor and descendant populations. In column (4), we calculate the number of descendants per capita for each region in 1500, which can be thought of as a kind of "genetic success" quotient. The lowest values of this measure are in the United States and Canada and in Australia and New Zealand, where native populations were largely displaced by European colonizers. Among the regions that were relatively developed in 1500, Europe, not surprisingly, has the largest number of descendants per capita. The two regions with the highest genetic success are sub-Saharan Africa and Southeast Asia, which were both relatively poor (and thus less densely populated) in 1500 but in which the native population was hardly at all displaced by migrants. Column (5) calculates the fraction of the current regional population that is descended from the region's own 1500 ancestors. This ranges from 0.03 for the United States and Canada and for Australia and New Zealand to almost one for South Asia and East Asia. Column (6) shows the fraction of descendants of the 1500 population that still live in the same region. This is lowest in the Caribbean (38%), Europe (54%), Mexico and Central America (85%), and sub-Saharan Africa (86%). The last column of the table calculates the total number of people descended from a region's 1500 population who live outside it. There were a total of 777 million such people in 2000, amounting to 12.8% of world population. Here Europe is by far the dominant contributor, with 578 million descendants living outside the region, followed by sub-Saharan Africa with 103 million and East Asia with 37 million.13

12. Estimates of pre-Columbian population in the Americas are highly controversial because of considerable uncertainty about the death rates in the epidemics that followed European contact. Because McEvedy and Jones's estimates fall toward the low end of some more recent appraisals, the resulting estimates of the increase in population density since 1500 could be overstated.

13. It is worth reminding the reader that we calculate "descendants" by adding up fractions of individuals' ancestry. Thus two individuals who each have half their ancestry from Europe add up to one descendant in our usage.

III. REASSESSING THE EFFECTS OF EARLY ECONOMIC DEVELOPMENT

III.A. Measures of Early Development

In the Introduction, we noted that studies including Hibbs and Olsson (2004), Chanda and Putterman (2005), Olsson and Hibbs (2005), and Comin, Easterly, and Gong (2006) find strong


correlations between measures of early agricultural, technological, or political development and current levels of economic development, but that these studies make relatively ad hoc adjustments, if any, to account for the large population movements on which this paper focuses. The new migration matrix puts us in a position to remedy these shortcomings and thereby put the theory that very early development persists in its effects on economic outcomes to a more stringent test. We use two measures of early development. The first is an index of state history called statehist. The index takes into account whether what is now a country had a supratribal government, the geographic scope of that government, and whether that government was indigenous or by an outside power. The version used by us, as in Chanda and Putterman (2005, 2007), considers state history for the fifteen centuries to 1500, and discounts the past, reducing the weight on each half century before 1451–1500 by an additional 5%. Let s_it be the state history variable in country i for the fifty-year period t. s_it ranges between 0 and 50 by definition, being 0 if there was no supratribal state, 50 if there was a home-based supratribal state covering most of the present-day country's territory, and 25 if there was supratribal rule over that territory by a foreign power, and taking values ranging from 15 (7.5) to 37.5 (18.75) for home- (foreign-) based states covering between 10% and 50% of the present-day territory or for several small states coexisting on that territory. statehist is computed by taking the discounted sum of the state history variables over the thirty half-centuries and normalizing it to be between 0 and 1 (by dividing it by the maximum achievable, i.e., the statehist value of a country that had s_it = 50 in each period). In a formula:

    statehist = [ Σ_{t=0}^{29} (1.05)^{-t} s_{i,t} ] / [ Σ_{t=0}^{29} 50 × (1.05)^{-t} ]

For illustration, Ethiopia has the maximum value of 1, China's statehist value is 0.906 (due to periods of political disunity), Egypt's value is 0.760, Spain's 0.562, Mexico's 0.533, Senegal's 0.398, and Canada, the United States, Australia, and New Guinea have statehist values of 0.14

14. Bockstette, Chanda, and Putterman (2002) and Chanda and Putterman (2005) also use versions of statehist that include data for the years between 1501 and 1950. The variable that we call statehist in this paper is the same as what Chanda and Putterman (2005, 2007) call statehist1500. Details on the construction
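The construction of the index can be sketched directly from this formula. The thirty half-century scores below are the only input (t = 0 for 1451-1500, counting backward); the 5% per-half-century discount and the normalization by the maximum attainable value follow the text.

```python
# Sketch of the statehist index defined by the formula above.
# Input: thirty half-century scores s[t], each between 0 and 50,
# with t = 0 the most recent period (1451-1500).

def statehist(s):
    """Discounted, normalized state-history index on [0, 1]."""
    assert len(s) == 30
    num = sum((1.05) ** (-t) * s[t] for t in range(30))
    den = sum((1.05) ** (-t) * 50 for t in range(30))
    return num / den

# A country with a full-coverage home-based state in every period
# (s_it = 50 throughout, as the text reports for Ethiopia) scores 1.
assert abs(statehist([50] * 30) - 1.0) < 1e-12
```

Because of the discounting, a given score contributes more the closer it is to 1500: a state that existed only in 1451-1500 yields a higher index than one that existed only in 1-50 CE.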


Our second measure of early development, agyears, is the number of millennia since a country transitioned from hunting and gathering to agriculture. Unlike a similar measure used by Hibbs and Olsson, which had values for eight macro regions, these data are based on individual country information augmented by extrapolation to fill gaps within regions. The data were assembled by Putterman with Trainor (2006) by consulting region- and country-specific as well as wider-ranging studies on the transition to agriculture, such as MacNeish (1991) and Smith (1995). The variable agyears is simply the number of years prior to 2000, in thousands, since a significant number of people in an area within the country's present borders are believed to have met most of their food needs from cultivated foods. The highest value, 10.5, occurs for four Fertile Crescent countries (Israel, Jordan, Lebanon, and Syria), followed closely by Iraq and Turkey (10), Iran (9.5), China (9), and India (8.5). Near the middle of the pack are countries such as Belarus (4.5), Ecuador (4), the Côte d'Ivoire (3.5), and Congo (3).
of the state history index, and the data themselves, can be found in Putterman (2004). Note that by beginning with 1 CE, statehist ignores some differences in the onset of state-level society, that is, those between the most ancient states such as Mesopotamia and Egypt (third millennium BCE), and more recent ones such as Rome and pre-Columbian Mesoamerica (first millennium BCE).

15. For further description, see Putterman with Trainor (2006).

16. The difference is primarily due to data availability. Accounts of the histories of kingdoms, dynasties, and empires are considerably easier to come by than are detailed agricultural histories.

At the bottom are countries such as Haiti and Jamaica (1), which received crop-growing immigrants from the American mainland only a few hundred years before Columbus, New Zealand (0.8), which obtained agriculture late in the Austronesian expansion, and Cape Verde (0.5), Australia (0.4), and others in which agriculture arrived for the first time with European colonists.15 It is worth noting that, whereas statehist measures a stock of experience with state-level organization that takes into account, for example, setbacks such as the disappearance, breakup, or annexation of an existing state by a neighboring empire, agyears simply measures the time elapsed since agriculture's founding in the country, with no attempt to gauge temporal changes in the kind, intensity, or prevalence of farming within the country's territory.16 We examine each of these variables both in its original form and adjusted to account for migration. Assuming the "early developmental advantages" proxied by statehist and agyears to be


FIGURE III Adjusted vs. Unadjusted statehist

something that migrants bring with them to their new country, the adjusted variables measure the average level of such advantages in a present-day country as the weighted average of statehist or agyears in the countries of ancestry, with weights equal to population shares. For instance, ancestry-adjusted statehist for Botswana is simply 0.312 times the statehist value for Botswana plus 0.673 times statehist for South Africa (referring to the people in South Africa in 1500, not those there presently) plus weights of 0.005 each times the statehist values of France, Germany, and the Netherlands (the ancestral homes of Botswana's small Afrikaner population). Algebraically, the "matrix adjusted" form of any variable is Xv, where X is the migration matrix and v is the variable in its unadjusted form. Figures III and IV show the effect of this adjustment on the variables statehist and agyears, respectively. The horizontal axis shows the variable in its unadjusted form and the vertical axis shows the variable in its adjusted form. In the case of statehist the data form a sort of check mark: there are a large number of countries along the 45° line, where adjusted and unadjusted statehist are the same because there has been little or no in-migration. These range from China and Ethiopia, with very high levels of statehist, down to eleven countries at or very near the origin, where there was no history of organized states before 1500 and there has been insignificant migration of people from countries that did have organized states in 1500. There are also a large


FIGURE IV Adjusted vs. Unadjusted agyears

number of countries along the vertical axis, where a population that had zero statehist has been replaced by migrants who have positive values. There is a great deal of dispersion in the adjusted values of statehist in this group, however, reflecting different mixes of immigrants (primarily European vs. African) and different degrees to which the native population was displaced. Only a handful of countries do not fall into one of these two categories. In the case of agyears, as shown in Figure IV, there are still many countries along the 45° line where there has been no in-migration. However, because almost all countries had a history of agriculture prior to the spread of European colonization after 1500, there is not the strong vertical element that is seen in Figure III. In this sense, agyears is clearly picking up a different and prior aspect of early development than statehist.17

17. Agriculture began in places such as the Fertile Crescent, China, and Mesoamerica millennia before states arose there, and there are numerous present-day countries, for example, in the Americas and Africa, on the territories of which agriculture had arisen but states had not as of 1500.

III.B. The Effect of Early Development on Current National Income

Table II shows the results of regressing the log of year 2000 per capita income on our early development measures. Each

TABLE II
HISTORICAL DETERMINANTS OF CURRENT INCOME

Dependent var.: ln(GDP per capita 2000)

                               (1)        (2)        (3)        (4)        (5)        (6)
statehist                   0.892∗∗∗              −1.43∗∗∗
                            (0.330)                (0.32)
Ancestry-adjusted statehist            2.01∗∗∗     3.37∗∗∗
                                       (0.38)      (0.41)
agyears                                                      0.134∗∗∗              −0.198∗∗∗
                                                             (0.035)                (0.044)
Ancestry-adjusted agyears                                               0.269∗∗∗    0.461∗∗∗
                                                                        (0.040)     (0.054)
Constant                    8.17∗∗∗    7.61∗∗∗     7.51∗∗∗   7.87∗∗∗    7.05∗∗∗     6.96∗∗∗
                            (0.14)     (0.17)      (0.16)    (0.21)     (0.23)      (0.22)
No. obs.                    136        136         136       147        147         147
R²                          .060       .219        .271      .080       .240        .293

Note. Robust standard errors in parentheses. ∗∗∗ p < .01, ∗∗ p < .05, ∗ p < .1.

regression includes the unadjusted form of one early development measure, the adjusted form, or both. Not surprisingly, given previous work, the tests suggest significant predictive power for the unadjusted variables. However, for both measures of early development, adjusting for migration produces a very large increase in explanatory power. In the case of statehist, R2 goes from .06 to .22, whereas in the case of agyears it goes from .08 to .24. The coefficients on the measures of early development are also much larger using the adjusted than the unadjusted values. In the third and sixth columns of the table we run “horse race” regressions including both the adjusted and unadjusted measures of early development. We find that the coefficients on the adjusted measures retain their significance and become larger, whereas the coefficients on the unadjusted measures become negative and significant. Before proceeding further, we test the robustness of our finding to different indicators of population flows, the addition of controls for geography, and alternative measures of early development. In Table III, we start by constructing measures of statehist and agyears that are adjusted in the spirit of Hibbs and Olsson (2004) and Olsson and Hibbs (2005) by simply assigning to four “neo-European” countries (the United States, Canada, New Zealand, and Australia) the statehist and agyears values of the

TABLE III
ROBUSTNESS TO ALTERNATIVE MEASURES OF MIGRATION, DESCENT, AND LANGUAGE

Dependent var.: ln(GDP per capita 2000)

                                     (1)       (2)       (3)       (4)       (5)       (6)
Ancestry-adjusted statehist                  2.76∗∗∗                       2.09∗∗∗
                                             (0.46)                        (0.38)
Ancestry-adjusted agyears                                        0.400∗∗∗            0.270∗∗∗
                                                                 (0.050)             (0.041)
"Neo-Europes" adjusted statehist   1.27∗∗∗  −0.741∗∗
                                   (0.32)   (0.355)
"Neo-Europes" adjusted agyears                         0.173∗∗∗ −0.133∗∗∗
                                                       (0.034)   (0.040)
Native                                                                    −0.867∗∗∗ −0.744∗∗∗
                                                                          (0.265)   (0.270)
Retained                                                                  −0.800∗∗  −0.583
                                                                          (0.361)   (0.358)
Constant                           8.02∗∗∗   7.55∗∗∗   7.66∗∗∗   7.00∗∗∗   8.87∗∗∗   8.07∗∗∗
                                   (0.14)    (0.17)    (0.19)    (0.22)    (0.44)    (0.43)
No. obs.                           136       136       147       147       129       139
R²                                 .122      .230      .127      .259      .286      .281

                                     (7)       (8)       (9)      (10)      (11)      (12)
Ancestry-adjusted statehist                  1.48∗∗∗                       2.11∗∗∗
                                             (0.32)                        (0.38)
Ancestry-adjusted agyears                              0.152∗∗∗                      0.256∗∗∗
                                                       (0.035)                       (0.043)
Fraction European descent          1.82∗∗∗   1.63∗∗∗   1.58∗∗∗
                                   (0.16)    (0.16)    (0.17)
Fraction European languages                                      1.31∗∗∗   1.04∗∗∗   1.06∗∗∗
                                                                 (0.21)    (0.18)    (0.19)
Constant                           7.83∗∗∗   7.27∗∗∗   7.11∗∗∗   8.10∗∗∗   7.26∗∗∗   6.86∗∗∗
                                   (0.10)    (0.13)    (0.19)    (0.14)    (0.17)    (0.23)
No. obs.                           138       138       138       113       113       113
R²                                 .458      .572      .526      .195      .418      .393

Note. Robust standard errors in parentheses. ∗∗∗ p < .01, ∗∗ p < .05, ∗ p < .1.

United Kingdom.18 As the table shows, these adjusted versions perform better than the unadjusted ones, but not nearly as well as the versions we construct using the migration matrix. When we run "horse race" regressions including statehist and agyears adjusted using both our matrix and the "neo-Europes" method (columns (2) and (4)), the coefficients on the matrix-adjusted measures rise in size and significance, whereas the coefficients on the "neo-Europes" adjusted measures become negative and significant. We then construct a series of other measures from our matrix. The first is the fraction of the population made up of "natives" (that is, people whose ancestors lived there in 1500). We include this alongside our measures of adjusted statehist and agyears in order to check that we are not just picking up the fact that there is a correlation between the share of a population's ancestors who lived elsewhere and the types of countries they lived in. In a similar spirit, we construct a measure of the fraction of the descendants of each country's people in 1500 who live in that country today, which we call "retained population." For example, only 40.2% of those descended from the 1500 population of what is now the United Kingdom live there today, whereas 97.4% of those of Indian descent still live in India.19 Neither of these measures eliminates the statistical significance of our adjusted history measures. native is negative and significant, showing that immigrant-populated countries are better off on average. Retained population enters our regression with a negative sign and is marginally significant, suggesting either that the venting of surplus population may have aided growth or that characteristics that led to countries being able to implant their population abroad also led them to be richer today.
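The matrix constructions used in this section (the Xv ancestry adjustment, the "native" share, and "retained population") can be sketched as follows, again on a hypothetical three-country matrix rather than the actual data.

```python
import numpy as np

# Hypothetical 3-country world (toy numbers, not the actual data).
# Row i of M gives country i's year-2000 ancestry shares over 1500
# source countries, so rows sum to one.

M = np.array([
    [0.90, 0.10, 0.00],
    [0.00, 0.95, 0.05],
    [0.20, 0.00, 0.80],
])
pop_2000 = np.array([10.0, 50.0, 40.0])   # millions, hypothetical
v = np.array([0.2, 0.9, 0.4])             # e.g., unadjusted statehist

# 1. Ancestry adjustment: the matrix-adjusted form of any variable is Xv.
v_adjusted = M @ v

# 2. "Native" share: fraction of each country's ancestry from its own
#    1500 population (the diagonal of M).
native = np.diag(M)

# 3. "Retained population": of all descendants of source country j's
#    1500 population, the fraction living in country j today.
descendants_by_source = pop_2000 @ M      # total descendants of each source
retained = pop_2000 * native / descendants_by_source
```

The same matrix product Xv is what generates the Botswana example earlier in the section: each adjusted value is a population-share-weighted average of the source countries' unadjusted values.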
18. Hibbs and Olsson actually assign these countries the values for the region treated as inheriting the Mesopotamian agrarian tradition, which includes all of North Africa, the Middle East, and Europe.

19. Note that the migration matrix is a rather blunt tool to use for this sort of exercise, because (even with the added population data) it doesn't tell us how many people left the country in question but only how many descendants they have today and where the descendants live. A small number of émigrés may have produced a large number of descendants (for example, the French Canadians) or a large number of émigrés may have produced relatively few (for example, African slaves shipped to the Caribbean).

Our third set of robustness checks examines whether our adjusted measures of statehist and agyears are simply proxying for a large European population or for speaking a European language. In columns (7)–(9) we include the fraction of the population

descended from 1500 inhabitants of European countries, a variable that we create using the matrix. Not surprisingly, given that most of the world’s highest-income countries are either in Europe or mainly populated by persons of European descent, the European descent variable comes in very significantly. By itself, it explains 46% of the variance in the log of GDP per capita. However, even controlling for this variable, our adjusted measures of state history and agriculture are quite significant. It is also worth pointing out that in controlling for European descent rather than, say, Chinese or Indian descent, we are implicitly taking advantage of ex post knowledge about which of the regions that were well developed in 1500 would have the wealthiest descendants today. In columns (10)–(12), we include the fraction of the population speaking one of five European languages (English, French, German, Spanish, or Italian), which is used by Hall and Jones (1999) as an instrument for “social infrastructure.” This variable explains only 20% of the variation in log of income per capita by itself and has a negligible effect on the magnitude and significance of our measures of early development. In Table IV, we consider the effect of a series of measures of geography on the statistical significance of our adjusted statehist and agyears variables, in order to make sure that our measures of early development are not somehow proxying for physical characteristics of the countries to which people moved. Specifically, we control for a country’s absolute latitude, a dummy for being landlocked, a dummy for being in Eurasia (defined as Europe, Asia, and North Africa), and a measure of the suitability of a country for agriculture. This last variable, constructed by Hibbs and Olsson (2004), takes discrete values between 0 (tropical dry) and 3 (Mediterranean). Taken one at a time, each of these controls has a significant effect on log income, with the predictable sign. 
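For concreteness, the kind of regression reported in Tables II–IV (log income on an adjusted early-development measure, with robust standard errors) can be sketched on synthetic data. The HC1 small-sample correction below is our assumption; the tables say only that robust standard errors are used.

```python
import numpy as np

# Minimal OLS-with-robust-SE sketch on synthetic data (not the actual
# sample): log income regressed on a single adjusted development measure.

rng = np.random.default_rng(0)
n = 150
statehist_adj = rng.uniform(0, 1, n)                 # regressor on [0, 1]
log_gdp = 7.5 + 2.0 * statehist_adj + rng.normal(0, 0.8, n)

X = np.column_stack([np.ones(n), statehist_adj])     # constant + regressor
beta = np.linalg.lstsq(X, log_gdp, rcond=None)[0]
resid = log_gdp - X @ beta

# Heteroskedasticity-robust (HC1) variance:
# (X'X)^-1 X' diag(e_i^2) X (X'X)^-1 * n/(n-k)
XtX_inv = np.linalg.inv(X.T @ X)
meat = X.T @ (X * resid[:, None] ** 2)
vcov = XtX_inv @ meat @ XtX_inv * n / (n - X.shape[1])
robust_se = np.sqrt(np.diag(vcov))
```

With the true slope set to 2.0, the fitted coefficient lands near the kind of magnitudes the tables report for ancestry-adjusted statehist; the robust standard errors are what go in parentheses.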
However, none of them individually, or even all four taken together, eliminates the statistical significance of matrix-adjusted statehist or agyears. Our final check for robustness is to see whether our matrixadjustment procedure works similarly well on measures or predictors of early development other than statehist and agyears. We consider four other indicators of early development. The first two come from Olsson and Hibbs (2005) and are meant to capture the conditions that favored the early transition of a region to agriculture, as proposed by Diamond (1997). geo conditions is the first principal component of climate (as measured above), latitude, the

TABLE IV
HISTORICAL AND GEOGRAPHICAL DETERMINANTS OF CURRENT INCOME

Dependent var.: ln(GDP per capita 2000)

Panel A
                              (1)        (2)        (3)        (4)        (5)        (6)
Ancestry-adjusted statehist 2.38∗∗∗    1.32∗∗∗    2.21∗∗∗    1.75∗∗∗    1.31∗∗∗    1.24∗∗∗
                            (0.40)     (0.43)     (0.41)     (0.55)     (0.42)     (0.42)
Absolute latitude                      0.0386∗∗∗                                   0.0337∗∗∗
                                       (0.0062)                                    (0.0084)
Landlocked                                        −0.628∗∗                        −0.558∗∗∗
                                                  (0.272)                          (0.172)
Eurasia                                                      0.594∗∗              −0.327
                                                             (0.286)               (0.247)
Climate                                                                 0.609∗∗∗   0.235∗
                                                                        (0.096)    (0.121)
Constant                    7.44∗∗∗    6.94∗∗∗    7.65∗∗∗    7.44∗∗∗    6.92∗∗∗    6.99∗∗∗
                            (0.17)     (0.15)     (0.21)     (0.16)     (0.17)     (0.20)
No. obs.                    111        111        111        111        111        111
R²                          .294       .527       .339       .334       .494       .593

Panel B
                              (1)        (2)        (3)        (4)        (5)        (6)
Ancestry-adjusted agyears   0.313∗∗∗   0.172∗∗∗   0.289∗∗∗   0.219∗∗∗   0.178∗∗∗   0.153∗∗∗
                            (0.048)    (0.053)    (0.051)    (0.062)    (0.060)    (0.054)
Absolute latitude                      0.0393∗∗∗                                   0.0404∗∗∗
                                       (0.0058)                                    (0.0087)
Landlocked                                        −0.500∗∗                        −0.577∗∗∗
                                                  (0.236)                          (0.160)
Eurasia                                                      0.631∗∗              −0.172
                                                             (0.250)               (0.237)
Climate                                                                 0.516∗∗∗   0.053
                                                                        (0.101)    (0.133)
Constant                    6.85∗∗∗    6.61∗∗∗    7.07∗∗∗    7.04∗∗∗    6.74∗∗∗    6.80∗∗∗
                            (0.25)     (0.21)     (0.28)     (0.26)     (0.25)     (0.25)
No. obs.                    116        116        116        116        116        116
R²                          .293       .523       .320       .334       .426       .563

Note. Robust standard errors in parentheses. ∗∗∗ p < .01, ∗∗ p < .05, ∗ p < .1.

size of the landmass on which a country is located, and a measure of a landmass’s East–West orientation. bio conditions is the first principal component of the number of heavy-seeded wild grasses and the number of large domesticable animals known to have


existed in a macro region in prehistory. The other two measures come from Comin, Easterly, and Gong (2010) and measure the degree of technological sophistication in the years 1 and 1500 CE in the regions that correspond to modern countries. In Table V we show univariate regressions in which the dependent variable is the log of GDP per capita in 2000 and each measure of early development appears either in its original form or adjusted using the migration matrix. The most notable finding of the table is that, as expected, adjusting for migration substantially improves the predictive power of any of the alternative measures of early development that we consider. In the cases of the two Hibbs–Olsson measures as well as the technology index for 1 CE, the R² of the regression rises by roughly fifteen percentage points. In the case of the technology index for 1500, the R² rises by 34 percentage points.20 A second finding of Table V is that the migration-adjusted versions of three of the variables we look at—bio conditions, geo conditions, and the technology index for 1500—do a better job of predicting income today than the matrix-adjusted versions of statehist and agyears. (This finding is confirmed in Appendix II, which presents a complete set of horse race regressions using all combinations of two of the six ancestry-adjusted measures of early development.) In the case of technology in 1500, this is not particularly surprising. statehist and agyears are meant to measure political and economic development in the millennia before the great shuffling of population that is captured in the migration matrix (for example, the average value of agyears is 4.7 millennia). The technology measure, by contrast, measures development immediately prior to that shuffling, and so focuses on information that is more likely to be predictive of current outcomes.
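Both geo conditions and bio conditions are first principal components of several underlying indicators. A minimal sketch of that construction, on synthetic indicators: the exact preprocessing used by Olsson and Hibbs is not specified here, so standardization to mean zero and unit variance is an assumption.

```python
import numpy as np

# Sketch of a "first principal component" construction of the kind
# behind geo conditions / bio conditions: combine several standardized
# indicators into the single linear index with maximal variance.
# Synthetic indicators, not the Olsson-Hibbs data.

rng = np.random.default_rng(1)
n = 100
latent = rng.normal(size=n)                       # common underlying factor
indicators = np.column_stack([
    latent + rng.normal(scale=0.5, size=n),       # e.g., a climate score
    latent + rng.normal(scale=0.5, size=n),       # e.g., latitude
    rng.normal(size=n),                           # an unrelated indicator
])

Z = (indicators - indicators.mean(0)) / indicators.std(0)   # standardize
U, S, Vt = np.linalg.svd(Z, full_matrices=False)
first_pc = Z @ Vt[0]                # scores on the first principal component
share = S[0] ** 2 / (S ** 2).sum()  # fraction of variance it explains
```

The first component loads mainly on the two correlated indicators and recovers the common factor; the unrelated indicator contributes little.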
20. Comin, Easterly, and Gong (2010) perform a similar exercise using Version 1.0 of our matrix.

TABLE V
ALTERNATIVE MEASURES/PREDICTORS OF EARLY HISTORICAL DEVELOPMENT

Dependent var.: ln(GDP per capita 2000)

                                    (1)       (2)       (3)       (4)
geo conditions                   0.752∗∗∗
                                 (0.075)
Ancestry-adjusted geo conditions           0.952∗∗∗
                                           (0.069)
bio conditions                                       0.746∗∗∗
                                                     (0.081)
Ancestry-adjusted bio conditions                               0.947∗∗∗
                                                               (0.074)
Constant                         8.42∗∗∗   8.19∗∗∗   8.43∗∗∗   8.21∗∗∗
                                 (0.09)    (0.08)    (0.09)    (0.07)
No. obs.                         105       105       105       105
R²                               .415      .574      .417      .581

                                             (5)       (6)       (7)       (8)
Technology index 1 CE                      0.0924
                                           (0.3758)
Ancestry-adjusted technology index 1 CE              2.51∗∗∗
                                                     (0.59)
Technology index 1500 CE                                       1.55∗∗∗
                                                               (0.30)
Ancestry-adjusted technology index 1500 CE                               3.26∗∗∗
                                                                         (0.30)
Constant                                   8.42∗∗∗   6.41∗∗∗   7.77∗∗∗   6.54∗∗∗
                                           (0.28)    (0.46)    (0.20)    (0.21)
No. obs.                                   125       125       114       114
R²                                         .000      .133      .183      .525

Note. Robust standard errors in parentheses. ∗∗∗ p < .01, ∗∗ p < .05, ∗ p < .1.

By contrast, the fact that the matrix-adjusted versions of geo conditions and bio conditions outperform the similarly adjusted versions of agyears in predicting income today is more mysterious. The Hibbs and Olsson variables are designed to be a measure of the suitability of local conditions to the emergence of agriculture. Hibbs and Olsson think that these variables should predict the timing of the Neolithic revolution, and through that channel predict income today. One would thus expect that a measure of when agriculture

actually did emerge, agyears, would have superior predictive power.21

21. Note that for most countries, bio conditions refers not to conditions in the country itself but to those in the region from which it is judged to have inherited its package of agricultural technology (there are only eight of these "macro regions"). The superior predictive power of bio conditions results, most importantly, from the grouping of countries stretching from Ireland to Pakistan into a single macro region that is assigned the sample's highest bio conditions, those of the Fertile Crescent. By contrast, our variable agyears has a value of 5.5 millennia for the United Kingdom and 10 millennia on average for the Fertile Crescent countries. By assigning to Europe, the region that was the source of most rich-country populations today, the high value of the Fertile Crescent, the Hibbs–Olsson measure mechanically makes bio conditions an excellent predictor of income today, when adjusted by the migration matrix. Another way to see this problem is to note that despite the fact that China had lower values for bio conditions than does a Europe treated as part of a "greater Fertile Crescent" macro region (0.153 vs. 1.46, on a scale with mean zero and standard deviation one), China developed agriculture some three thousand years earlier than Europe. Thus the prediction of the Hibbs–Olsson story—that biogeographic conditions should predict the timing of the Neolithic revolution, which in turn predicts income today—is falsified by more location-specific Neolithic revolution timing. Similarly, the mapping from geo conditions to the development of agriculture does not work nearly as well as the mapping from geo conditions (in its matrix-adjusted form) to current income. For example, within Europe, the region with the highest values of geo conditions, the correlation between geo conditions and agyears is slightly negative, driven by the strongly negative correlation between latitude and agyears.

Overall, the results in Tables II–V show that adjusting for migration improves the predictive power of measures of early development, and that once migration is taken into account, the ability of these historical measures to predict income today is surprisingly high. This finding is consistent with the hypothesis that especially Europeans and to some extent East and South Asians carried something with them—human capital, culture, institutions, or something else—that raised the level of income in the Americas, Australia, Malaysia, and elsewhere. The findings are also consistent with the possibility that a corresponding disadvantage of Africans and Amerindians with respect to one or more of these characteristics has played out in their countries of origin and, for Africans, those to which they were transported as slaves. Any such preexisting disadvantages were almost certainly made worse by the nature of European contact. In the case of Africans, for example, the obvious candidate is the impact of slavery both on the descendants of slaves themselves and on the culture and institutions of the regions from which slaves were taken (Nunn 2008).

By contrast, the findings of Tables II–V cast doubt on the idea that the same favorable climatic conditions that led some regions to develop early are also responsible for those regions enjoying an economic advantage today. This can be seen both in the superiority of migration-adjusted measures of early development to the unadjusted versions of these measures and in the robustness of migration-adjusted early development measures to inclusion of measures of countries' geographic characteristics.

As implied above, our preferred interpretation of these results is that they reflect a causal link from migration of people from countries with higher levels of early development to subsequent economic growth. It is nonetheless worth considering whether the results might instead simply reflect, or at least be biased by, the endogeneity of migration. Suppose that people from countries with earlier development ended up migrating to places that were better in some respect (climate, institutions, etc.) and that it was this aspect of quality rather than the presence of migrants from areas of early development that ended up making these places wealthy. Some reassurance that this is not all that is going on is provided by Tables III and IV. Controlling for aspects of the quality of physical environment in destination countries, such as climate and latitude, does not make the effect of early development go away. Similarly, if relative emptiness of some countries both attracted a lot of migrants from early developing areas and also made those countries wealthy, then this effect would be picked up by the variable native in Table III. Further, Engerman and Sokoloff (2002) show that European migrants, when they were able to choose where in the New World to migrate, were not attracted to those regions that in the long run would achieve the highest levels of economic success. In future work we hope to further address the issue of causality by looking more closely at the timing of migration and changes in institutions and income. For now, however, we continue to examine the link between early development of a population and the subsequent income of their descendants, wherever they may live, with the strong suspicion that this is causal.

III.C. Mechanism

Under the assumption that early development of a country's population is causally linked to current income, one would want to know the specific channel through which this effect flows. For the most part, we consider this an issue for future research. However, we cannot resist taking an initial look at two possibilities. Recent literature has stressed the roles of institutions and culture as fundamental determinants of national income. One could well imagine that whatever it was that immigrants with long histories of state development took with them that led to higher income

manifests itself in better institutions or in a culture more favorable to economic success. In the first part of Table VI, we look at the relationship between our statehist measure and three indicators of institutional quality: executive constraints, expropriation risk, and government effectiveness (all from Glaeser et al. [2004]). In each case, the dependent variable is normalized to have a standard deviation of one. Not surprisingly, using the matrix to adjust statehist to reflect the experience of a country's population greatly improves the ability of this variable to predict the quality of institutions. Once matrix-adjusted, statehist is also statistically significant in all three cases. Similarly, the estimated coefficient on statehist rises in each case moving from the unadjusted to the adjusted measure. The coefficient in column (6), for example, implies that moving from a statehist of zero (for example, Rwanda) to a statehist of one (Ethiopia) raises government effectiveness by 1.32 standard deviations—roughly the difference between Bhutan or Bahrain, on the one hand, and the United States, on the other.

The rest of Table VI considers some indicators of culture taken from the World Values Survey (WVS). Tabellini (2005) focuses on three measures—trust, control, and respect—that he finds positively and one measure—obedience—that he finds negatively correlated with per capita income in 69 regions of Belgium, France, Germany, Italy, the Netherlands, Portugal, Spain, and the United Kingdom.22 Trust is the generalized trust measure used in Knack and Keefer (1997) and other studies; control is the response to a question about the degree to which respondents feel that what they do affects what happens to them; and respect and obedience indicate respondents' answers regarding how important it is to teach children "tolerance and respect for others" and "obedience," respectively.
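Mechanically, each column of Table VI is an ordinary least squares regression of a standardized outcome on a single early-development measure. The sketch below (in Python, with made-up numbers rather than the paper's data) shows the two steps: rescale the dependent variable to a standard deviation of one, then fit the regression.

```python
import numpy as np

def standardized_ols(y, x):
    """Regress y (rescaled to unit standard deviation) on x with an
    intercept, returning the slope, as in the Table VI specifications."""
    y = y / y.std()                              # normalize dependent variable to sd = 1
    X = np.column_stack([np.ones_like(x), x])    # intercept plus one regressor
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]                               # slope on the early-development measure

# Hypothetical data: six countries' statehist scores and an institutions index.
rng = np.random.default_rng(0)
statehist = np.array([0.1, 0.3, 0.4, 0.6, 0.8, 0.9])
institutions = 2.0 * statehist + rng.normal(0, 0.1, 6)

slope = standardized_ols(institutions, statehist)
```

The published estimates additionally use robust standard errors, which this sketch omits.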
We also examine the variable thrift, a cultural factor that Guiso, Sapienza, and Zingales (2006) find to significantly predict savings, and that is derived from the same WVS question, where one of the qualities the respondent may list as important to encourage children to learn at home is "thrift, saving money and things." In columns (7)–(16) we show regressions of the same form as for the previous variables, where each of the five measures is the dependent variable in one pair of regressions.

22. Tabellini also reports a similar correlation for the first principal component of the four measures for a cross section of 46 countries.

As above, the

TABLE VI
INFLUENCE OF EARLY DEVELOPMENT ON CURRENT INSTITUTIONS AND CULTURE

Dependent var.       Executive constraints   Expropriation risk   Government effectiveness       Trust
                        (1)       (2)          (3)       (4)          (5)        (6)          (7)       (8)
statehist              0.158                 0.658∗∗                 0.445                   0.872∗∗
                      (0.274)                (0.287)                (0.271)                 (0.350)
Ancestry-adjusted                0.670∗∗               1.33∗∗∗                  1.32∗∗∗                1.40∗∗∗
  statehist                     (0.309)               (0.33)                   (0.30)                 (0.39)
Constant               1.95∗∗∗   1.71∗∗∗     3.89∗∗∗   3.51∗∗∗      −0.180     −0.604∗∗∗   −0.349∗   −0.702∗∗∗
                      (0.13)    (0.15)      (0.13)    (0.15)       (0.114)    (0.118)     (0.186)   (0.222)
No. obs.               141       141         111       111          144        144         81        81
R2                     .002      .033        .047      .134         .019       .123        .069      .114

Dependent var.             Control               Respect              Obedience               Thrift
                        (9)       (10)         (11)      (12)         (13)       (14)       (15)      (16)
statehist             −1.31∗∗∗               −0.931∗∗                −1.12∗∗∗               0.911∗∗
                      (0.32)                 (0.432)                 (0.35)                (0.382)
Ancestry-adjusted               −0.063                  −0.403                  −1.64∗∗∗              0.822∗
  statehist                     (0.434)                 (0.561)                 (0.444)              (0.478)
Constant               0.522∗∗∗  0.031       0.373∗∗    0.203        0.449∗∗    0.824∗∗∗  −0.365∗∗  −0.413∗
                      (0.167)   (0.245)     (0.158)    (0.254)      (0.180)    (0.258)    (0.159)   (0.241)
No. obs.               80        80          81         81           81         81         81        81
R2                     .156      .000        .078       .010         .113       .157       .075      .040

Note. All dependent variables are normalized to have a standard deviation of one. Robust standard errors in parentheses. ∗∗∗ p < .01, ∗∗ p < .05, ∗ p < .1.

QUARTERLY JOURNAL OF ECONOMICS

dependent variables are normalized to have standard deviations of one. In the regressions using statehist, that variable is significantly positively associated with trust and thrift, and significantly negatively associated with control, respect, and obedience. Replacing statehist with its matrix-adjusted version, we find that the correlations with trust and obedience become stronger, whereas those with respect, control, and thrift become nonsignificant (although in the case of thrift the change from unadjusted to adjusted is small). The correlations that remain significant are consistent with the possibility that one of the ways in which early development raises income is by promoting cultural traits that favor good economic outcomes. Overall, the results for measures of institutions are somewhat more internally consistent than for the cultural variables, although for the cultural variables that work best in terms of being linked to adjusted state history, trust and obedience, the coefficient size and fit of the regression are quite comparable to the best-working measures of institutions, expropriation risk and government effectiveness. Of course, this exercise does not say anything conclusive about the causal path from early development to high income, good institutions, and growth-promoting culture. Early development could cause good institutions and/or good culture, which in turn cause high income; or early development could cause high income through some other channel and affect institutions and culture only through income.

III.D. Source Region and Current Region Regressions

Although our interest in most of this paper is in how the migration matrix can be used to map data on place-specific early development into a measure of early development appropriate to a country's current population, the matrix can also be used to infer characteristics of the source countries based only on current data.
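Backing out source-country characteristics in this way is a least squares problem: current log income is regressed on the rows of the migration matrix (the ancestry shares), with one region omitted. A toy sketch with hypothetical shares and coefficients (none of these numbers are from the paper), showing that the source coefficients are recoverable:

```python
import numpy as np

# Hypothetical ancestry shares for four countries over source regions A and B
# (the share of the omitted region C is 1 - A - B, playing the role that
# sub-Saharan Africa plays as the omitted category in Table VII).
shares = np.array([
    [1.0, 0.0],
    [0.0, 1.0],
    [0.5, 0.3],
    [0.2, 0.1],
])

# Suppose true log income = 7.0 + 2.0 * share_A + 1.0 * share_B.
log_gdp = 7.0 + shares @ np.array([2.0, 1.0])

# "Source region coefficients": least squares with an intercept.
X = np.column_stack([np.ones(len(shares)), shares])
coef, *_ = np.linalg.lstsq(X, log_gdp, rcond=None)
# coef recovers [7.0, 2.0, 1.0]
```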
More specifically, if we assume that emigrants from a particular region share some characteristics that affect the income of countries to which they have migrated, then we can back out these characteristics by looking at data on current outcomes and migration patterns. To pursue this idea we regress log GDP per capita in 2000 on the fraction of the current population that comes from each of the 11 regions defined previously for the exercises of Table I. We call the coefficients from this regression, shown in column (1) of Table VII, “source region coefficients.” Loosely speaking,

TABLE VII
SOURCE REGIONS AND CURRENT REGIONS AS DETERMINANTS OF CURRENT INCOME

                                       ln(GDP per capita 2000)
                                   (1)           (2)               (3)
                                 Source        Current       Source      Current
                                 regions       regions       regions     regions
U.S. and Canada                 33.7∗∗∗        3.03∗∗∗      −2,273∗∗∗    74.8∗∗∗
                                (5.6)          (0.16)        (384)       (12.4)
Mexico and Central America       0.380         1.10∗∗∗       1.90       −0.870
                                (0.495)        (0.24)        (1.25)      (0.710)
The Caribbean                    3.67∗∗∗       1.33∗∗∗       0.834       0.268
                                (0.81)         (0.30)        (1.884)     (0.221)
South America                    0.498∗∗       1.35∗∗∗       1.11∗∗     −0.415
                                (0.229)        (0.20)        (0.51)      (0.419)
Europe                           2.35∗∗∗       2.23∗∗∗       2.66∗∗∗    −0.265
                                (0.16)         (0.18)        (0.47)      (0.476)
North Africa/West and            1.29∗∗∗       1.28∗∗∗       0.654       0.613
  Central Asia                  (0.21)         (0.21)        (1.349)     (1.248)
South Asia                       0.872∗∗∗      0.388∗∗       3.05∗∗∗    −2.53∗∗∗
                                (0.265)        (0.175)       (0.39)      (0.39)
East Asia                        2.15∗∗∗       1.81∗∗∗       4.77∗∗∗    −2.81∗∗∗
                                (0.54)         (0.56)        (0.57)      (0.87)
Southeast Asia                   0.805∗∗∗      1.07∗∗∗       1.59∗∗     −0.913∗
                                (0.242)        (0.32)        (0.63)      (0.500)
Australia and New Zealand        8.09∗∗∗       2.72∗∗∗      −1.59∗       0.436
                                (2.10)         (0.17)        (0.87)      (0.444)
Constant                         7.27∗∗∗       7.34∗∗∗            7.22∗∗∗
                                (0.11)         (0.13)            (0.12)
No. obs.                         152           152                152
R2                               .631          .584               .681

Note. In regression (1), the independent variables are the shares of the population in each country originating in each region. In regression (2), the independent variables are dummies for a country being located in a particular region. In regression (3), the independent variables are both of the above. Robust standard errors in parentheses. ∗∗∗ p < .01, ∗∗ p < .05, ∗ p < .1.

they measure how having a country's population composed of people from a particular region can be expected to affect GDP per capita. For example, the source region coefficient for Europe is 2.35, whereas that for sub-Saharan Africa is zero, because this is the omitted category. Thus these coefficients say that moving 10% of a country's population from European to African origin would be expected to lower ln(GDP) by .235 points.23

23. There are three surprisingly high coefficients in this column: the United States and Canada, the Caribbean, and Australia and New Zealand. In all three cases the explanation is that the source populations in question contributed a small share of the population to only a few current countries. For example, descendants of people living in the United States and Canada as of 1500 contribute only 3.1% and 3.3% of the populations of those two countries and are found nowhere else in the world. Thus, because the United States and Canada are wealthy, this source population gets assigned a high coefficient in the regression. For this reason, we focus our attention on source region coefficients for populations that account for larger population shares in more countries.

The second column of Table VII shows a more conventional regression of the log of GDP per capita in the year 2000 on dummies for the region in which the country is located (as in the first column, sub-Saharan Africa is the omitted region). We call these "current region coefficients." The R2 of the regression with current region dummies is about .05 lower than the R2 of the regression with source region shares. It is also interesting to compare the coefficients on the source and current regions. There is a strong tendency for regions that are rich to also have large values for their source region coefficients. For example, among the six source regions that account for 97% of the world's population (in size order: East Asia, South Asia, Europe, sub-Saharan Africa, Southeast Asia, and North Africa/West and Central Asia), the magnitudes of the coefficients are very similar, with the single exception of South Asia. This similarity of coefficients in the two regressions is not much of a surprise, given the fact, discussed above, that most countries are populated primarily by people whose ancestors lived in that same country 500 years ago. In column (3) of Table VII, we regress log income in 2000 on both the source region and current region measures. The R2 is somewhat higher than in the first two columns, indicating that source regions are not simply proxying for current regions, or vice versa. F-tests easily reject (p-values of .000) the null hypotheses that either the coefficients on source region or the coefficients on current region are zero. Interestingly, the source region coefficients on Europe and East Asia remain positive, whereas the current region coefficients become negative, suggesting that having population from these regions, rather than being located in them, is what tends to make countries rich.

III.E. Population Heterogeneity and Income Levels

The exercises in Section III.B show that a higher average level of early development in a country is robustly correlated with higher current income. The most likely explanation for this finding is that people whose ancestors were living in countries that

developed earlier (in the sense of implementing agriculture or creating organized states) brought with them some advantage— such as human capital, knowledge, culture, or institutions—that raises the level of income today. Depending on what exact advantage is conferred by earlier development, there might also be implications for how the variance of early development among a country’s contributing populations would affect output. For example, if early development conferred some cultural attribute that was good for growth, then in a population containing some people with a long history of development and some with a short history, this growth-promoting cultural trait might simply be transferred from the long-history group to the short-history group. Similarly, growth-promoting institutions brought along by people with a long history of development could be extended to benefit people with short histories of development. An obvious model for such transfer is language: in many parts of the world, descendants of people with shorter histories of states and agriculture speak languages that come from Europe, which has a longer history in those respects. If growth-promoting characteristics transfer in this fashion, then a country with half its population coming from areas with high statehist and half from areas with low statehist might be richer than a country with the same average statehist but no heterogeneity. The above logic would tend to predict that, holding average history of early development constant, a higher variance of early development would raise a country’s level of income. However, there are channels that work in the opposite direction. As will be shown below, higher variance of early development predicts higher inequality. 
Inequality is often found to impact growth negatively (see, for example, Easterly [2007]), and one could easily imagine that the inequality generated by heterogeneity in early development history would lead to inefficient struggles over income redistribution or the creation of growth-impeding institutions. This is certainly the flavor of the story told by Sokoloff and Engerman (2000). Similarly, the ethnic diversity that comes along with a population that is heterogeneous in its early development history could hinder the creation of growth-promoting institutions. To assess the effect of heterogeneity in early development, we create measures of the weighted within-country standard deviations of statehist, agyears, and source region coefficients, where the weights are the fractions of that source country’s descendants in

current population. The mean within-country standard deviation of statehist is 0.097, and the standard deviation across countries is 0.089. For agyears the mean standard deviation is 0.764, and the standard deviation across countries is 0.718. For the source region coefficients, the values are 0.347 and 0.686, respectively. In all cases, the distribution of the heterogeneity measures is skewed to the right, with a significant number of countries (those that experienced no immigration) having values of zero. In Table VIII we present regressions of the log of current income per capita on the standard deviation of each of our three measures of early development (statehist, agyears, and source region coefficients), with and without controls for the mean of each of the variables. Once the mean level of statehist is controlled for, the standard deviation of statehist has a positive and significant effect on current income. The same is true for agyears. Interestingly, the coefficient on the standard deviation of the source region coefficient is not significant at all once the mean of the source region coefficients is included. Including these measures of standard deviation has little effect on the size or significance of the coefficients on the means of statehist or agyears, as seen in Table II.24 In columns (3), (6), and (9) of the table, we experiment with including the share of the population that is of European descent, to make sure that our heterogeneity measure is not simply reflecting the presence of Europeans in some countries and not others. As can be seen, the coefficients on the standard deviations of our early development measures are not greatly affected. The positive coefficients on the standard deviations of statehist and agyears imply, as discussed above, that a heterogeneous population will be better off than a homogeneous population with

24. We also considered the possibility that the effect of heterogeneity in early development on current income is nonlinear. Ashraf and Galor (2008) argue that this is the case for genetic diversity: people with different genetic backgrounds are complements in production of knowledge, but genetic diversity also reduces social cohesion and hinders the transmission of human capital within and across generations. As a result, there should be a hump-shaped relationship between genetic diversity and income. Ashraf and Galor find evidence for this in cross-country data. Proxying for genetic diversity with migratory distance from East Africa, they find that the optimal level of genetic diversity occurs in East Asia. About three-fourths of the countries in the world have genetic diversity that is higher than optimal. We tested for a similar effect by including the square of the standard deviation of the relevant early development measure in columns (2), (4), and (6) of Table VIII. In the cases of statehist and agyears, this new term entered insignificantly. In the case of the source region coefficients, the coefficient on the square of the standard deviation was negative and significant, implying a hump-shaped relationship. However, the peak of the hump was when the standard deviation of the source region coefficients was equal to 2.95. Only two countries, the United States and Canada, had values that exceeded this level.

TABLE VIII
THE EFFECT OF HETEROGENEITY IN EARLY DEVELOPMENT ON CURRENT INCOME

Dependent var.: ln(GDP per capita 2000)

                                    (1)      (2)      (3)      (4)       (5)       (6)       (7)       (8)       (9)
Standard deviation of statehist    1.40     2.02∗∗   1.70∗∗∗
                                  (0.91)   (0.78)   (0.65)
Ancestry-adjusted statehist                 2.08∗∗∗  1.55∗∗∗
                                           (0.37)   (0.32)
Standard deviation of agyears                                 0.377∗∗∗  0.312∗∗∗  0.278∗∗∗
                                                             (0.108)   (0.094)   (0.086)
Ancestry-adjusted agyears                                               0.260∗∗∗  0.178∗∗∗
                                                                       (0.04)    (0.03)
Standard deviation of source                                                                0.414∗∗∗  0.0844    0.0846
  region coefficients                                                                      (0.06)    (0.0636)  (0.0647)
Mean source region coefficient                                                                        0.982∗∗∗  0.978∗∗∗
                                                                                                     (0.067)   (0.141)
Fraction European descent                            1.60∗∗∗                      1.49∗∗∗                       0.0116
                                                    (0.16)                       (0.16)                        (0.2967)
Constant                           8.33∗∗∗  7.38∗∗∗  7.08∗∗∗  8.21∗∗∗   6.86∗∗∗   6.83∗∗∗   8.33∗∗∗   7.26∗∗∗   7.26∗∗∗
                                  (0.15)   (0.18)   (0.14)   (0.14)    (0.23)    (0.19)    (0.11)    (0.09)    (0.10)
No. obs.                           136      136      136      147       147       147       152       152       152
R2                                 .013     .245     .586     .056      .278      .546      .065      .634      .634

Note. Robust standard errors in parentheses. ∗∗∗ p < .01, ∗∗ p < .05, ∗ p < .1.

the same average level of early development. For example, using the coefficients in column (2) of Table VIII, a country with a population composed of 50% people with a statehist of 0.4 and 50% with a statehist of 0.6 will be 20% richer than a homogeneous country with statehist of 0.5. A country with 50% of the population having statehist of 1.0 and 50% with statehist of zero would be twice as rich as a homogeneous country with the same average statehist. (This latter example is quite outside the range of the data, however. The highest values of the standard deviation of statehist in our data set are for Fiji (0.346), Cape Verde (0.301), and Guyana (0.293). In the example, the standard deviation is 0.5.) The coefficients also have the unpalatable property that a country's predicted income can sometimes be raised by replacing high-statehist people with low-statehist people, because the decline in the average level of statehist will be more than balanced by the increase in the standard deviation. For example, the coefficients just discussed imply that, combining populations with statehist of 1 and 0, the optimal mix is 86% statehist = 1 and 14% statehist = 0. A country with such a mix would be 41% richer than a country with 100% of the population having a statehist of one.25 We think that this somewhat counterintuitive finding may result from a particular set of historical contingencies that make simple policy inferences problematic. First, during the long era of European expansion spanning the fifteenth to early twentieth centuries, European-settled countries such as the United States, Chile, Mexico, and Brazil, having substantial African and/or Amerindian minorities, attained considerably higher incomes than many homogeneously populated Asian countries with relatively long state histories, including Bangladesh, Pakistan, India, Sri Lanka, Indonesia, and China. Second, the latter group of countries experienced little growth, or negative growth, during those same centuries.
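The optimal-mix arithmetic above can be verified directly. With log income linear in the mean and standard deviation of statehist, a population mixing statehist-one and statehist-zero people in proportion p has mean p and standard deviation sqrt(p(1 − p)). A sketch using point estimates of roughly 2.08 and 2.02 (the column (2) magnitudes discussed in the text):

```python
import numpy as np

A_MEAN, B_SD = 2.08, 2.02   # coefficients on mean and std. dev. of statehist

def log_income(p):
    """Predicted contribution to log income when a fraction p of the
    population has statehist = 1 and the rest statehist = 0."""
    mean = p
    sd = np.sqrt(p * (1 - p))
    return A_MEAN * mean + B_SD * sd

# Grid search for the income-maximizing mix.
p_grid = np.linspace(0.0, 1.0, 10001)
p_star = p_grid[np.argmax(log_income(p_grid))]

# Log-point gain over a population that is 100% statehist = 1.
gain = log_income(p_star) - log_income(1.0)
# p_star ≈ 0.86 and gain ≈ 0.41, matching the 86%/41% figures in the text.
```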
Chanda and Putterman (2007) argue that the underperformance of the populous Asian countries during the period 1500–1960 is an exception to the rule (which they find to have held up to 1500 and again since 1960) that earlier development of agriculture and states has been associated with faster economic development during most of world history. Although our regression result reflects the fact that population heterogeneity

25. The specification that we use implies that this property must hold as long as the coefficients on both the mean and standard deviation are positive. However, when we use variance on the right-hand side, in which case the property does not automatically hold, it is nonetheless implied by the estimates.

has not detracted from economic development in the first group of countries, it seems best not to infer from it that "catch up" by homogeneous Old World countries would be speeded up by infusions of low-statehist populations into existing high-statehist countries.

IV. POPULATION HETEROGENEITY AND INCOME INEQUALITY

The finding that current income is influenced by the early development of a country's people, rather than of the place itself, provides evidence against some theories of why early development is important but leaves many others viable. Early development may matter for income today because of the persistence of institutions (among people, rather than places), because of cultural factors that migrants brought with them, because of long-term persistence in human capital, or because of genetic factors that are related to the amount of time since a population group began its transition to agriculture. Many of the theories that explain the importance of early development in determining the level of income at the national level would also support the implication that heterogeneity in the early development of a country's population should raise the level of income inequality. For example, if experience living in settled agricultural societies conveys to individuals some cultural characteristics that are economically advantageous in the context of an industrial society, and if these characteristics have a high degree of persistence over generations, then a society in which individuals come from heterogeneous backgrounds in terms of their families' economic histories should ceteris paribus be more unequal. We pursue three different approaches to examining the determinants of within-country income inequality. We begin by showing that heterogeneity in the historical level of development of countries' residents predicts the level of income inequality in a cross-country regression.
Second, we construct measures of population heterogeneity based both on the current ethnic and linguistic groupings and on the ethnic and linguistic differences among the sources of a country’s current population. We show that allowing for these other measures of heterogeneity does not reduce the importance of heterogeneity in historical development as a predictor of current inequality. Finally, we pursue an implication of these findings by asking whether, within a country, people originating from countries that had characteristics predictive of low national income are in fact found to be lower in the income distribution.

IV.A. Historical Determinants of Current Inequality

In this exercise our dependent variable is the Gini coefficient for 2000–2004 or for the most recent decade, using data from the UN World Income Inequality Database as supplemented by Barro (2008). Our key right-hand-side variables are the weighted within-country standard deviations of statehist, agyears, and source region coefficients, as constructed in Section III.E. We experiment with including the levels of these matrix-adjusted early development measures as additional controls. The results are shown in Table IX. Our finding is that heterogeneity in the early development experience of the ancestors of a country's population is significantly related to current inequality. To give a feel for the size of the coefficients, we look at the case of agyears. The standard deviation of agyears in Brazil is 1.976 millennia. By contrast, in countries that have essentially no in-migration, such as Japan, the standard deviation is zero. Applying the regression coefficient of 0.0571 from the fourth column of Table IX, this would say that variation in early development in Brazil would be expected to raise the Gini there by 0.11, which is certainly an economically significant amount. Because Brazil's Gini was 0.57 and Japan's 0.25, the exercise suggests that about one-third of the difference in inequality between the two countries may be attributable to the difference in the heterogeneity of their populations' early development experiences. The results in columns (5) and (6) for source region coefficients are similar in flavor but somewhat smaller in magnitude. Taking the case of Brazil again, the variance of the source region coefficient in that country is 0.888, reflecting a composition of 74.4% people from Europe (SRC of 2.53), 9.1% from South America (SRC of 0.498), and 15.7% from Africa (SRC of 0).
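The back-of-the-envelope calculation for Brazil above is simple arithmetic on the reported numbers and can be reproduced as follows (all values taken from the text):

```python
# Coefficient on the within-country standard deviation of agyears
# (fourth column of Table IX) and the values cited in the text.
coef_sd_agyears = 0.0571
sd_agyears_brazil = 1.976     # millennia; Japan's is essentially zero

implied_gini_gap = coef_sd_agyears * sd_agyears_brazil   # ≈ 0.11

gini_brazil, gini_japan = 0.57, 0.25
share_explained = implied_gini_gap / (gini_brazil - gini_japan)
# share_explained ≈ 0.35: heterogeneity in early development accounts for
# roughly one-third of the Brazil-Japan inequality gap.
```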
The coefficient in the sixth column of Table IX implies that the difference in standard deviation of the source region coefficients between Brazil, on the one hand, and a country such as Japan where the standard deviation of source region coefficients is zero, on the other, would be expected to raise the Gini coefficient by 0.043.

IV.B. Other Measures of Heterogeneity

Our main finding in the last section was that heterogeneity of a country's population's ancestors with respect to measures of early development contributes to current income inequality. We

TABLE IX
HISTORICAL DETERMINANTS OF CURRENT INEQUALITY

Dependent var.: Gini coefficient

                                   (1)        (2)        (3)        (4)        (5)        (6)
Standard deviation of             0.456∗∗∗   0.408∗∗∗
  statehist                      (0.088)    (0.084)
Ancestry-adjusted statehist                 −0.148∗∗∗
                                            (0.036)
Standard deviation of                                   0.0512∗∗∗  0.0571∗∗∗
  agyears                                              (0.0121)   (0.0108)
Ancestry-adjusted agyears                                         −0.0217∗∗∗
                                                                  (0.0052)
Standard deviation of source                                                  0.0207     0.0453∗∗∗
  region coefficients                                                        (0.0166)   (0.0153)
Mean source region coefficient                                                          −0.0743∗∗∗
                                                                                        (0.0089)
Constant                          0.375∗∗∗   0.445∗∗∗   0.381∗∗∗   0.493∗∗∗   0.413∗∗∗   0.498∗∗∗
                                 (0.014)    (0.024)    (0.014)    (0.031)    (0.011)    (0.016)
No. obs.                          135        135        140        140        141        141
R2                                .140       .267       .108       .260       .018       .365

Note. Robust standard errors in parentheses. ∗∗∗ p < .01, ∗∗ p < .05, ∗ p < .1.


now pursue the question of whether heterogeneity in the background of migrants more generally may affect the level of income inequality in a country. If this were the case, then in our previous findings heterogeneity of early development might simply be proxying for more general heterogeneity. To address this issue, we examine two standard measures of heterogeneity as well as two new measures created using the matrix, and we compare the predictive power of these measures to each other and to the measures that incorporate early development. The theory implicit in this exercise is that a country made up of people who are similar in terms of culture, language, religion, skin color, or similar attributes will ceteris paribus have lower inequality. This could come about through a number of different channels. Populations that are similar in the dimensions just listed may be more likely to intermarry and mix socially than populations that are diverse. This mixing could by itself reduce any inequality in the groups' initial endowments, and would also likely be associated with an absence of institutions that magnify ethnic, racial, or economic distinctions. Countries in which people feel a strong sense of kinship with other citizens might also be expected to redistribute income more actively or promote economic mobility. The first heterogeneity measure we use is ethnic fractionalization from Alesina et al. (2003). This is the probability that two randomly selected individuals will belong to different ethnic groups. Alesina et al. find that higher ethnic fractionalization is robustly correlated with poor government performance on a variety of dimensions. We create a second measure of fractionalization using the data in the matrix, which we call "historic fractionalization." This is

1 − Σi wi²,

where wi is the fraction of a country’s ancestors coming from country i. Unlike the ethnic fractionalization index, the historic fractionalization index does not take into account ethnic groups composed of people who came from several source countries, such as African-Americans, but instead differentiates among, for example, Ghanaian, Senegalese, Angolan, and other ancestors of current residents of the United States. As Alesina et al. point out, individual self-identification with ethnic groups can change as

a result of economic, social, or political forces. Thus ethnicity has a significant endogenous component that is absent in the case of historical fractionalization. Ethnic and historical fractionalization are almost uncorrelated (correlation coefficient .15). In particular, a large number of African countries have values of ethnic fractionalization near one but historical fractionalization near zero. The reason is that in these countries there is fractionalization based on tribal affiliation that is unrelated to the movement of people over current international borders over the last 500 years. There are also several countries (Haiti, Jamaica, Argentina, Israel, the United States) that have a high historic fractionalization because they contain immigrants from many different countries, but a low level of ethnic fractionalization because immigrant groups from similar countries are viewed as having a single ethnicity.

The third measure of heterogeneity we use is "cultural diversity," as constructed by Fearon (2003). Fearon's measure is similar in spirit to the ethnic heterogeneity measure described above but goes further in making an additional adjustment for different degrees of dissimilarity (as measured by linguistic distance) among the ethnic groups in a country's population. Desmet, Ortuño-Ortín, and Weber (2009), using a similar measure, find that higher linguistic heterogeneity predicts a lower degree of government income redistribution. Our final measure of heterogeneity is similar in approach to Fearon's, but instead of using the language that a country's residents speak today, we use data on the languages spoken in the countries inhabited by their ancestors in 1500, according to our matrix. Differences in language may directly impede mixing of people from different source countries.
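The historic fractionalization index defined above can be computed directly from a row of the migration matrix. A minimal sketch, with hypothetical ancestry shares:

```python
def historic_fractionalization(shares):
    """1 minus the sum of squared ancestry shares: the probability that two
    randomly drawn residents have ancestors from different source countries."""
    assert abs(sum(shares) - 1.0) < 1e-9, "ancestry shares must sum to one"
    return 1.0 - sum(w * w for w in shares)

# Hypothetical country: 50% of ancestors from source country X,
# 30% from Y, 20% from Z.
frac = historic_fractionalization([0.5, 0.3, 0.2])  # ≈ 0.62

# A country with no post-1500 immigration has a single share of 1.0
# and a historic fractionalization of zero:
assert historic_fractionalization([1.0]) == 0.0
```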
In addition, linguistic closeness may well be proxying for other dimensions of culture (such as religion) that could have similar impacts on the degree of mixing among a country's constituent populations and/or the openness of institutions.26 For these reasons, historical diversity in languages of a country's ancestors may have an impact on

26. Spolaore and Wacziarg (2009) use genetic distance, a measure of the time since two populations shared a common ancestor, as an indicator of cultural similarity between countries. They argue that genetic distance determines the ability of countries to learn from each other, and show that it predicts income gaps among pairs of countries. Ethnic distance and genetic distance are closely related in practice, as shown by Cavalli-Sforza and Cavalli-Sforza (1995).

POST-1500 POPULATION FLOWS AND LONG-RUN GROWTH


inequality that lasts long after the residents of a country have come to speak the same language. We call the variable we create historical linguistic fractionalization. (Our methodology and data are described in Online Appendix C.)

Table X presents regressions of income inequality, as measured by the Gini coefficient, on our various measures of heterogeneity. The first four columns compare the four measures of heterogeneity described above. Cultural (linguistic) diversity is statistically insignificant. By contrast, the two variables that use the matrix to measure historical heterogeneity, historical fractionalization and historical linguistic fractionalization, as well as ethnic fractionalization, enter very significantly with the expected positive sign.

It is notable that in each case the measure of diversity based on historical variation performs better than the corresponding measure based on current variation. For example, distance among the languages spoken by people's ancestors predicts inequalities today far better than does distance among the languages spoken by those people themselves. In the case of variation in language, much of the superior predictive power is driven by Latin America, which in terms of language currently spoken does not look very heterogeneous but does look heterogeneous in terms of historical languages. Patterns of social differentiation that arose during the encounters of people from different continents appear to show persistence even after extensive intermixing and linguistic homogenization. Part of the reason for this could be that linguistic distance between ancestral populations posed barriers to transmission of technologies within countries of a kind similar to those that Spolaore and Wacziarg (2009) posit for genetic distance in international diffusion of technology.
The next four columns of Table X repeat these regressions, controlling for the mean and standard deviation of the state history measures, as in columns (1) and (2) of Table IX.27

27. To save space, we do not report parallel exercises using the standard deviation of agyears. In Section IV.C, we also focus on statehist. Tables II, III, and IV show that statehist and agyears have similar explanatory power, and we accord slight priority to statehist because of its more nuanced tracking of 1,500 years of social history (see our discussion comparing the two measures in Section III).

The somewhat surprising finding here is that variation in terms of state history dominates the other forms of heterogeneity that we examine. None of the other four measures of heterogeneity is statistically significant. Variation in early development among a country's people is far more important than more standard forms

TABLE X
ETHNIC, LINGUISTIC, AND HISTORICAL DETERMINANTS OF CURRENT INEQUALITY

Dependent var.: Gini coefficient

                                  (1)       (2)       (3)       (4)       (5)        (6)        (7)        (8)
Ethnic fractionalization        0.117***                                0.0517
                                (0.037)                                 (0.0355)
Historical fractionalization              0.134***                                 -0.0116
                                          (0.034)                                  (0.0489)
Cultural diversity                                  0.0354                                    0.0126
                                                    (0.0437)                                  (0.0382)
Historical linguistic                                         0.168***                                   0.0460
  fractionalization                                           (0.039)                                    (0.0721)
Standard deviation of statehist                                         0.392***   0.435***  0.401***   0.310*
                                                                        (0.085)    (0.124)   (0.086)    (0.167)
Ancestry-adjusted statehist                                             -0.130***  -0.148*** -0.151***  -0.149***
                                                                        (0.040)    (0.037)   (0.038)    (0.036)
Constant                        0.367***  0.382***  0.407***  0.385***  0.415***   0.446***  0.441***   0.445***
                                (0.019)   (0.014)   (0.017)   (0.013)   (0.033)    (0.025)   (0.030)    (0.024)
No. obs.                        132       135       132       135       132        135       132        135
R2                              .073      .101      .005      .115      .276       .267      .275       .269

Note. Robust standard errors in parentheses. ***p < .01, **p < .05, *p < .1.

1668 QUARTERLY JOURNAL OF ECONOMICS


of heterogeneity (in language or ethnicity) as an explanation for inequality. Similarly, variation in the linguistic background of a country's ancestors, despite its surprising predictive power relative to that of present languages spoken, is not important once one controls for variation in early development.

IV.C. Source Country Early Development as a Determinant of Relative Income

The results in Tables IX and X show that heterogeneity in the historical background of a country's residents is correlated with income inequality today. A number of mechanisms could produce such a correlation. One simple theory is that when people with high and low statehist are mixed together, the high statehist people have some advantage that leads to their percolating up to the top of the income distribution, and then there is enough persistence that their descendants are still there hundreds of years later. A second theory is that situations in which high and low statehist people were mixed together tended to occur in cases of colonization and/or slavery, and that in these circumstances high statehist people were able to create institutions that enabled groups at the top of the income distribution to remain there.

We do not propose to test these theories against each other. Instead we test an auxiliary prediction that follows from either of them: specifically, in countries with a high standard deviation of statehist, it is the ethnic groups that come from high statehist countries that tend to be at the top of the income distribution. Confirming this prediction would give us additional confidence that the link between the standard deviation of statehist and the current level of inequality is not spurious.

To test this prediction, we looked for accounts of socioeconomic heterogeneity by country or region of ancestral origin in the ten countries in our sample having the highest standard deviation of statehist.
It is in countries where statehist is highly variable that we would be most likely to find differences in outcomes among nationality groups with different values of statehist. The countries are listed in Table XI. Not surprisingly, all are former colonies, seven of them in the Americas. Of the latter, three are in Central America, three in South America, and one in the Caribbean. We also list in Table XI the United States, which has the eighteenth highest standard deviation of statehist in the sample and is of particular interest due to its size, economic importance, and good data availability.

TABLE XI
STATEHIST AND RELATIVE INCOME FOR ANCESTRY GROUPS AND CURRENT ETHNIC GROUPS, SELECTED COUNTRIES

 #   Country                Standard dev. of statehist
 1   Fiji                   0.346
 2   Cape Verde             0.301
 3   Guyana                 0.293
 4   Panama                 0.292
 5   Paraguay               0.291
 6   South Africa           0.289
 7   Brazil                 0.288
 8   Trinidad and Tobago    0.284
 9   El Salvador            0.281
10   Nicaragua              0.277
18   United States          0.232

[For each country, the table also reports the Gini coefficient; the component groups by region of ancestry (computed from the matrix) and by current ethnic classification, with their population shares and average statehist values; and each group's relative income, graded high, middle, and low (or high, upper middle, lower middle, and low).]

Note. For detailed information sources, see Online Appendix D. Gini coefficient is from the UN World Income Inequality Database (2007), except Cape Verde: World Development Indicators. Component region group population percentages computed from the matrix. Component ethnic groups based on: Fiji: Household Survey 2002-2003; Cape Verde: Census 1950 (quoted in Lobban [1995, p. 199]); Guyana: Census 1980; Paraguay: Census 2002; Panama: data from Fearon (2003); South Africa: Household Survey 2005; Trinidad and Tobago: Continuous Sample Survey of Population; El Salvador: CIA Factbook; Nicaragua: CIA Factbook; Venezuela: CIA Factbook; United States: U.S. Census, Population Estimates Program, Vintage 2004.
a. Europeans, Chinese. b. One-half East Indian, one-half African. c. 0.35 African, 0.1 S. Asian, 0.05 Indonesian, 0.1 UK, 0.1 Netherlands, 0.1 France, 0.1 Germany, and 0.1 Portugal. d. 0.506 European, 0.239 Amerindian, 0.255 African. e. One-third African, one-third South Asian, one-third European. f. Includes Hawaiian and Alaskan.


For each country in the table we first show the breakdown of the population in terms of origin countries or groups of similar countries, according to the matrix. We then show the weighted average value of statehist for each origin country or group. The next three columns are based on information about the current ethnic breakdown in the country. Ethnic groups as currently identified sometimes correspond to individual origin groups but are often combinations, referred to, for example, as mestizo, creole, colored, or mixed. For each current ethnic group, we then present estimates of average statehist and the relative value of current income, listed as high, middle, and low or as high, upper middle, lower middle, and low (see Online Appendix D for the country-specific sources of this income breakdown).

To estimate statehist for a mixed ethnic group, we use the assumptions underlying the matrix that relate mixed groups to source populations. For example, the group termed "colored" in South Africa is assumed to have half of its ancestors coming in equal proportions from five European countries (England, Portugal, and the Afrikaner source countries Netherlands, France, and Germany) and the other half in unequal proportions from South Africa itself (35%), India (10%), and Indonesia (5%). These assumptions are reported in the region Appendices describing the construction of the matrix.

Leaving details to Online Appendix D, we note immediately that the orderings of statehist values and of socioeconomic status in Table XI have at least some correspondence in every country. For nine of the eleven countries listed—Fiji, Cape Verde, Guyana, Paraguay, Panama, South Africa, Brazil, El Salvador, and Nicaragua—the socioeconomic ordering perfectly dovetails with that of statehist values.
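The mixed-group calculation just described is a weighted average of source-country statehist values. A minimal sketch in Python, using the ancestry weights given in the Table XI note for South Africa's "colored" group; the statehist index values below are illustrative placeholders, not the paper's actual data:

```python
# A mixed ethnic group's average statehist, computed as an ancestry-weighted
# average of source-country statehist values. Weights follow the Table XI note
# for South Africa's "colored" group; the statehist values are hypothetical
# stand-ins for the actual index.

ancestry_weights = {
    "South Africa": 0.35, "India": 0.10, "Indonesia": 0.05,
    "UK": 0.10, "Netherlands": 0.10, "France": 0.10,
    "Germany": 0.10, "Portugal": 0.10,
}

statehist = {  # hypothetical index values on the [0, 1] scale
    "South Africa": 0.00, "India": 0.67, "Indonesia": 0.42,
    "UK": 0.75, "Netherlands": 0.71, "France": 0.86,
    "Germany": 0.76, "Portugal": 0.72,
}

def mixed_group_statehist(weights, index):
    # The weights must exhaust the group's ancestry.
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(w * index[c] for c, w in weights.items())

print(round(mixed_group_statehist(ancestry_weights, statehist), 3))
```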
In two countries—Trinidad and Tobago and the United States—there are discrepancies in the orderings of Asians and “Whites,” with Chinese and (South Asian) Indians having lower incomes than Whites in the first country despite having higher statehist, although Asians in general have higher incomes than Whites in the United States despite lower average statehist. For the United States, there is a further discrepancy in that “Black” Americans have lower average incomes than American Indians and Alaska Natives, despite having somewhat higher average statehist values. Although no statistical significance should be attached to the counts just mentioned, because the categorizations are quite broad and require some judgments to be made, the general pattern clearly supports the expectation. A few patterns are noteworthy. Paraguay and El Salvador are representative of the many Latin American countries in which


the main identifiable groupings, listed in order both of socioeconomic status and of average statehist, are European, mestizo, and Amerindian. Three of the represented countries—Panama, Nicaragua, and Brazil—add a group of largely African descent to this tripartite pattern. In each of the latter countries, the White group remains on top and the Amerindian group on the bottom. The Black group, with higher statehist than the Amerindians,28 is variously found roughly on a par with the mestizos (Nicaragua) or between the mestizo and Amerindian groups (Panama). In Brazil, mestizos, Blacks, and Amerindians are classified as low because data that discriminate more carefully among them are unavailable.

In two of the other represented countries of the Americas—Guyana and Trinidad and Tobago—there are substantial populations of South Asian origin. In Trinidad and Tobago, the socioeconomic positioning of this group is lower than predicted by their average statehist. This result, contradicting our general hypothesis, may be related to the economic hard times on which South Asia itself had fallen by the nineteenth century (mentioned in Section III.E) and the manner in which millions were brought from that region to the Caribbean to work in indentured servitude after Britain outlawed slavery. Consistent with the expectations based on their homelands' state histories, however, people of South Asian ancestry occupy middle or upper-middle socioeconomic positions in Guyana and also in two of the three non-Americas examples, South Africa and Fiji.

Of the two African countries represented in Table XI, Cape Verde began as a Portuguese plantation economy employing slaves brought from the African mainland.
At the time of the country’s independence from Portugal, in 1975, the society was described as being stratified along color lines, with people of darker complexion usually found in the lower class and people of lighter complexion constituting the “bourgeoisie” (Meintel 1984; Lobban 1995). The correlation between complexion and socioeconomic class is consistent with our proposed explanation of the correlation between standard deviation of statehist and the Gini coefficient seen in Tables IX and X. In South Africa, the major population categories are Black African, White, “colored” (with both European and either African, Indian, or Malay ancestors), and Indian 28. This is due to the existence of some states in Africa before 1500 but their absence in the Americas outside of Mexico, Guatemala, and the Andes. Note that the situation is reversed in some cases; for instance, the indigenous people of South Africa have a lower statehist value than those of Mexico and Peru.


or Asian. The socioeconomic standings of these groups today remain heavily influenced by the history of European settlement and subordination of the local population, and partly as a result, the average incomes for those in the four groupings are ordered exactly in accord with the ordering of average statehist.

The only case in Table XI not located in the Americas or Africa is Fiji, whose population is classified by government statisticians as indigenous (55.0%), Indian (41.0%), and other (mainly European and Chinese, 4.0%). Average household incomes per adult in the three groups are ordered identically to average statehist values. Although the reported income gap between the Indian and native Fijian populations is far smaller than the difference in statehist, the government statisticians comment that the incomes of Indo-Fijians are probably undercounted, because much of their income comes from private business activities likely to be underreported.

Turning finally to the United States, the Census Bureau reports a breakdown of the population into White non-Hispanic, Hispanic any race, Black, Asian, American Indian and Alaska Native, and other small categories. These groups' reported median incomes have the same ordering as their average statehist values, with the exception of the higher Asian than White income and the higher American Indian than Black income. The simple correlation between the five statehist values and the five income values (as reported in Online Appendix D), with equal weighting on all observations, is 0.741.

On balance, the evidence from the ten countries with the highest internal variation of statehist and from the eighteenth-ranking United States appears to support the idea that the correlation between within-country differences in income and corresponding differences in the early development indicator statehist at least partially accounts for the predictive power of the standard deviation of statehist in the Table IX and X regressions.
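The 0.741 figure is an ordinary Pearson coefficient computed on group-level statehist and income values. A sketch: the group-average statehist values for the five U.S. groups are taken from the paper's data, while the income figures below are hypothetical stand-ins (the actual inputs are in Online Appendix D):

```python
import statistics

def pearson(xs, ys):
    # Ordinary Pearson correlation coefficient, equal weight per observation.
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Group-average statehist for White non-Hispanic, Asian, Hispanic, Black,
# and American Indian/Alaska Native; the incomes are hypothetical.
statehist_vals = [0.650, 0.640, 0.485, 0.240, 0.000]
median_income = [52, 60, 38, 34, 36]  # hypothetical, in $1,000s

print(round(pearson(statehist_vals, median_income), 3))
```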
Indeed, in this section we have found within countries (as the previous section found between countries) that there is considerable persistence and reproduction of income differences, which appears to reflect social differences dating back up to half a millennium. To be sure, in the majority of cases just discussed, differences in societal capabilities during the era of European expansion played themselves out to a considerable degree in the form of outright dominance of some over others, including appropriation of land, control of government and monopoly of armed force, and involuntary movement of millions of people between macro-regions to meet the conquering population’s labor demands. How persistent early differences


would have proven to be in the absence of the exercise of raw power is a question that goes beyond the scope of our paper. The point for present purposes is that as history has in fact unfolded, such differences have been remarkably persistent.

V. CONCLUSIONS

Conquest, colonialism, migration, slavery, and epidemic disease reshaped the world that existed before the era of European expansion. Over the last 500 years, there have been dramatic movements of people, institutions, cultures, and languages among the world's major regions. These movements clearly have implications for the course of economic development. The existing literature has already made a good start at examining how institutions were transferred between regions and the long-lasting economic effects of these transfers. However, the human side of the story—the relationship between where the ancestors of a country's current population lived and current outcomes—has received relatively little attention, in part due to the absence of suitable data.

In this paper, we introduce a "world migration matrix" to account for international movements of people since the year 1500, and more specifically for the impact that those movements and subsequent population growth have had on the ancestries of countries' populations today. We use the matrix to document some major features of world migration's impacts on ancestry, such as the bimodality of the distribution of indigenous and nonindigenous people by country and the variation in the primary source regions for immigrant-populated countries.

In the second part of the paper, we demonstrate the utility of the migration data by using them to revisit the hypothesis that early development of agrarian societies and their sociopolitical correlates—states—conferred developmental advantages that remain relevant today. We confirm that in a global sample, countries on whose territories agriculture and states developed earlier have higher incomes.
But we conjecture that people who moved from one region to another carried the human capabilities built up in the former area with them. We find that recalculating state history and agriculture measures for each country as weighted averages by place of origin of their people’s ancestors considerably improves the fit of these regressions. We also find that heterogeneity of early development, holding the mean level constant, is associated with higher per capita income. We interpret this finding as indicating


that the effect of spillovers of growth-promoting characteristics between groups having different early development histories more than compensated for any negative effect on growth of higher inequality due to heterogeneity. In Section IV, we show that the heterogeneity of a country’s population in terms of the early development of its ancestors as of 1500 is strongly correlated with income inequality. We also show that heterogeneity with respect to country of ancestry or with respect to the ancestral language does a better job than does current linguistic or ethnic heterogeneity in predicting income inequalities today. As an additional test of the theory that early development conferred lasting advantage, we show that the rankings of ethnic or racial groups within a country’s income distribution are strongly correlated with the average levels of groups’ early development indicators. The overall finding of our paper is that the origins of a country’s population—more specifically, where the ancestors of the current population lived some 500 years ago—matter for economic outcomes today. Having ancestors who lived in places with early agricultural and political development is good for income today, both at the level of country averages and in terms of an individual’s position within a country’s income distribution. Exactly why the origins of the current population matter is a question on which we can only speculate at this point. People who moved across borders brought with them human capital, cultures, genes, institutions, and languages. People who came from areas that developed early evidently brought with them versions of one or more of these things that were conducive to higher income. Future research will have to sort out which ones were the most significant. 
The fact that early development explains an ethnic group's position within a country's income distribution suggests that "good institutions" coming from regions of early development cannot be the whole story, although it does not prove that institutions are not of enormous importance. More research is also needed to understand how early development led to the creation of growth-promoting characteristics (whatever these turn out to be), how these characteristics were transmitted so persistently over the centuries, and the process by which they are transferred between populations of high and low early development. Our hope is that the availability of a compilation of data on the reconfiguration of country populations since 1500 will make it easier to address such issues in future research.


APPENDIX I: WORLD MIGRATION MATRIX, 1500–2000

The goal of the matrix is to identify where the ancestors of the permanent residents of today's countries were living in 1500 CE. In this abbreviated description, we address some major conceptual issues relevant to the construction of the matrix and identify some of the main sources of information consulted.

The migration matrix is a table in which both row and column headings are the names of currently existing countries, and cell entries are estimates of the proportion of the ancestors of those now permanently residing in the country identified in the row heading who lived in the country identified by the column heading in 1500. An ancestor is treated as having lived in what is now, say, Indonesia, if the place he or she then resided in is within the borders of Indonesia today. When ancestors could be identified only as part of an ethnic group that lived in a region now straddling the borders of two or more present-day countries, we try to estimate the proportion of that group living in each country and then allocate ancestry accordingly. For example, if a given ancestor is known to have been a "Gypsy" (Roma) but we have no information on which country he or she lived in during the year 1500, we apply an assumption (see Online Appendix B) regarding the proportion of Gypsies who lived in Greece, Romania, Turkey, etc., as of 1500. The Gypsy example is one of many illustrating the fact that most of our data sources organize their information around ethnic groups rather than territories of origin. Although the use of information on ethnicity was unavoidable in the process of constructing the matrix, it was not a focus of attention in its own right.

In cases in which ancestors are known to have migrated more than once between 1500 and 2000, countries of intervening residence are not indicated in the matrix.
For example, an Israeli whose parents lived in Argentina but whose grandparents arrived in Argentina from Ukraine is listed as having had ancestors in Ukraine.

29. This is an abbreviated version of Main Appendix 1.1 in Online Appendix B, which is linked to region summaries and the data set itself. All can be found at http://www.econ.brown.edu/fac/Louis Putterman/.

People of mixed ancestry are common in many countries—for example, people of mixed Amerindian and Spanish ancestry in Mexico. Such individuals are treated as having a certain proportion of their ancestry deriving from each source country. When members of such groups are reported to account for 30% or more


of a country’s population, we searched the specialized scientific literature on genetic admixture for the best available estimates. For smaller mixed groups we base estimates on the stated or implicit assumptions of conventional sources or on extrapolation from similar countries in which we had genetic estimates. Our assumed breakdowns of mixed populations for each country are discussed in the region Appendices in Online Appendix B. Because our interest is in the possible impact of its people’s origins on each country’s economic performance, we try to identify the origins of long-term residents only, thus leaving out guest or temporary workers. Very few data are available about the duration of stay of most temporary workers, so we made educated guesses as to what portion of the originally temporary residents have become permanent, understood as having been in the country at least ten years as of 2000. The matrix includes entries on all countries existing in 2000 that had populations of one-half million or more. A country is included as a source country for ancestors of the people of another country if at least 0.5% of all ancestors alive in 1500 are estimated to have lived there. Some entries smaller than 0.5% are found in the matrix, but these occur as a result of special decompositions applied to populations that our sources identify by ethnic group rather than by country of origin—for example, Gypsies, Africans (descended from slaves, especially in the Americas), and Ashkenazi Jews. The region Appendices in Online Appendix B detail the method of assigning fractions of these populations to individual source countries. Some of the more important sources from which data were drawn for the construction of the matrix are listed below. 
See Online Appendix B and its regional Appendices for other sources and details:

Columbia Encyclopedia (online edition)
CIA World Factbook
Countriesquest.com
Encyclopædia Britannica (online edition)
Everyculture.com
Library of Congress, Federal Research Division, Country Studies
MSN Encarta Encyclopedia (online edition)
Nationsencyclopedia.com
World Christian Database (original source for WCE)
World Christian Encyclopedia
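The ancestry adjustment that the matrix enables, used for the ancestry-adjusted regressors in Appendix II and in the paper's income regressions, is a weighted average: a country's adjusted value of a measure is its migration-matrix row (shares of its year-1500 ancestors by source country) dotted with the source countries' own values. A minimal sketch; the shares and index values below are hypothetical, not entries from the actual matrix:

```python
# Ancestry-adjusted statehist for one (hypothetical) country: the migration-
# matrix row gives ancestry shares by source country as of 1500; the adjusted
# value is the share-weighted average of source-country statehist.

migration_row = {  # hypothetical ancestry shares; must sum to 1
    "Spain": 0.25, "Portugal": 0.05, "Nigeria": 0.10, "Brazil": 0.60,
}
statehist = {  # hypothetical statehist index values for the source countries
    "Spain": 0.96, "Portugal": 0.72, "Nigeria": 0.40, "Brazil": 0.00,
}

assert abs(sum(migration_row.values()) - 1.0) < 1e-9
adjusted = sum(share * statehist[src] for src, share in migration_row.items())
print(round(adjusted, 3))
```

The same dot product, applied row by row across the full matrix, yields the ancestry-adjusted series for every country.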

APPENDIX II: HORSE RACE REGRESSIONS USING DIFFERENT MEASURES OF EARLY DEVELOPMENT

[Columns (1)–(15) regress ln(GDP per capita 2000) on different combinations of six ancestry-adjusted measures of early development: statehist, agyears, geo conditions, bio conditions, Technology index AD 0, and Technology index AD 1500. Each regression has 92 observations and an estimated constant of 8.64***; R2 values range from .255 to .636.]

Notes. All right hand-side variables are ancestry-adjusted and normalized to have a mean of zero and a standard deviation of one. Robust standard errors in parentheses. ***p < .01, **p < .05, *p < .1.


BROWN UNIVERSITY
BROWN UNIVERSITY AND NATIONAL BUREAU OF ECONOMIC RESEARCH

REFERENCES

Acemoglu, Daron, Simon Johnson, and James Robinson, "The Colonial Origins of Comparative Development: An Empirical Investigation," American Economic Review, 91 (2001), 1369–1401.
——, "Reversal of Fortune: Geography and Institutions in the Making of the Modern World Income Distribution," Quarterly Journal of Economics, 117 (2002), 1231–1294.
Alesina, Alberto, Arnaud Devleeschauwer, William Easterly, Sergio Kurlat, and Romain Wacziarg, "Fractionalization," Journal of Economic Growth, 8 (2003), 155–194.
Ashraf, Quamrul, and Oded Galor, "Human Genetic Diversity and Comparative Economic Development," Brown University Working Paper, 2008.
Barro, Robert J., "Inequality and Growth Revisited," Asian Development Bank Working Paper Series on Regional Economic Integration No. 11, January 2008.
Bockstette, Valerie, Areendam Chanda, and Louis Putterman, "States and Markets: The Advantage of an Early Start," Journal of Economic Growth, 7 (2002), 347–369.
Cavalli-Sforza, Luigi L., and Francesco Cavalli-Sforza, The Great Human Diasporas (Reading, MA: Addison Wesley Publishing Co., 1995).
Chanda, Areendam, and Louis Putterman, "State Effectiveness, Economic Growth, and the Age of States," in States and Development: Historical Antecedents of Stagnation and Advance, Matthew Lange and Dietrich Rueschemeyer, eds. (Basingstoke, UK: Palgrave MacMillan, 2005).
——, "Early Starts, Reversals and Catch-Up in the Process of Economic Development," Scandinavian Journal of Economics, 109 (2007), 387–413.
Comin, Diego, William Easterly, and Erick Gong, "Was the Wealth of Nations Determined in 1000 BC?" NBER Working Paper No. W12657, 2006.
——, "Was the Wealth of Nations Determined in 1000 BC?" American Economic Journal: Macroeconomics, 2 (2010), 65–97.
Correlates of War Project, Direct Contiguity Data, 1816–2000, Version 3.0 (http://correlatesofwar.org, 2000).
˜ Desmet, Klaus, Ignacio Ortuno-Ort´ ın, and Shlomo Weber, “Linguistic Diversity and Redistribution,” Journal of the European Economic Association, 6 (2009), 1291–1318. Diamond, Jared, Guns, Germs and Steel (New York: Norton, 1997). Easterly, William, “Inequality Does Cause Underdevelopment: Insights from a New Instrument,” Journal of Development Economics, 84 (2007), 755–776. Engerman, Stanley, and Kenneth Sokoloff, “Factor Endowments, Inequality, and Paths of Development among New World Economies,” Economia, 3 (2002), 41–109. Fearon, James D., “Ethnic Structure and Cultural Diversity by Country,” Journal of Economic Growth, 8 (2003), 195–222. Galor, Oded, and Omer Moav, “The Neolithic Origins of Contemporary Variation in Life Expectancy,” Brown University Department of Economics Working Paper 2007-14, 2007. Glaeser, Edward L., Rafael La Porta, Florencio Lopez-de-Silanes, and Andrei Shleifer, “Do Institutions Cause Growth?” Journal of Economic Growth, 9 (2004), 271–303. Guiso, Luigi, Paola Sapienza, and Luigi Zingales, “Does Culture Affect Economic Outcomes?” Journal of Economic Perspectives, 20 (2006), 23–48. Hall, Robert, and Charles Jones, “Why Do Some Countries Produce So Much More Output Than Others?” Quarterly Journal of Economics, 114 (1999), 83– 116. Hibbs, Douglas A., and Ola Olsson, “Geography, Biogeography, and Why Some Countries Are Rich and Others Are Poor,” Proceedings of the National Academy of Sciences, 101 (2004), 3715–3720.

1682

QUARTERLY JOURNAL OF ECONOMICS

Johnson, Allen, and Timothy Earle, The Evolution of Human Societies: From Foraging Groups to Agrarian State (Stanford, CA: Stanford University Press, 1987). Knack, Stephen, and Philip Keefer, “Does Social Capital Have an Economic Payoff? A Cross-Country Investigation,” Quarterly Journal of Economics, 112 (1997), 1251–1288. La Porta, Rafael, Florencio Lopez-de-Silanes, Andrei Shleifer, and Robert Vishny, “Law and Finance,” Journal of Political Economy, 106 (1998), 1113–1155. Lobban, Richard, Cape Verde: Crioulo Colony to Independent Nation (Boulder, CO: Westview Press, 1995). MacNeish, Richard, The Origins of Agriculture and Settled Life (Norman: University of Oklahoma Press, 1991). McEvedy, Colin, and Richard Jones, Atlas of World Population History (New York: Viking Press, 1978). Meintel, Deirdre, “Race, Culture and Portuguese Colonialism in Cabo Verde,” Syracuse University, Foreign and Comparative Studies, African Series, No. 41, 1984. Nunn, Nathan, “The Long Term Effects of Africa’s Slave Trades,” Quarterly Journal of Economics, 123 (2008), 139–176. Olsson, Ola, and Douglas A. Hibbs, Jr., “Biogeography and Long-Run Economic Development,” European Economic Review, 49 (2005), 909–938. Putterman, Louis, “State Antiquity Index Version 3” (http://www.econ.brown.edu/ fac/Louis Putterman/, 2004). Putterman, Louis, with Cary Anne Trainor, “Agricultural Transition Year Country Data Set,” (http://www.econ.brown.edu/fac/Louis Putterman, 2006). Smith, Bruce, The Emergence of Agriculture (New York: Scientific American Library, 1995). Sokoloff, Kenneth, and Stanley Engerman, “Institutions, Factor Endowments and Paths to Development in the New World,” Journal of Economic Perspectives, 14 (2002), 217–232. Spolaore, Enrico, and Romain Wacziarg, “The Diffusion of Development,” Quarterly Journal of Economics,124 (2009), 469–527. Tabellini, Guido, “Culture and Institutions: Economic Development in the Regions of Europe,” CESifo Working Paper No. 1492, 2005.

COMPETITION AND BIAS∗

HARRISON HONG AND MARCIN KACPERCZYK

We attempt to measure the effect of competition on bias in the context of analyst earnings forecasts, which are known to be excessively optimistic because of conflicts of interest. Our natural experiment for competition is mergers of brokerage houses, which result in the firing of analysts because of redundancy (e.g., one of the two oil stock analysts is let go) and other reasons such as culture clash. We use this decrease in analyst coverage for stocks covered by both merging houses before the merger (the treatment sample) to measure the causal effect of competition on bias. We find that the treatment sample simultaneously experiences a decrease in analyst coverage and an increase in optimism bias the year after the merger relative to a control group of stocks, consistent with competition reducing bias. The implied economic effect from our natural experiment is significantly larger than estimates from OLS regressions that do not correct for the endogeneity of coverage. This effect is much more significant for stocks with little initial analyst coverage or competition.

I. INTRODUCTION

We study the effect of competition on reporting bias. Efficient outcomes in many markets depend on individuals having accurate information. Yet the suppliers who report this information often have incentives other than accuracy. Two notable examples are (1) media outlets that trade off the profits from providing informative news to consumers and voters against printing information favorable to companies or political clients, and (2) credit rating agencies such as Moody’s and Standard & Poor’s (S&P’s) that get paid by the corporations they are supposed to evaluate. Will such conflicts of interest in these important economic and political markets leave consumers, voters, and investors with poor information? Or can the market discipline these supply-side incentives and limit the distortions? The effect on bias of competition from having more suppliers is an important part of answering these questions.

The theoretical literature on the economics of reporting bias yields ambiguous answers about the potentially disciplining role of competition. One strand of this literature argues that competition among suppliers (e.g., more newspapers or rating agencies) makes it more difficult for a firm (e.g., a political client or a bank) to suppress information (Besley and Prat 2006; Gentzkow and Shapiro 2006). Intuitively, the more suppliers of information cover the firm, the more costly it is for the firm to keep unfavorable news suppressed. Another perspective, the catering view, is that competition need not reduce and may even increase bias if the end users (voters, consumers, or investors) want to hear reports that conform to their priors (e.g., Mullainathan and Shleifer [2005]).1

In contrast to the theoretical side, empirical work on reporting bias is limited. There is some evidence that competitive pressures have helped discipline the media market. For example, a number of case studies of political scandals suggest that competition helps subvert attempted suppression of news (Gentzkow and Shapiro 2008). In addition, Gentzkow, Glaeser, and Goldin (2006), in their study of U.S. newspapers in the nineteenth century, point out that the papers of the time were biased tools funded by political clients. Importantly, they show that this situation changed in the late nineteenth century, when cheap newsprint engendered greater competition, which increased the rewards to objective reporting. These case studies notwithstanding, we have virtually no systematic evidence on the relationship between competition and bias.

In this paper, we attempt to measure the effect of competition on bias in the context of the market for security analyst earnings forecasts. This setting is an ideal one for studying this issue for several reasons. First, the trade-offs faced by security analysts in issuing forecasts are similar to those faced by suppliers of reports in other markets such as media or credit ratings.

∗ We thank three anonymous referees, Robert Barro (the editor), Edward Glaeser (the editor), Rick Green, Paul Healy, Jeff Kubik, Kai Li, Marco Ottaviani, Daniel Paravisini, Amit Seru, Kent Womack, Eric Zitzewitz, and seminar participants at Copenhagen Business School, Dartmouth, HBS, INSEAD, Michigan State, the Norwegian School of Management, Princeton, SMU, Texas, UBC, the UBC Summer Conference, the AFA 2009 Meetings, and the NBER Behavioral Finance Conference for a number of helpful suggestions. © 2010 by the President and Fellows of Harvard College and the Massachusetts Institute of Technology. The Quarterly Journal of Economics, November 2010.
Namely, analysts’ earnings forecasts are optimistically biased because of conflicts of interest: analysts weigh the desire to be objective, producing the accurate forecasts that investors want, against the prospect of payoffs from the companies they cover, which want positive reports.2 Hence,

1. There is a related literature on professional strategic forecasting in which forecasters, competing in a rank-order contest based on accuracy, differentiate their forecasts strategically à la Hotelling in equilibrium (Laster, Bennett, and Geoum 1999; Ottaviani and Sørensen 2005). These models are good at producing dispersion in forecasts due to convex payoffs (associated with publicity, etc.), and competition can lead to more dispersed forecasts. However, they do not necessarily lead to bias on average. 2. Companies naturally like analysts to be optimistic about their stocks, particularly when they are making initial or seasoned equity offerings. They would


the lessons regarding competition and bias obtained from our setting can be applied more broadly to other markets. Second, at least two of the economic channels laid out in the theoretical literature through which competition disciplines supply-side incentive distortions are likely to apply to the market for analyst forecasts. Both point in the direction that an increase in competition among analysts should lead to less bias.

The first channel is what Gentzkow and Shapiro (2008) term the independence rationale: competition means a greater diversity of preferences among suppliers of information (analysts in our context) and hence a greater likelihood of drawing at least one independent supplier or analyst whose preference is such that he or she cannot be bought by the firm. This supplier’s independence can have a disciplining effect on other suppliers. For instance, if an independent analyst makes a piece of bad news public, then the other analysts will be forced to do so as well. A recent example that illustrates this independence-rationale mechanism at work in the market for analyst forecasts is the negative report produced by Meredith Whitney, a then-unknown analyst at a lower-tier brokerage house named Oppenheimer, on Citibank on October 31, 2007. Citibank had a large market capitalization and was covered by more than twenty analysts. One can view Whitney as the draw of an independent analyst from among many. Whitney argued in the report that Citibank might go bankrupt as a result of its subprime mortgage holdings. Her report is now widely acknowledged as forcing the release of the pent-up bad news regarding financial firms, which had been unreported by other analysts. Her report had a disciplining effect on her peers at other brokerage houses, as they subsequently also reported on the same negative news about Citibank.
A similar, though less famous, example is Ivy Zelman, a housing analyst for Credit Suisse, who issued negative reports on the housing industry.3

not do business with an investment bank if the analyst were not positive about the stock (e.g., Brown, Foster, and Noreen [1985], Stickel [1990], Abarbanell [1991], Dreman and Berry [1995], and Chopra [1998]). A number of papers find that an analyst from a brokerage house that has an underwriting relationship with a stock tends to issue more positive predictions than analysts from nonaffiliated houses (e.g., Dugar and Nathan [1995], Lin and McNichols [1998], Dechow, Hutton, and Sloan [1999], and Michaely and Womack [1999]). Importantly, analysts’ career outcomes depend on both relative accuracy and optimism bias (e.g., Hong and Kubik [2003] and Fang and Yasuda [2009]). 3. Anecdotes also suggest that independent analysts who “blow the whistle” tend to come from lower-tier brokerage houses that have lower investment-banking business revenues.


A second channel whereby competition limits bias, which also holds in this market, is that a firm’s cost of influence increases with the number of suppliers or analysts. In the model of Besley and Prat (2006), if N analysts are all suppressing information in equilibrium and issuing optimistically biased reports, a single deviator who releases a bad forecast gets the same payoff as a monopolist. The bribe that must be paid to each analyst to suppress information is thus independent of N, so the total bribe is increasing in N. Increasing the number of analysts makes it more difficult to suppress information for the same reason that it makes it more likely that tacit collusion will break down. An implication of this mechanism is that the rewards are disproportionately high for the deviator when the N − 1 other analysts are suppressing information. The Whitney and Zelman examples also seem to bear out this implication. The deviators Whitney and Zelman became famous because few other negative reports were issued by their peers. Their rewards were by all accounts disproportionate. Beyond being offered jobs at better and higher-paying brokerage houses, they ended up famous enough to start their own advisory businesses with special clients and revenue streams.

A third reason that the market for analyst forecasts is ideal for studying the relationship between competition and bias is that it offers plentiful micro data on analysts and their performance, which are not easily accessible in other markets, as well as opportunities to exploit natural experiments for identification. In particular, we identify the causal effect of competition or coverage on bias by using mergers of brokerage houses as a natural experiment. When brokerage houses merge, they typically fire analysts because of redundancy and potentially lose additional analysts for other reasons, including culture clash and merger turmoil (e.g., Wu and Zang [2009]).
For example, if the merging houses each had one analyst covering an oil stock, they would only keep one of the two oil stock analysts after the merger. We use this decrease in analyst coverage for stocks covered by both merging houses before the merger (the treatment sample) to measure the causal effect of competition on bias. During the period from 1980 to 2005, we identify fifteen mergers of brokerage houses (which took place throughout this twenty-five-year period) that affected 948 stocks (stocks covered by both merging houses) or 1,656 stock observations. We measure the change in analyst coverage and mean bias for the stocks in the treatment sample from one year before the merger to one year


after relative to a control group of stocks. The control group consists of stocks with the same market capitalization, market-to-book ratio, past return, and analyst coverage features before the merger as the treatment sample. The exclusion restriction is that the change in the mean bias of the treatment sample across the merger date is not due to any factor other than the merger leading to a decrease in analyst coverage of those stocks. We think this is a good experiment because the merger-related departures of analysts due to redundancy or culture clash ought not, a priori, to be related to anything having to do with the biases of the forecasts of the other analysts, particularly those working for other houses.

As a benchmark, we begin with simple OLS regressions of the average bias of earnings forecasts on analyst coverage.4 Henceforth, we refer to the average or median bias of a stock simply as the bias of that stock. We restrict ourselves to stocks in the top 25% of the market capitalization distribution to facilitate comparison with the results from our natural experiment. The mean analyst coverage of these stocks is about 21 analysts, and the standard deviation across stocks is about 10 analysts. Depending on the specifications we use, the economic effects are small to none. The largest effect we find is that a decrease of one analyst leads to an increase in bias of 0.0002 (2 basis points). The bias for a typical stock is about 0.03 (3%), with a standard deviation across stocks of about 0.03 (3%). Hence, these estimates obtained from cross-sectional regressions suggest an increase in bias of anywhere from none to a modest sixty basis points as a fraction of the cross-sectional standard deviation of bias as we decrease coverage by one analyst. Of course, these regressions are difficult to interpret because of the endogeneity of analyst coverage.
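The back-of-the-envelope above can be reproduced directly from the rounded figures quoted in the text (both numbers are the paper's own; only the arithmetic is ours):

```python
# Back-of-the-envelope: the largest OLS estimate implies that losing one
# analyst raises bias by 0.0002 (2 basis points of the price-scaled bias).
coef_per_analyst = 0.0002   # increase in bias per one-analyst decrease in coverage
bias_sd = 0.03              # cross-sectional standard deviation of bias (3%)

# Express the one-analyst effect as a fraction of the bias standard deviation.
effect_in_sd = coef_per_analyst / bias_sd
print(f"{effect_in_sd:.4f} of a standard deviation, "
      f"i.e. roughly {effect_in_sd * 1e4:.0f} basis points")
```

This works out to roughly 0.0067 of a standard deviation, in the ballpark of the "about sixty basis points" the text reports once rounding is taken into account.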
Existing studies suggest a selection bias in coverage in that analysts tend not to cover stocks that they do not issue positive forecasts about (e.g., McNichols and O’Brien [1997]). In this instance, we would then expect to find a larger causal effect from competition if we could randomly allocate analysts to different stocks. Thus, we evaluate our natural experiment by first verifying its premise regarding the change in analyst

4. Lim (2001) also tries to explain analyst bias by arguing that bias helps analysts gain access to a firm and hence provide more accurate forecasts, and he shows a negative correlation between bias and coverage in the cross section. His main variable of interest, however, is stock price volatility, whereas coverage is just another proxy for firm size. As we show, OLS estimates of bias on coverage are specification dependent.


coverage for the treatment sample from the year before the merger to the year after.5 We find, as expected, that the average drop in coverage for the treatment sample (using the most conservative control group) is around one analyst, with a t-statistic of around 5.7. The effect is economically and statistically significant in the direction predicted. We then find that the treatment sample simultaneously experiences an increase in optimism bias the year after the merger relative to a control group of stocks. A conservative estimate is that the mean optimism bias increases by fifteen basis points as a result of reducing coverage by one analyst. As we mentioned earlier, the sample for the natural experiment is similar to that of the OLS by construction. This is a sizable difference and suggests that the OLS estimates are biased downward. Importantly, we find the same results when we look at the change in bias for analysts covering the same stocks but not employed by the merging firms, so our effect is not due to the selection of an optimistic analyst by the merging firms. We also find that this competition effect is significantly more pronounced for stocks with smaller analyst coverage (less than or equal to five analysts). As we discuss below, these key additional results of our paper are consistent with the competition mechanisms articulated above.

We then conduct a number of analyses to verify the validity of our natural experiment, including showing that the mergers change bias rather than actual earnings or other firm characteristics, that the mergers do not predict pretrends, and, using nonparametric analysis, that there is a shift in the distribution of forecasts. We also conduct a number of robustness exercises, including using an alternative regression framework that controls for brokerage-house fixed effects as well as firm fixed effects. Our results survive all of these additional analyses.
Finally, we examine some auxiliary implications of the competitive pressure view, including how implicit incentives for bias vary with analyst coverage.

The rest of the paper proceeds as follows. We describe the data in Section II and estimate the OLS regressions of bias on analyst

5. We expect these stocks to experience a decrease in coverage because one of the redundant analysts is typically let go. The exact number depends on a couple of factors. On one hand, the fired analyst might get a job with another firm and cover the same stock, in which case the decrease in coverage might be less than one. On the other hand, a firm might lose or fire both analysts for reasons of culture clash or merger turmoil. In this case, if neither analyst is rehired by another firm, we would see a decrease in coverage of two analysts. What the magnitude turns out to be is an empirical question.


coverage in Section III. In Section IV, we provide background and statistics on the mergers. We discuss the methodology we use to measure the effect of the mergers on analyst coverage and bias in Section V and describe the results in Section VI. We conclude in Section VII.

II. DATA

Our data on security analysts come from the Institutional Brokers Estimates System (IBES) database. Our full sample covers the period 1980–2005. We focus on annual earnings forecasts because these are the most commonly issued forecasts. For each year, we take each analyst’s most recent forecast of annual earnings. As a result, we have for each year one forecast issued by each analyst covering a stock. Our data on U.S. firms come from the Center for Research in Security Prices (CRSP) and COMPUSTAT. From CRSP, we obtain monthly closing stock prices, monthly shares outstanding, and daily and monthly stock returns for NYSE, AMEX, and NASDAQ stocks over the period 1980–2005. From COMPUSTAT, we obtain annual information on corporate earnings, book value of equity, and book value of assets during the same period.6 To be included in our sample, a firm must have the requisite financial data from both CRSP and COMPUSTAT. We follow other studies in focusing on companies’ ordinary shares, that is, companies with CRSP share codes of 10 or 11.

We use the following variables. Analyst forecast bias is the difference between an analyst’s forecast and the actual earnings per share (EPS), divided by the previous year’s stock price. Given that the values of EPS reported by IBES tend to suffer from data errors, we follow the literature and use EPS from COMPUSTAT. Because our analysis is conducted at the stock level, we further aggregate forecast biases and consider the consensus bias, expressed as the mean or median bias among all analysts covering a particular stock, which is denoted by BIASit. This is our main dependent variable of interest.
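As a sketch of this construction (hypothetical data and column names; the paper's actual IBES processing is more involved), the analyst-level bias and the stock-level consensus can be computed as:

```python
import pandas as pd

# Hypothetical analyst-level records: each row is one analyst's most recent
# annual EPS forecast for a stock (column names are our own, not IBES's).
forecasts = pd.DataFrame({
    "stock":        ["A", "A", "A", "B", "B"],
    "forecast_eps": [1.10, 1.05, 1.30, 2.00, 2.10],
    "actual_eps":   [1.00, 1.00, 1.00, 2.05, 2.05],
    "lag_price":    [20.0, 20.0, 20.0, 50.0, 50.0],  # previous year's stock price
})

# Analyst-level bias: (forecast - actual EPS) / previous year's stock price.
forecasts["bias"] = (
    (forecasts["forecast_eps"] - forecasts["actual_eps"]) / forecasts["lag_price"]
)

# Consensus (stock-level) bias: mean or median across covering analysts;
# the analyst count doubles as the coverage measure.
consensus = (
    forecasts.groupby("stock")["bias"]
    .agg(["mean", "median", "count"])
    .rename(columns={"count": "coverage"})
)
print(consensus)
```

For stock A above, the mean bias is 0.0075 and the median 0.0050; positive values correspond to optimistic forecasts.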
We also utilize a number of other independent variables. COVERAGEit is the number of analysts covering stock i in year t. LNSIZEit is the natural logarithm of firm i’s market capitalization 6. Our results are similar if we use IBES earnings numbers as opposed to those from COMPUSTAT.


(price times shares outstanding) at the end of year t. SIGMAit is the variance of daily (simple, raw) returns of stock i during year t. RETANNit is the average monthly return on stock i in year t. LNBMit is the natural logarithm of firm i’s book value divided by its market cap at the end of year t. ROEit is firm i’s return on equity in year t, calculated as the ratio of earnings during year t to the book value of equity. Earnings are calculated as income before extraordinary items available to common stockholders (Item 237), plus deferred taxes from the income statement (Item 50), plus investment tax credit (Item 51). To measure the volatility of ROE (VOLROEit), we estimate an AR(1) model for each stock’s ROE using the past ten-year series of the company’s valid annual ROEs; we calculate VOLROEit as the variance of the residuals from this regression. PROFITit is firm profitability, defined as operating income over book value of assets. SPit is an indicator variable equal to one if the stock is included in the S&P 500 index and zero otherwise. As in earlier studies, stocks that do not appear in IBES are assumed to have no analyst estimates. Following earlier work, we exclude stock-year observations in which the stock price is less than five dollars or the mean bias is in the outer tails (the 2.5% left and right tails). We also exclude analyst forecasts whose absolute difference from actual EPS exceeds ten dollars, on the basis that such values likely reflect coding errors.

III. OLS RESULTS

We begin by estimating a pooled OLS regression of the mean and median BIAS on lagged values of COVERAGE and a set of standard control variables, which include LNSIZE, SIGMA, RETANN, LNBM, VOLROE, and PROFIT. We additionally include an S&P 500 index indicator variable (SP500) as well as time and three-digit SIC industry fixed effects, and potentially firm and brokerage-house fixed effects. Standard errors are clustered at the industry level.
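The VOLROE construction described above (an AR(1) fit to a ten-year ROE series, with VOLROE taken as the residual variance) can be sketched as follows, using a synthetic ROE series and a hand-rolled OLS fit rather than the paper's actual data or software:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic ten-year annual ROE series for one firm (illustrative numbers only).
rho_true, const_true = 0.6, 0.04
roe = [0.10]
for _ in range(9):
    roe.append(const_true + rho_true * roe[-1] + rng.normal(0.0, 0.02))
roe = np.array(roe)

# Fit ROE_t = a + b * ROE_{t-1} + e_t by OLS on the nine adjacent pairs.
y, x = roe[1:], roe[:-1]
X = np.column_stack([np.ones_like(x), x])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ coef

# VOLROE is the variance of the AR(1) residuals.
volroe = resid.var()
print(f"AR(1) intercept and slope: {coef}, VOLROE: {volroe:.6f}")
```

In the paper this is done stock by stock over each firm's valid annual ROEs; the sketch shows only the mechanics for a single series.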
These regressions are based on a sample of stocks in the top 25% of the market capitalization distribution. We restrict ourselves to this sample to facilitate a comparison with the results from our natural experiment.7 The summary statistics for these regressions (time-series averages of cross-sectional means,

7. Qualitatively, the same results hold even when using the entire universe. We have replicated these results, which are consistent with those in Lim (2001).


TABLE I
SUMMARY STATISTICS ON THE IBES SAMPLE

                          Cross-sectional   Cross-sectional   Cross-sectional
Variable                      mean (1)        median (2)       st. dev. (3)
COVERAGEi,t                    21.45             21                9.57
Mean BIASi,t (%)                2.70              2.10             3.10
Median BIASi,t (%)              2.64              2.01             3.17
Mean FERRORi,t (%)              3.31              2.39             2.93
Median FERRORi,t (%)            3.24              2.26             3.00
FDISPi,t (%)                    0.75              0.41             1.02
LNSIZEi,t                       8.38              8.38             1.62
SIGMAi,t (%)                   40.72             35.04            21.03
RETANNi,t (%)                   1.73              1.49             4.04
LNBMi,t                        −1.02             −0.92             0.88
VOLROEi,t (%)                  26.53             10.43            19.79
PROFITi,t (%)                  15.48             15.29             9.38

Notes. We consider a sample of stocks covered by IBES during the period 1980–2005 with valid annual earnings forecast records. COVERAGEit is a measure of analyst coverage, defined as the number of analysts covering firm i at the end of year t. Analyst forecast bias (BIASjt) is the difference between the forecast of analyst j in year t and the actual EPS, expressed as a percentage of the previous year’s stock price. The consensus bias is expressed as a mean or median bias among all analysts covering a particular stock. Analyst forecast error (FERRORjt) is the absolute difference between the forecast of analyst j in year t and the actual EPS, expressed as a percentage of the previous year’s stock price. The forecast error is expressed as a mean or median among all analysts covering a particular stock. FDISPit is analyst forecast dispersion, defined as the standard deviation of all analyst forecasts covering firm i in year t. LNSIZEit is the natural logarithm of firm i’s market capitalization (price times shares outstanding) at the end of year t. SIGMAit is the variance of daily (simple, raw) returns of stock i in year t. RETANNit is the average monthly return on stock i in year t. LNBMit is the natural logarithm of firm i’s book value divided by its market cap at the end of year t. ROEit is firm i’s return on equity in year t, calculated as the ratio of earnings in year t over the book value of equity. To measure the volatility of ROE (VOLROE), we estimate an AR(1) model for each stock’s ROE using a 10-year series of the company’s valid annual ROEs; we calculate VOLROE as the variance of the residuals from this regression. PROFITit is the profitability of company i at the end of year t, defined as operating income over book value of assets. We exclude observations that fall to the left of the 25th percentile of the size distribution, observations with stock prices lower than $5, and those for which the absolute difference between the forecast value and the true earnings exceeds $10.
medians, and standard deviations) are reported in Table I. The cross-sectional mean (median) analyst coverage of these stocks is about 21 (21) analysts, and the standard deviation across stocks is about 10 analysts. The cross-sectional mean (median) bias is 0.027 (0.021), with a standard deviation of around 0.03.

The regression results are presented in Table II. We first present the results for the mean bias with just time and industry fixed effects in column (1), with industry, time, and brokerage-house fixed effects in column (2), and additionally with firm fixed effects in column (3). In column (1), the coefficient on COVERAGE is −0.0002 and is statistically significant at the 5% level. In column (2), the coefficient is also −0.0002 and is still statistically significant at the 5% level.


TABLE II
REGRESSION OF CONSENSUS FORECAST BIAS ON COMPANY CHARACTERISTICS

                                  Mean BIAS                            Median BIAS
Variables/model        (1)          (2)          (3)          (4)          (5)          (6)
COVERAGEi,t−1       −0.0002∗∗    −0.0002∗∗     0.0001      −0.0002∗∗∗   −0.0002∗∗     0.0001
                    (0.0001)     (0.0001)     (0.0001)     (0.0001)     (0.0001)     (0.0001)
LNSIZEi,t−1          0.0028∗∗∗    0.0026∗∗∗    0.0044∗∗∗    0.0028∗∗∗    0.0026∗∗∗    0.0042∗∗∗
                    (0.0009)     (0.0009)     (0.0014)     (0.0008)     (0.0008)     (0.0014)
SIGMAi,t−1          −0.0093      −0.0031       0.0108∗∗∗   −0.0095      −0.0061       0.0108∗∗∗
                    (0.0062)     (0.0058)     (0.0039)     (0.0061)     (0.0059)     (0.0041)
RETANNi,t−1         −0.1000∗∗∗   −0.0986∗∗∗   −0.0368∗∗∗   −0.0986∗∗∗   −0.0995∗∗∗   −0.0367∗∗∗
                    (0.0199)     (0.0199)     (0.0133)     (0.0192)     (0.0198)     (0.0135)
LNBMi,t−1            0.0121∗∗∗    0.0115∗∗∗    0.0053∗∗∗    0.0118∗∗∗    0.0116∗∗∗    0.0049∗∗∗
                    (0.0016)     (0.0015)     (0.0013)     (0.0016)     (0.0015)     (0.0013)
VOLROEi,t−1          0.0062∗∗∗    0.0061∗∗∗    0.0000       0.0060∗∗∗    0.0059∗∗∗    0.0000
                    (0.0019)     (0.0019)     (0.0000)     (0.0019)     (0.0019)     (0.0000)
PROFITi,t−1          0.0579∗∗∗    0.0571∗∗∗    0.0629∗∗∗    0.0578∗∗∗    0.0574∗∗∗    0.0619∗∗∗
                    (0.0095)     (0.0092)     (0.0115)     (0.0098)     (0.0095)     (0.0115)
SP500i,t−1          −0.0110∗∗∗   −0.0110∗∗∗   −0.0017      −0.0110∗∗∗   −0.0110∗∗∗   −0.0026
                    (0.0024)     (0.0024)     (0.0072)     (0.0025)     (0.0024)     (0.0075)
Year fixed effects       Yes          Yes          Yes          Yes          Yes          Yes
Industry fixed effects   Yes          Yes          Yes          Yes          Yes          Yes
Brokerage fixed effects  No           Yes          Yes          No           Yes          Yes
Firm fixed effects       No           No           Yes          No           No           Yes
Observations           9,313        9,313        9,313        9,313        9,313        9,313

Notes. The dependent variable is BIAS, defined as the consensus forecast bias of all analysts tracking stock i in year t. Forecast bias is the difference between the forecast of analyst j in year t and the actual EPS, expressed as a percentage of the previous year’s stock price. The consensus is obtained either as a mean or as a median bias. COVERAGEi,t is a measure of analyst coverage, defined as the number of analysts covering firm i at the end of year t. LNSIZEi,t is the natural logarithm of firm i’s market capitalization (price times shares outstanding) at the end of year t. SIGMAi,t is the variance of daily (simple, raw) returns of stock i during year t. RETANNi,t is the average monthly return on stock i in year t. LNBMi,t is the natural logarithm of firm i’s book value divided by its market cap at the end of year t. ROEi,t is firm i’s return on equity in year t, calculated as the ratio of earnings in year t over the book value of equity. To measure the volatility of ROE (VOLROE), we estimate an AR(1) model for each stock’s ROE using a 10-year series of the company’s valid annual ROEs; VOLROE is the variance of the residuals from this regression. PROFITi,t is the profitability of company i at the end of year t, defined as operating income over book value of assets. SP500i,t is an indicator variable equal to one if stock i is included in the S&P 500 index in year t. We exclude all observations that fall to the left of the 25th percentile of the size distribution, observations with stock prices lower than $5, and those for which the absolute difference between the forecast value and the true earnings exceeds $10. All regressions include three-digit SIC industry fixed effects and year fixed effects. Some specifications also include brokerage-house and firm fixed effects. Standard errors (in parentheses) are clustered at the industry level. ∗∗∗, ∗∗, and ∗ denote statistical significance at the 1%, 5%, and 10% levels, respectively.

However, the coefficient turns positive and statistically nonsignificant when we include firm fixed effects. Because coverage is fairly persistent, it may be that a fixed-effects approach is not picking up the right variation, in contrast to the cross-sectional approach. So depending on the controls used, we find that a decrease in coverage by one analyst leads to an increase in bias of anywhere from 0.0002 (two basis points) to none. The bias for a typical stock is about

COMPETITION AND BIAS

1693

0.027 (2.7%), with a standard deviation across stocks of about 0.03 (3%). Hence, these cross-section estimates suggest only a small increase in bias, of about zero to sixty basis points as a fraction of the cross-sectional standard deviation of bias, as we decrease coverage by one analyst, though some estimates are very precisely measured. The results using the median bias instead of the mean bias are reported in columns (4), (5), and (6). Again, there is little difference in the coefficient on COVERAGE. The other control variables also come in significantly in these regressions. Bias increases with firms' sizes, book-to-market ratios, volatilities of return on equity, and profits. Bias is lower for firms with high returns and for firms in the S&P 500 index. The sign on stock return volatility is ambiguous, depending on whether firm fixed effects are included. Of course, as we explained in the Introduction, these OLS regressions are difficult to interpret because of the endogeneity of analyst coverage. If stocks that attract lots of coverage are stocks that analysts are likely to be excited about, then these OLS estimates are biased downward. In contrast, if stocks covered by only a few analysts are likely under-the-radar stocks that analysts have to be very excited about to initiate coverage on, then these OLS estimates of the competition effect are biased upward. Estimating this regression using stock fixed effects is not an adequate solution to the endogeneity critique, because analyst coverage tends to be fairly persistent and analysts drop coverage of stocks that are no longer doing well (e.g., McNichols and O'Brien [1997]). Hence, we rely on a natural experiment to sort out these endogeneity issues. We use mergers of brokerage houses as our experiment, on the premise that mergers typically lead to a reduction in analyst coverage of the stocks that were covered by both the bidder and the target premerger.
If a stock is covered by both houses before the merger, the merged firm will typically get rid of at least one of the two analysts, usually the target's. It is to this experiment that we now turn.

IV. BACKGROUND ON MERGERS

We begin by providing some background on these mergers. We identify mergers among brokerage houses using the SDC Mergers and Acquisitions database. We start with the sample of 32,600 mergers of financial institutions. Next,

1694

QUARTERLY JOURNAL OF ECONOMICS

we choose all the mergers in which the target company belongs to the four-digit SIC code 6211 (“Investment Commodity Firms, Dealers, and Exchanges”). This screen reduces our sample to 696 mergers. Subsequently, we manually match all the mergers with IBES data. This match identifies 43 mergers with both bidder and target covered by IBES. Finally, we select only those mergers in which both merging houses cover at least two of the same stocks—otherwise, there is little scope for our instrumental variables approach below. With this constraint, our search produces fifteen mergers, which we break down by the parties involved: bidder and target. We provide further details about these mergers in the Appendix. Of the fifteen mergers, six are particularly big, in the sense that the merging houses both tend to be big firms and have premerger coverage of a large number of the same stocks. The first of these big mergers is Merrill Lynch acquiring, on September 10, 1984, a troubled Becker Paribas that was having problems with its own earlier merger with another firm. The second is Paine Webber acquiring Kidder Peabody on December 31, 1994. Kidder was in trouble and had fired a good part of its workforce before the merger in the aftermath of a major trading scandal involving its government bond trader, Joseph Jett. Kidder's owner, General Electric, wanted to sell the company, and Paine Webber (a second-tier brokerage house) wanted to buy a top-tier investment bank with a strong research department. The third is Morgan Stanley acquiring Dean Witter Reynolds on May 31, 1997. Morgan Stanley was portrayed as wanting to get in on the more down-market retail brokerage operations of Dean Witter. The fourth is Smith Barney (Travelers) acquiring Salomon Brothers on November 28, 1997. This is viewed as a synergy play led by Sandy Weill. The fifth and sixth mergers involved Swiss banks trying to diversify their lines of business geographically into the American market.
These mergers happened within a few months of each other. Credit Suisse First Boston acquired Donaldson Lufkin and Jenrette on October 15, 2000. A few months later, on December 10, 2000, UBS acquired Paine Webber. The anecdotal descriptions of the motivations for these mergers provide comfort that our proposed experiment is valid: the mergers induce a change in competition that is unrelated to underlying unobservables driving the biases in the covered stocks. In Table III, we provide a number of key statistics regarding all fifteen mergers. In Panel A, we summarize the names, the


TABLE III
DESCRIPTIVE STATISTICS FOR MERGERS

Panel A: Mergers used in the analysis and stocks covered

Merger                                                              Merger      # stocks  # stocks  # stocks
number  Brokerage houses (bidder / target)                          date        (bidder)  (target)  (bidder and target)
1       Merrill Lynch / Becker Paribas                              9/10/1984     762       288       173
2       Paine Webber / Kidder Peabody                               12/31/1994    659       545       234
3       Morgan Stanley / Dean Witter Reynolds                       5/31/1997     739       470       251
4       Smith Barney (Travelers) / Salomon Brothers                 11/28/1997    914       721       327
5       Credit Suisse First Boston / Donaldson Lufkin and Jenrette  10/15/2000    856       595       307
6       UBS Warburg Dillon Read / Paine Webber                      12/10/2000    596       487       213
7       Chase Manhattan / JP Morgan                                 12/31/2000    487       415        80
8       Wheat First Securities / Butcher & Co., Inc.                10/31/1988    178        66         8
9       EVEREN Capital / Principal Financial Securities             1/9/1998      178       142        17
10      DA Davidson & Co. / Jensen Securities                       2/17/1998      76        53         8
11      Dain Rauscher / Wessels Arnold & Henderson                  4/6/1998      360       135        26
12      First Union / EVEREN Capital                                10/1/1999     274       204        21
13      Paine Webber / JC Bradford                                  6/12/2000     516       182        28
14      Fahnestock / Josephthal Lyon & Ross                         9/18/2001     117        91         5
15      Janney Montgomery Scott / Parker/Hunter                     3/22/2005     116        54        10

TABLE III (CONTINUED)

Panel B: Career outcomes of analysts after mergers

                                                  # analysts        # analysts after merger
Merger                                                              Retained in  Left to        Exited sample  New
number  Brokerage house                           Prior    After    new house    another house  (fired)        analysts
1       Merrill Lynch                               90       98        84            0              5            13
        Becker Paribas                              27       —          1           11             15            —
2       Paine Webber                                50       57        42            1              7             6
        Kidder Peabody                              51       —          9           28             14            —
3       Morgan Stanley                              70       92        61            2              7            26
        Dean Witter Reynolds                        35       —          5           16             14            —
4       Smith Barney (Travelers)                    91      140        70            6             15            27
        Salomon Brothers                            76       —         43           20             13            —
5       Credit Suisse First Boston                 120      146        93            5             22            35
        Donaldson Lufkin Jenrette                   77       —         18           17             42            —
6       UBS Warburg Dillon Read                     94      118        80            5              9             0
        Paine Webber                                64       —         38            8             17            —
7       Chase Manhattan                             64      106        48            5             11            24
        JP Morgan                                   50       —         34            1             15            —
8       Wheat First Securities                      13       21        13            0              0             8
        Butcher & Co., Inc.                         13       —          3            3              7            —
9       EVEREN Capital                              27       31        21            4              2             8
        Principal Financial Securities              18       —          2            6             10            —
10      DA Davidson & Co.                            6        8         4            1              1             0
        Jensen Securities                            4       —          4            0              0            —
11      Dain Rauscher                               39       36        19            9             11             6
        Wessels Arnold & Henderson                  15       —         11            0              4            —
12      First Union                                 35       54        26            2              7            16
        EVEREN Capital                              32       —         12           10             10            —
13      Paine Webber                                54       55        37            9              8            18
        JC Bradford                                 22       —          0           14              8            —
14      Fahnestock                                  14       16         7            1              6             9
        Josephthal Lyon & Ross                      14       —          0            5              9            —
15      Janney Montgomery Scott                     13       15        11            1              1             3
        Parker/Hunter                                5       —          1            0              4            —

TABLE III (CONTINUED)

Panel C: Percentage of stocks covered by analysts from bidder and target houses after mergers

        Percentage of stocks  Percentage of stocks
Merger  (bidder) (1)          (target) (2)
1          85.7                  1.1
2          73.7                 15.8
3          66.3                  5.4
4          50.0                 30.7
5          63.7                 12.3
6          67.8                 32.3
7          45.3                 32.1
8          61.9                 14.3
9          67.7                  6.5
10         50.0                 50.0
11         52.8                 30.6
12         48.1                 22.2
13         67.3                  0
14         43.8                  0
15         73.3                  6.7

Notes. Panel A includes the names of the brokerage houses involved in each merger, the date of the merger, and the number of stocks covered by either brokerage house or by both of them prior to the merger. Panel B breaks down the merger information at the analyst level. We include the number of analysts employed by the merging brokerage houses prior to and after the merger, as well as detailed information on the career outcomes of the analysts after the merger. Panel C reports, for stocks covered by both the bidder and the target house premerger, the percentage covered after the merger by an analyst from the bidder house (column (1)) and by an analyst from the target house (column (2)).

dates, and the number of stocks covered by the bidder and target individually and the overlap in their coverage. For instance, in the merger involving Paine Webber and Kidder Peabody, Paine Webber covered 659 stocks and Kidder covered 545 stocks. There was a 234-stock overlap in their coverage. As a result, the merger leads to a potential decrease of around one analyst for a large number of stocks. The size of our treatment sample, the number of firms covered by both merging houses, ranges from a low of five stocks in the merger involving Fahnestock and Josephthal Lyon and Ross to a high of 327 stocks in the Smith Barney and Salomon Brothers deal. Notice that the six big mergers described above give us much of the variation in terms of the number of treatment stocks. In total, we have a significant treatment sample with which to identify our effect. To better support the premise that mergers lead to less analyst coverage in the treatment sample via job turnover, we


examine the career outcomes of analysts employed by the merging houses. Panel B presents the results, with the breakdown of career outcomes of analysts employed by both the bidder and target houses. A few observations can be noted. First, the big mergers affected a very significant number of analysts. The largest of the mergers—between Credit Suisse First Boston and Donaldson Lufkin and Jenrette—affected almost 200 analysts. The smallest merger in terms of analysts affected is Davidson and Jensen, with ten. Given that in our sample the average brokerage house employs approximately fifteen analysts, a number of our mergers constituted important events in the analyst industry. Second, as expected, mergers generally reduce the number of analysts covering stocks. For example, the two brokerage houses involved in the second merger, Paine Webber and Kidder Peabody, employed a total of 101 analysts prior to the merger. After Paine Webber acquired Kidder Peabody, employment in the joint entity decreased to 57 analysts. Third, the majority of the employment reduction comes from the closure of the target house. In particular, of the 51 analysts employed by Kidder, only 9 were retained in the new company and 28 left to a different house, whereas 14 exited the sample, which we interpret as a firing decision. In Panel C, we confirm more precisely that for stocks covered by both houses premerger, it is usually the analyst in the bidding house who remains, whereas the target analyst is let go. In the first column of Panel C, we report, for the treatment sample of stocks covered by both houses, the fraction covered by the bidder analyst after the merger. In the second column, we report the fraction of the treatment sample covered by the target analyst after the merger.
In the Paine Webber and Kidder merger, for stocks covered premerger by both houses, it is indeed the redundant target analyst who gets fired—the corresponding figures are 73.7% for the bidder analysts and only 15.8% for the target analysts. Similarly big gaps exist for most of the other mergers. The gap is much smaller in the Davidson and Jensen merger: 50% for the bidder and 50% for the target. Nonetheless, from Panel B, it still appears that fewer analysts worked for the merged entity than for the two houses beforehand.

V. EMPIRICAL DESIGN

Our analysis of the effect of competition on analyst forecast bias utilizes a natural experiment involving brokerage house


mergers. The outcome of this process is a reduction in the number of analysts employed by the combined entity relative to the total employed by the bidder and target prior to the merger. As a result, coverage of a stock that was covered by both houses before the merger (our treatment sample) should drop as one of the redundant analysts is let go or reallocated to another stock (or perhaps both are), and thus competition in the treatment sample decreases. The questions, then, are whether competition among analysts does decrease around merger events and whether this decrease has an economically significant effect on the average consensus bias. Our empirical methodology requires that we specify a representative window around the merger events. In choosing the estimation window, we face a trade-off that most other event studies, which focus on very narrow windows, do not. As in most event studies, choosing too long a window may incorporate information that is not relevant to the event under consideration. But in our case, choosing too short a window means losing observations, because analysts do not issue forecasts on the same dates or with the same frequency. We want a window long enough to capture the behavior of all analysts before and after the merger. To this end, we use a two-year window, with one year of data on each side of the event. Analysts typically issue at least one forecast within a twelve-month window. Because an analyst could issue more than one forecast within each window, we retain only the forecast with the shortest time distance from the merger date. In addition, because we are interested in the effect of the merger on various analyst characteristics, we require that each stock be present in both windows around the merger.
As a result, for every stock we retain only two observations, one in each window around the event. Having chosen the one-year windows before and after the merger, we must still account for the fact that coverage and the average stock bias may vary from one year to the next. In other words, to identify how the merger affected coverage of the stocks covered by both houses premerger, and how the bias in these stocks then changed, one needs to account for natural year-to-year changes in coverage and bias for these stocks.
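The window construction described above, classifying forecasts as pre- or postmerger and keeping, for each analyst-stock-window cell, only the forecast nearest the merger date, can be sketched in pandas. The column names and sample records here are hypothetical:

```python
import pandas as pd

# Hypothetical forecasts around a merger date; column names are made up.
merger_date = pd.Timestamp("1994-12-31")
f = pd.DataFrame({
    "ticker":  ["ABC"] * 4,
    "analyst": [1, 1, 1, 2],
    "date": pd.to_datetime(["1994-03-01", "1994-11-15", "1995-02-01", "1995-06-30"]),
    "bias": [0.010, 0.012, 0.020, 0.015],
})

# Tag each forecast as pre- or postmerger and keep the one-year windows.
f["window"] = (f["date"] > merger_date).map({False: "pre", True: "post"})
f = f[(f["date"] - merger_date).abs() <= pd.Timedelta(days=365)].copy()

# Within each analyst-stock-window cell, retain only the forecast
# closest in time to the merger date.
f["dist"] = (f["date"] - merger_date).abs()
nearest = f.sort_values("dist").groupby(["ticker", "analyst", "window"]).first()
```

A real implementation would additionally require, as the text notes, that each stock appear in both the pre- and postmerger windows.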


A standard approach to dealing with these time trends is the difference-in-differences (DID) methodology. In this approach, the sample of stocks is divided into treatment and control groups. In the context of our paper, the treatment group includes all stocks that were covered by both brokerage houses before the merger. The control group includes all the remaining stocks. If we denote the average observed characteristic in the treatment (T) and control (C) groups in the pre- and postevent periods by C_{T,1}, C_{T,2}, C_{C,1}, and C_{C,2}, respectively, the partial effect of change due to merger can be estimated as

(1)  DID = (C_{T,2} − C_{T,1}) − (C_{C,2} − C_{C,1}).

Here the characteristics might be analyst coverage or bias. By comparing the time changes in the means for the treatment and control groups, we allow for both group-specific and time-specific effects. This estimator is unbiased under the condition that the merger is not systematically related to other factors that affect C. A potential concern with the above estimator is that the treatment and control groups may differ significantly from each other, so the estimated partial effect may additionally capture differences in the characteristics of the two groups. For example, the average stocks in the two groups may differ in terms of their market capitalizations, value characteristics, or past returns. For instance, companies with good recent returns may lead analysts both to cover their stocks and to be more optimistic about them. Hence, we want to make sure that the past returns of the stocks in the treatment and control samples are similar. We are also worried that high-coverage stocks may simply differ from low-coverage stocks for reasons unrelated to our competition effect. So we also want to keep the premerger coverage characteristics of our treatment sample similar to those of our control sample. To account for such systematic differences across the two samples, we use a matching technique similar to those used in the context of IPO event studies and characteristic-based asset pricing. In particular, each stock in the treatment sample is matched with its own benchmark portfolio obtained from the sample of stocks in the control group. We expect these controls to do a better job of capturing our true effect by netting out unobserved heterogeneity.
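Equation (1) is simple enough to state directly as code. A minimal sketch with made-up group means (the numbers are purely illustrative):

```python
# Equation (1) as code: the change in a characteristic for the treatment
# group net of the change for the control group. Numbers are illustrative.
def did(treat_pre, treat_post, control_pre, control_post):
    """Difference-in-differences estimate on group means."""
    return (treat_post - treat_pre) - (control_post - control_pre)

# e.g. treatment coverage falls from 21 to 20 analysts while the control
# group is flat at 18, giving a DID of -1 analyst.
effect = did(treat_pre=21.0, treat_post=20.0, control_pre=18.0, control_post=18.0)
```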


To construct the benchmark, we first sort stocks into tercile portfolios according to their market capitalizations. Next, we sort stocks within each size portfolio according to their book-to-market ratios. This sort results in nine different benchmark portfolios. Further, we sort stocks in each of the nine portfolios into tercile portfolios according to their past returns, which results in 27 different benchmark portfolios. Finally, we sort stocks in each of the 27 portfolios into tercile portfolios according to their analyst coverage. Overall, our benchmark includes 81 portfolios. Using the above benchmark specification, we then construct the benchmark-adjusted DID estimator (BDID). In particular, for each stock i in the treatment sample, the partial effect of change due to merger is calculated as the difference between two components, (2)

(2)  BDID_i = (C^i_{T,2} − C^i_{T,1}) − (BC^i_{C,2} − BC^i_{C,1}),

where the first component is the difference in characteristics of stock i in the treatment sample moving from the premerger to the postmerger period. The second component is the difference in the average characteristics of the benchmark portfolios that are matched to stock i along the size/value/momentum/coverage dimensions. In general, the results are comparable if we use benchmarks matched along any subset of the characteristics. To assess the average effect for all stocks in the treatment sample, one can then take the average of all individual BDIDs. One final issue that we need to account for is that a few of the mergers occurred within several months of each other (e.g., the fifth and sixth mergers occurred on October 15, 2000, and December 10, 2000, respectively). As a result, it might be difficult to separate out the effects of these two mergers individually. As the baseline case, we decided for simplicity to treat each merger separately in our analysis. However, we have also tried robustness checks in which we group mergers occurring close together in time and treat them as one merger. For instance, we consider a one-year window before the third merger on October 15, 2000, as the premerger period and the one-year window after the fourth merger on December 10, 2000, as the postmerger period. As a result, the treatment sample is the union of the 307 stocks jointly covered by Credit Suisse and DLJ and the 213 stocks covered by UBS and Paine Webber. There is potentially some overlap of these two subsets of stocks, and hence it might be the case that some of

TABLE IV
SUMMARY STATISTICS FOR THE TREATMENT SAMPLE

                         Cross-sectional  Cross-sectional  Cross-sectional
Variable                 mean (1)         median (2)       st. dev. (3)
COVERAGEi,t                 21.12            20               9.45
Mean BIASi,t (%)             2.79             2.24            3.10
Median BIASi,t (%)           2.74             2.21            3.19
Mean FERRORi,t (%)           3.40             2.52            2.90
Median FERRORi,t (%)         3.33             2.43            2.99
FDISPi,t (%)                 0.75             0.40            0.94
LNSIZEi,t                    8.39             8.37            1.60
SIGMAi,t (%)                41.00            35.86           21.02
RETANNi,t (%)                1.74             1.52            4.13
LNBMi,t                     −1.03            −0.92            0.91
VOLROEi,t (%)               25.32             9.89           43.40
PROFITi,t (%)               15.52            15.25            9.22

Notes. We consider all stocks covered by the two merging brokerage houses around the one-year merger event window. COVERAGEi,t is a measure of analyst coverage, defined as the number of analysts covering firm i at the end of year t. Analyst forecast bias (BIASj,t) is the difference between the forecast of analyst j at time t and the actual EPS, expressed as a percentage of the previous year's stock price. The consensus bias is expressed as the mean or median bias among all analysts covering a particular stock. Analyst forecast error (FERRORj,t) is the absolute difference between the forecast of analyst j at time t and the actual EPS, expressed as a percentage of the previous year's stock price. The forecast error is expressed as the mean or median error among all analysts covering a particular stock. FDISPi,t is analyst forecast dispersion, defined as the standard deviation of all analyst forecasts covering firm i at time t. LNSIZEi,t is the natural logarithm of firm i's market capitalization (price times shares outstanding) at the end of year t. SIGMAi,t is the variance of daily (simple, raw) returns of stock i during year t. RETANNi,t is the average monthly return on stock i during year t. LNBMi,t is the natural logarithm of firm i's book value divided by its market cap at the end of year t. To measure the volatility of ROE (VOLROE), we estimate an AR(1) model for each stock's ROE using a 10-year series of the company's valid annual ROEs. ROEi,t is firm i's return on equity in year t, calculated as the ratio of earnings during year t over the book value of equity. We calculate VOLROE as the variance of the residuals from this regression. PROFITi,t is the profitability of company i at the end of year t, defined as operating income over book value of assets. We exclude observations with stock prices lower than $5 and those for which the absolute difference between the forecast value and the true earnings exceeds $10.

these stocks will experience a greater decline in analyst coverage to the extent that they have more than two redundant analysts. However, these alterations do not affect our baseline results. Table IV presents summary statistics for the treatment sample in the two-year window around the merger. The characteristics of the treatment sample are similar to those reported in Table I for the OLS sample. For instance, the coverage is about 21 analysts for the typical stock. The mean bias is 2.79% with a standard deviation of around 3.10%. These figures, along with those of the control variables, are fairly similar across these two samples. This provides comfort that we can then relate the economic effect of competition obtained from our treatment sample to the OLS estimates presented in Table II.
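The benchmark sort and the BDID calculation of Section V can be sketched in pandas. This is an illustrative implementation on simulated data, not the authors' code: all column names and the tercile helper are our own, and sequential conditional tercile sorts form the 3^4 = 81 cells described above.

```python
import numpy as np
import pandas as pd

# Simulated premerger stock characteristics; every column name and value
# here is illustrative, not the paper's actual data.
rng = np.random.default_rng(0)
n = 300
stocks = pd.DataFrame({
    "size": rng.lognormal(8, 1, n),          # market capitalization
    "bm": rng.lognormal(-1, 0.5, n),         # book-to-market
    "ret": rng.normal(0.01, 0.05, n),        # past return
    "coverage": rng.integers(1, 40, n),      # number of analysts
    "treated": rng.random(n) < 0.1,          # covered by both merging houses
    "d_bias": rng.normal(0.0, 0.01, n),      # pre-to-post change in bias
})

def tercile(s):
    # Tercile labels 0/1/2; ranking first keeps qcut bin edges unique.
    return pd.qcut(s.rank(method="first"), 3, labels=False)

# Sequential conditional sorts: size, then book-to-market within size,
# then past return, then coverage -> 3**4 = 81 benchmark cells.
cells = stocks.copy()
cells["g_size"] = tercile(cells["size"])
cells["g_bm"] = cells.groupby("g_size")["bm"].transform(tercile)
cells["g_ret"] = cells.groupby(["g_size", "g_bm"])["ret"].transform(tercile)
cells["g_cov"] = cells.groupby(["g_size", "g_bm", "g_ret"])["coverage"].transform(tercile)

keys = ["g_size", "g_bm", "g_ret", "g_cov"]
# Benchmark: mean pre-to-post change among control stocks in each cell.
bench = cells[~cells["treated"]].groupby(keys)["d_bias"].mean()

# BDID_i: each treated stock's change minus its benchmark cell's change;
# the average over treated stocks is the reported estimate.
treated = cells[cells["treated"]].set_index(keys)
bdid_i = treated["d_bias"].to_numpy() - bench.reindex(treated.index).to_numpy()
bdid = np.nanmean(bdid_i)
```

With simulated data and no true effect, bdid should be near zero; the point of the sketch is the sorting and matching mechanics, not the estimate itself.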


VI. RESULTS

VI.A. Analyst Coverage and Optimism Bias

We first verify the premise of our natural experiment by measuring the change in analyst coverage for the treatment sample from the year before the merger to the year after. We expect these stocks to experience a decrease in coverage. Panel A of Table V (column (1)) reports the results of this analysis. We present the DID estimator for coverage using our benchmarking technique—size, book-to-market, return, and coverage matched. We observe a discernible drop in coverage due to the merger of around 1.02 analysts using the DID estimator; a drop of between one and two analysts is in line with our expectations. This effect is significant at the 1% level. One can think of this finding as essentially the first stage of our estimation. The effect is economically and statistically significant in the predicted direction, and hence confirms the premise of our natural experiment. We will focus on this number in our discussion of the economic effect of competition below. We next look at how the optimism bias changes for the treatment sample across the mergers. These results are presented in Panel A of Table V. We present the findings in column (2) for the mean BIAS and in column (3) for the median BIAS. Using the DID estimator, we find an increase in optimism bias of 0.0013 for the mean bias (significant at the 10% level) and 0.0016 for the median bias (significant at the 5% level). Using the estimates obtained above, a conservative estimate is that the mean optimism bias increases by about thirteen basis points as a result of reducing coverage by one analyst. As we mentioned earlier, the sample for the natural experiment is similar to that of the OLS by construction—the typical stock has a bias of around 2.7%, and the standard deviation of the optimism bias is also around 3%.
This means that the estimate of the competitive effect from our natural experiment is approximately six to seven times as large as that from the OLS estimates. This is a sizable difference and suggests that the OLS estimates are biased downward, consistent with the documented selection bias that stocks that attract lots of coverage are likely to have more optimistic analysts. One could argue that our mean bias effect might be driven by selection through which one of the two analysts from the

TABLE V
CHANGE IN STOCK-LEVEL COVERAGE AND BIAS: DID ESTIMATOR

Panel A: Coverage and bias
                                Coverage (1)    Mean BIAS (2)   Median BIAS (3)
SIZE/BM/RET/NOAN-matched        −1.021***       0.0013*         0.0016**
                                (0.179)         (0.0007)        (0.0008)

Panel B: Change in forecast bias: DID estimator without analysts from merging houses
                                Mean BIAS (1)   Median BIAS (2)
SIZE/BM/RET/NOAN-matched        0.0011**        0.0012**
                                (0.0005)        (0.0005)

Panel C: Change in forecast bias: Conditioning on initial coverage
                                                  Mean BIAS (1)   Median BIAS (2)
SIZE/BM/RET/NOAN-matched (coverage ≤ 5)           0.0078**        0.0096**
                                                  (0.0036)        (0.0044)
SIZE/BM/RET/NOAN-matched (coverage > 5 and ≤ 20)  0.0017*         0.0020*
                                                  (0.0011)        (0.0011)
SIZE/BM/RET/NOAN-matched (coverage > 20)          0.0003          0.0007
                                                  (0.0013)        (0.0013)
Notes. N = 1,656 in Panels A and B. We measure analyst coverage as the number of analysts covering firm i at the end of year t. For all mergers, we split the sample of stocks into those covered by both merging brokerage houses (treatment sample) and those not covered by both houses (control sample). We also divide stocks into a premerger period and a postmerger period (a one-year window for each period). For each period, we further construct benchmark portfolios using the control sample based on stocks' size (SIZE), book-to-market ratio (BM), average past year's returns (RET), and analyst coverage (NOAN). Our benchmark assignment involves three portfolios in each category. Each stock in the treatment sample is then assigned to its own benchmark (SIZE/BM/RET/NOAN-matched). Next, for each period, we calculate the cross-sectional average of the differences in analyst stock coverage across all stocks in the treatment sample and their respective benchmarks. Finally, we calculate the difference in differences between the postevent period and the pre-event period (DID estimator). Our sample excludes observations with stock prices lower than $5 and those for which the absolute difference between the forecast value and the true earnings exceeds $10. In Panel B, we exclude from our sample all analysts employed by the merging houses. Panel C presents our results by cuts on initial coverage. There are three groups: lowest coverage (≤5), medium coverage (>5 and ≤20), and highest coverage (>20). Standard errors (in parentheses) are clustered by merger. ***, **, and * denote statistical significance at the 1%, 5%, and 10% levels, respectively.

merging houses covering the stock gets fired. It might be that the less optimistic analyst gets fired, and hence the bias would be higher as a result. Another possibility is that analysts employed by the merging houses compete for jobs in the new merged house and thus strategically change their reporting behavior.


We deal with these issues in two ways. The first, because we have turnover data, is simply to check whether the merging brokerage houses selectively fire analysts who are less optimistic. We do not find such a selection bias. The second and more direct way to deal with this concern is to look only at the change in the bias for the analysts covering the same stocks but not employed by the merging firms. The findings are in Panel B of Table V. We report the change in bias for the treatment sample, but now the bias is calculated using only the forecasts of the analysts not employed by the merging houses. The figures are very similar to the main findings, and only slightly smaller in some instances. The mean bias increases by eleven basis points, the median bias increases by twelve basis points, and both are significant at the 5% level. Collectively, these findings provide comfort that our main results are not spuriously driven by outliers or by selection biases. We next test a key auxiliary prediction that will further buttress our identification strategy. We check whether the competition effect is more pronounced for stocks with smaller analyst coverage. The idea is that the more analysts cover a stock, the less the loss of an additional analyst matters, akin to the Cournot view of competition. For instance, in the independence rationale of Gentzkow and Shapiro, when there are already many analysts, losing one would not much change the likelihood of drawing an independent analyst. In contrast, when there are only a few analysts to begin with, losing one analyst could really affect the likelihood of getting an independent supplier of information. However, note that if collusion is possible, then we might expect a nonlinear relationship between bias and coverage. Suppose that collusion is easier when there are only a few analysts. Under this scenario, going from one to two analysts may not have an effect, because the two can collude.
And we might find more of an effect when going from five to six analysts if the sixth analyst does not collude. With collusion, then, we might expect the biggest effect for stocks covered by a moderate number of analysts—that is, an inverted U-shape, with the effect being greatest for medium-coverage stocks. We examine this issue in Panel C of Table V using the same DID framework as before. We divide initial coverage into three groups: less than or equal to five analysts, between six and twenty analysts, and more than twenty analysts. Column (1) reports the results using the mean bias. We expect and find that the effect is


significantly smaller when many analysts cover the stock. The effect is greatest for the first group (less than or equal to five analysts): the mean bias increases by 78 basis points and the median bias by 96 basis points, and both are significant at the 5% level. The next largest effect is in the second group (more than five and less than or equal to twenty): the mean bias increases by seventeen basis points and the median bias by twenty basis points. Both are significant at the 10% level. Finally, the effect is much smaller for the highest-coverage group: the mean bias increases by three basis points and the median bias by seven basis points, and neither of these point estimates is statistically significant. In sum, the evidence is remarkably comforting, as it conforms well to our priors that competition matters more when there are fewer analysts around. This result reassures us that our estimate is a sensible one.8 Next, we delve deeper into the results in Table V by analyzing plots in event time of the change in coverage and bias. This will also allow us to gauge the robustness of the parallel-trend assumption required for the difference-in-differences approach. To this end, we consider an event window with three periods before and three periods after the merger. For each event-window date, we calculate the difference in coverage and median bias between the treatment and control groups and plot it against event time. The left-hand side of Figure I presents the results for coverage and the right-hand side for bias. We report the results separately for the three subgroups of stocks sorted according to coverage (Panels A–C) and then for the entire sample (Panel D). We do not find support for pretrends driving our results. In particular, both for coverage and for bias, the difference between the treatment and control groups is stable both before and after the event date. Moreover, we confirm the results from Panel C of Table V.
Consider Panel A of Figure I, which shows the results for the low-coverage stocks. The mergers cause a relative drop of about one analyst on the event date (from time −1 to time 0) and an increase in bias of about 80 basis points for low-coverage stocks over the same interval. Our matching on premerger coverage characteristics is nearly perfect here, and there is also little difference in the

8. The results are not affected by the particular cutoff levels for the number of analysts. The effect generally declines nonlinearly as coverage increases.

FIGURE I
Trend of Analyst Coverage and Bias in the Treatment Sample (Net of Control)
We show the trend of average analyst coverage and forecast bias in the treatment sample net of the control group (in a given year) up to three years before and after the merger event. Panel A is for stocks with low analyst coverage (fewer than six analysts); Panel B for stocks with medium coverage (six to twenty analysts); Panel C for stocks with high coverage (more than twenty analysts). Panel D documents aggregate results. Dotted lines illustrate 95% confidence intervals.

COMPETITION AND BIAS


premerger mean bias between the treatment and control groups. There are no discernible pretrends or posttrends. We also observe a similar degree of statistical significance, as illustrated by the two-standard-deviation bands represented by dotted lines.

Panel B shows the results for the medium-coverage stocks. Again, the matching on premerger coverage characteristics is nearly perfect, with the treatment sample's premerger coverage exceeding the control sample's by only about one-half analyst. There is likewise nearly perfect matching in the premerger mean bias between the treatment and control samples. We see a drop of about one analyst on the event date and an increase in bias of about 20 basis points. Importantly, there are no discernible pretrends, though there is a slight posttrend in coverage, which continues to drop for a couple of years after the event date. By and large, however, the figures in Panel B support the validity of the experiment.

In Panel C, we look at the high-coverage stocks. Notice here that the treatment sample has much higher coverage than the control sample. Part of the reason is that the mergers in our sample involve big brokerage houses, which cover very big stocks; as a result, it is difficult to match these big stocks as closely as we could the lower-coverage stocks. This might explain why the premerger bias of the treatment sample is slightly lower than that of the control group, which is itself consistent with the competition mechanism. We do not believe this difference introduces any bias into our estimates of the merger effect. Setting it aside, we see roughly the same picture: a one-analyst drop on the event date and a slight increase in bias, which is not significantly different from zero. The picture that emerges is similar to that of Panel B.

Finally, in Panel D, we draw the same pictures using the entire sample.
We see no significant pretrends in either coverage or bias, and an observable event-date drop in coverage and increase in bias. These figures provide comfort that the results in Table V are not driven by pretrends.

VI.B. Validity of the Natural Experiment

The economic significance of our results depends critically on the validity of our natural experiment. Although the results in Figure I are a first step toward establishing that validity, in this section we report a number of


further tests, which collectively provide strong support for our experiment.

First, we separately estimate our effect using the six biggest mergers. The results are very similar: the conservative estimates are a one-analyst drop in coverage associated with a 0.0017 increase in bias. Further, we estimate our effect separately for each of the fifteen mergers. Every one of the fifteen mergers experienced a decline in coverage using the most conservative DID estimate. Hence, our result is not driven by outliers: there is a distinct coverage drop with mergers. The fact that fifteen out of fifteen mergers experienced drops suggests that our effect is robustly significant in a nonparametric sense. Similarly, we find that twelve (thirteen) of the fifteen mergers experienced an increase in mean (median) bias using the most conservative DID estimate. It is important to emphasize that because these mergers occur throughout our sample period, our effects are not due to any particular macroeconomic event such as a recession or boom.

Second, because the optimism measure is constructed as the difference between an analyst forecast and actual earnings, one could worry that our results are driven by differences in actual earnings rather than in reported forecasts. To rule out this possibility, we test whether merger events lead to differential changes in earnings between treatment and control groups. Panel A of Table VI reports the results separately for mean and median earnings. We find no evidence that competition causes changes in actual earnings.

Third, our experiment relies on the validity of the matching procedure between firms in the treatment and control groups.
In general, our findings do not indicate a problem with the matching, but to provide further assurance that the differences we observe do not simply capture ex ante differences in observables, we report similar DID estimators for other response variables: Tobin's Q, size, returns, volatility, profitability, and sales. The results in Panel B of Table VI show that none of these observables is significantly affected by the merger event. These results are comforting, as they confirm the validity of our matching procedure.

Fourth, the nature of our experiment requires that the same company be covered by both merging houses. To ensure that our effects are not driven by nonrandom selection of companies to brokerage houses, we reexamine our evidence by focusing on stocks that are covered by one of the


TABLE VI
VALIDITY OF THE NATURAL EXPERIMENT

Panel A: Change in earnings
                                           Mean EARN (1)   Median EARN (2)
DID estimator (SIZE/BM/RET/NOAN-matched)   −0.0002         −0.0002
                                           (0.0005)        (0.0005)

Panel B: Change in firm characteristics
Stock characteristic    SIZE/BM/RET/NOAN-matched
Tobin's Q               0.0397 (0.1138)
Size                    30.52 (20.63)
Returns                 0.0015 (0.0012)
Volatility              −0.0069 (0.0054)
Profitability           −0.0593 (0.0597)
Log(sales)              0.0046 (0.0090)

Panel C: Change in forecast bias for non-overlapping stocks
                            Coverage (1)   Mean BIAS (2)   Median BIAS (3)
SIZE/BM/RET/NOAN-matched    0.080          0.0001          0.0001
                            (0.106)        (0.0004)        (0.0004)

Panel D: Change in forecast bias for non-overlapping stocks as a control
                            Mean BIAS (1)   Median BIAS (2)
SIZE/BM/RET/NOAN-matched    0.0017∗∗∗       0.0018∗∗∗
                            (0.0006)        (0.0006)

Notes. N = 1,656 in Panels A, B, and D. In Panel A, we measure analyst earnings (EARN_jt) as the actual EPS expressed as a percentage of the previous year's stock price. For all mergers, we split the sample of stocks into those covered by both merging brokerage houses (treatment sample) and those not covered by both houses (control sample). We also divide stocks into premerger period and postmerger period (one-year window for each period). For each period we further construct benchmark portfolios using the control sample based on stocks' size (SIZE), book-to-market ratio (BM), average past year's returns (RET), and analyst coverage (NOAN). Our benchmark assignment involves three portfolios in each category. Each stock in the treatment sample is then assigned to its own benchmark portfolio (SIZE/BM/RET/NOAN-matched). Next, for each period, we calculate the cross-sectional mean and median of the differences in earnings across all stocks in the treatment sample and their respective benchmarks. Finally, we calculate the difference in differences between postevent period and pre-event period (DID estimator). In Panel B, we provide the DID estimator for various corporate characteristics, including Tobin's Q, asset size, stock returns, volatility, profitability, and log sales. In Panel C, the treatment sample is constructed based on the stocks that are covered by one but not both merging houses. In Panel D, the control sample is constructed using the stocks that are covered by one but not both merging houses. Our sample excludes observations with stock prices lower than $5 and those for which the absolute difference between forecast value and the true earnings exceeds $10. Standard errors (in parentheses) are clustered at the merger groupings. ∗∗∗, ∗∗, ∗ 1%, 5%, and 10% statistical significance.
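The benchmark-matched DID construction described in the notes can be sketched as follows. This is a toy pandas example with hypothetical column names; the single portfolio "A" stands in for a SIZE/BM/RET/NOAN benchmark bin, and the bias values are made up:

```python
import pandas as pd

# Toy panel: post = postmerger indicator, treated = covered by both merging
# houses, portfolio = the stock's matched benchmark bin, bias = forecast bias.
df = pd.DataFrame({
    "post":      [0, 0, 0, 1, 1, 1],
    "treated":   [1, 0, 0, 1, 0, 0],
    "portfolio": ["A", "A", "A", "A", "A", "A"],
    "bias":      [0.011, 0.010, 0.012, 0.0127, 0.010, 0.012],
})

def did_estimator(frame):
    # Benchmark bias = mean bias of control stocks in the matched portfolio.
    bench = (frame[frame.treated == 0]
             .groupby(["post", "portfolio"])["bias"].mean())
    treat = frame[frame.treated == 1].set_index(["post", "portfolio"])["bias"]
    gap = (treat - bench).groupby(level="post").mean()  # treatment minus benchmark
    return gap.loc[1] - gap.loc[0]                      # post minus pre

print(round(did_estimator(df), 4))
```

With these toy numbers the estimator returns 0.0017, matching the magnitude of the headline estimate discussed in the text.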



FIGURE II
Kernel Densities of Differences between Treatment and Control before and after Merger
We show Epanechnikov kernel densities of differences in forecast bias between treatment and control groups for the periods before and after the merger. The bandwidth for the density estimation is selected using the plug-in formula of Sheather and Jones (1991). The rightward shift of the distribution after the merger is significant: the hypothesis of equality of distributions is rejected at the 1% level using the Kolmogorov–Smirnov test.
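The distributional test referenced in the caption can be reproduced with a two-sample Kolmogorov–Smirnov test. The sketch below uses simulated treatment-minus-control differences with a rightward mean shift (the paper's densities use an Epanechnikov kernel with a Sheather–Jones bandwidth, which we do not re-implement here; magnitudes are made up):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated treatment-minus-control bias differences: the postmerger
# sample is shifted to the right, as in Figure II.
before = rng.normal(loc=0.000, scale=0.03, size=500)
after = rng.normal(loc=0.015, scale=0.03, size=500)

# Two-sample Kolmogorov-Smirnov test of the hypothesis that the two
# samples come from the same distribution.
stat, pval = stats.ks_2samp(before, after)
print(f"KS statistic = {stat:.3f}, p-value = {pval:.2e}")
```

With a shift of half a standard deviation and 500 draws per sample, the test rejects equality of distributions decisively.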

merging houses, but not by both. We show in Panel C of Table VI that average stock coverage does not change significantly on the event date across these treatment and control groups, and the change in bias is not statistically different from zero. We further use this setting to validate the quality of our control group. Specifically, in Panel D of Table VI, we show that using stocks covered by only one of the two merging houses as a control group does not change the nature of our results; in fact, the results become slightly stronger than in our baseline specification.

Fifth, we examine whether competition changes the entire distribution of forecasts. To this end, we plot Epanechnikov kernel densities of bias in the treatment group relative to the control group before and after the merger. The bandwidth for the density estimation is selected using the plug-in formula of Sheather and Jones (1991). Figure II presents the results. We observe a significant rightward shift in the entire distribution of bias in the


postmerger period. The rightward shift is significant: the hypothesis of equality of distributions is rejected at the 1% level using the Kolmogorov–Smirnov test. Moreover, the average relative bias becomes strictly positive, consistent with our earlier findings. These results suggest that the findings of our experiment are not driven by outliers and further indicate that the merger mainly causes previously unbiased analysts to become biased.

VI.C. Robustness

In this section, we report a number of tests that confirm the robustness of our results. An alternative econometric approach to capturing the effect of the merger on bias is to estimate the following regression model:

(3) C_i = α + β1 Merge_i + β2 Affected_i + β3 Merge_i × Affected_i + β4 Controls_i + ε_i,

where C is the characteristic that may be affected by the merger; Merge is an indicator variable equal to one for observations after the merger, and zero otherwise; Affected is an indicator variable equal to one if stock i is affected by the merger, and zero otherwise; and Controls is a vector of stock-specific covariates affecting C. The coefficient of primary interest is β3, which captures the partial effect of the merger; in the version with additional controls, it is similar in spirit to the DID estimator in equation (2). By including additional controls, we account for any systematic differences across stocks that may affect the estimated merger effect. Importantly, the regressions include merger fixed effects and industry fixed effects, which ensure comparability of the samples across various time-invariant characteristics.
We also include brokerage fixed effects and firm fixed effects, which help us determine whether the observed effects are driven by systematic time-invariant differences between the brokerage houses covering particular companies and the companies themselves. These regressions ought to provide answers similar to the DID approach, except that we can control for additional sources of heterogeneity. We estimate the model as a pooled (panel) regression, calculating standard errors by clustering at the


merger level. This approach addresses the concern that the errors, conditional on the independent variables, are correlated within merger groupings (e.g., Moulton [1986]). One reason this may occur is that the bias arising in one company may also naturally arise in another company covered by the same house, because the broker tends to cover stocks with similar bias pressures.9

The results for the effect on bias obtained using the regression approach in equation (3) are presented in Table VII. The first column shows the results using mean bias and the second column the results for median bias. In the first column, the coefficient on MERGE × AFFECTED is 0.0021, significant at the 10% level. The coefficient increases slightly to 0.0024 for median bias and is significant at the 5% level. Hence, the results in this table are consistent with those using the DID estimator, though the estimates are somewhat larger.

Further, we account for the possibility that the bias change we capture results from differences in the timeliness of forecasts issued premerger compared with postmerger. In particular, empirical evidence suggests that analyst bias is more pronounced the farther out the forecast; indeed, analysts tend to undershoot the earnings number for forecasts issued near the earnings date. We first document that there is no difference in the timeliness of forecasts issued premerger as compared with postmerger. In addition, we include in our regression model a control variable, recency (REC), that measures the average distance of the forecast from the date for which the forecast is made. The results, presented in columns (3) and (4) of Table VII, show that controlling for forecast timing does not qualitatively affect our results.
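An interaction specification of the kind in equation (3), with standard errors clustered at the merger level, can be sketched in statsmodels. The data below are simulated (coefficients, sample size, and column names are illustrative, not the paper's):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000
# Simulated stock-level data: merge = postmerger indicator, affected =
# treatment indicator, merger_id = merger grouping used for clustering.
df = pd.DataFrame({
    "merge": rng.integers(0, 2, n),
    "affected": rng.integers(0, 2, n),
    "merger_id": rng.integers(0, 15, n),
})
df["bias"] = (0.0005 * df["merge"] - 0.0019 * df["affected"]
              + 0.0021 * df["merge"] * df["affected"]
              + rng.normal(0, 0.002, n))

# OLS with the MERGE x AFFECTED interaction; standard errors are
# clustered at the merger level, as in Table VII.
fit = smf.ols("bias ~ merge * affected", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["merger_id"]})
print(fit.params["merge:affected"])
```

The `merge * affected` formula expands to the two main effects plus their interaction, so the coefficient of interest corresponds to β3.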
In this paper, we focus on annual earnings forecasts because these are the key numbers the market looks to, and every analyst must submit such a forecast. For completeness, we also examine how long-term growth forecasts and stock recommendations change for the treatment sample relative to the control sample around these mergers. One downside is that the data here are sparser, as analysts do not issue as many timely growth forecasts or recommendations. Moreover, we cannot measure bias

9. We have also considered other dimensions of clustering: by industry, by stock, by time, and by time and industry. All of them produced standard errors lower than the ones we report.


TABLE VII
CHANGE IN FORECAST BIAS: REGRESSION EVIDENCE

                           Mean BIAS    Median BIAS  Mean BIAS    Median BIAS
                           (1)          (2)          (3)          (4)
MERGE_i                    0.0005       0.0005       0.0005       0.0006
                           (0.0008)     (0.0008)     (0.0008)     (0.0008)
AFFECTED_i                 −0.0019∗∗    −0.0019∗∗    −0.0019∗∗∗   −0.0019∗∗∗
                           (0.0006)     (0.0007)     (0.0006)     (0.0006)
MERGE_i × AFFECTED_i       0.0021∗      0.0024∗∗     0.0021∗      0.0024∗∗
                           (0.0012)     (0.0012)     (0.0012)     (0.0012)
LNSIZE_i,t−1               0.0038∗∗∗    0.0037∗∗∗    0.0038∗∗∗    0.0037∗∗∗
                           (0.0010)     (0.0010)     (0.0009)     (0.0010)
RETANN_i,t−1               0.0005       0.0004       0.0005       0.0004
                           (0.0017)     (0.0017)     (0.0017)     (0.0017)
LNBM_i,t−1                 −0.0037      0.0001       −0.0037      0.0001
                           (0.0073)     (0.0074)     (0.0073)     (0.0074)
COVERAGE_i,t−1             0.0001       0.0001       0.0001       0.0001
                           (0.0001)     (0.0001)     (0.0001)     (0.0001)
SIGMA_i,t−1                0.0001∗∗     0.0000       0.0001∗∗     0.0000
                           (0.0000)     (0.0000)     (0.0000)     (0.0000)
VOLROE_i,t−1               0.0000       0.0000       0.0000       0.0000
                           (0.0000)     (0.0000)     (0.0000)     (0.0000)
PROFIT_i,t−1               0.0629∗∗∗    0.0620∗∗∗    0.0630∗∗∗    0.0621∗∗∗
                           (0.0052)     (0.0052)     (0.0052)     (0.0052)
SP500_i,t−1                0.0032       0.0039       0.0032       0.0039
                           (0.0031)     (0.0037)     (0.0031)     (0.0037)
REC_i,t−1                                            −0.0000      −0.0000
                                                     (0.0000)     (0.0000)
Merger fixed effects       Yes          Yes          Yes          Yes
Industry fixed effects     Yes          Yes          Yes          Yes
Brokerage fixed effects    Yes          Yes          Yes          Yes
Firm fixed effects         Yes          Yes          Yes          Yes
Observations               57,005       57,005       57,005       57,005

Notes. The dependent variable is forecast bias (BIAS), defined as the difference between forecasted earnings and actual earnings, adjusted for the past year's stock price. For each merger, we consider a one-year window prior to merger (pre-event window) and a one-year window after the merger (postevent window). We construct an indicator variable (MERGE) equal to one for the postevent period and zero for the pre-event period. For each merger window, we assign an indicator variable (AFFECTED) equal to one for each stock covered by both merging brokerage houses (treatment sample) and zero otherwise. LNSIZE is the natural logarithm of the market cap of the stock; SIGMA_it is the variance of daily (simple, raw) returns of stock i during year t; RETANN is the annual return on the stock; LNBM is the natural logarithm of the book-to-market ratio; COVERAGE denotes the number of analysts tracking the stock. To measure the volatility of ROE (VOLROE), we estimate an AR(1) model for each stock's ROE using a ten-year series of the company's valid annual ROEs. ROE_it is firm i's return on equity in year t, calculated as the ratio of earnings in year t over the book value of equity. VOLROE is the variance of the residuals from this regression. PROFIT_it is the profitability of company i at the end of year t, defined as operating income over book value of assets. SP500 is an indicator variable equal to one if a stock is included in the S&P 500 index. REC_it is the recency measure of the forecast, measured as the average distance between the analyst forecast and the earnings report. All regressions include three-digit SIC industry fixed effects, merger fixed effects, brokerage fixed effects, and firm fixed effects. We report results based on both mean and median bias. Standard errors (in parentheses) are clustered at the merger groupings. ∗∗∗, ∗∗, ∗ 1%, 5%, and 10% statistical significance.


TABLE VIII
CHANGE IN ALTERNATIVE FORECAST BIAS MEASURES: DID ESTIMATOR

                                            Mean BIAS (1)   Median BIAS (2)
Panel A: Long-term growth
DID estimator (SIZE/BM/RET/NOAN-matched)    0.553∗∗∗        0.352∗
                                            (0.202)         (0.212)
Panel B: Analyst recommendations
DID estimator (SIZE/BM/RET/NOAN-matched)    0.0501          0.0902∗
                                            (0.0412)        (0.0556)

Notes. We measure analyst forecast bias (BIAS_jt) using two different measures: the forecast of long-term growth of analyst j at time t (Panel A), and analyst j's stock recommendation at time t (Panel B). For each analyst, the recommendation variable is ranked from 1 to 5, where 1 is strong sell, 2 is sell, 3 is hold, 4 is buy, and 5 is strong buy. The consensus bias is expressed as a mean or median bias among all analysts covering a particular stock. For all mergers, we split the sample of stocks into those covered by both merging brokerage houses (treatment sample) and those not covered by both houses (control sample). We also divide stocks into premerger period and postmerger period (one-year window for each period). For each period we further construct benchmark portfolios using the control sample based on stocks' size (SIZE), book-to-market ratio (BM), average past year's returns (RET), and analyst coverage (NOAN). Our benchmark assignment involves three portfolios in each category. Each stock in the treatment sample is then assigned to its own benchmark portfolio (SIZE/BM/RET/NOAN-matched). Next, for each period, we calculate the cross-sectional average of the differences in analyst forecast bias across all stocks in the treatment sample and their respective benchmarks. Finally, we calculate the difference in differences between postevent period and pre-event period (DID estimator). Our sample excludes observations with stock prices lower than $5 and those for which the absolute difference between forecast value and the true earnings exceeds $10. Standard errors (in parentheses) are clustered at the merger groupings. ∗∗∗, ∗∗, ∗ 1%, 5%, and 10% statistical significance.

in the same way, because there are no realized counterparts to compare these forecasts to. However, we can gauge the extent to which the average long-term forecast or recommendation changes across the merger date for our treatment sample (where data are available) compared with the control group. To the extent that these mergers reduce competition, we expect long-term percentage growth forecasts to be higher after the merger and recommendations to be more positive.

The results for long-term growth forecasts and recommendations are in Table VIII. Panel A reports the results for long-term growth forecasts (the percentage long-term growth in earnings). Using the most conservative benchmark, long-term growth forecasts increase by 55 basis points after the merger using mean forecasts, and by 35 basis points using median forecasts. The mean long-term growth forecast in the treatment sample is 14%, with a standard deviation of 6%. So a one-analyst drop in coverage in our treatment sample results in an increase in the mean long-term growth forecast of about 9% of a standard deviation of these forecasts. This is both an economically and a statistically significant effect.


Panel B reports the results using recommendations. Recommendations are given in five rankings: strong sell, sell, hold, buy, and strong buy. We convert these into a score of 1 for strong sell, 2 for sell, 3 for hold, 4 for buy, and 5 for strong buy. We then take the mean and median of these recommendation scores and examine how they vary for the treatment sample and the control group across the merger date. Using again the most conservative benchmark, the merger event is associated with an increase in the average recommendation score for the treatment sample of 0.05 using the mean score and 0.09 using the median score. The result using the mean score is not statistically significant, but the result using the median score is significant at the 10% level. Both estimates, however, imply sizable economic effects. The mean score for the treatment sample is 3.87, with a standard deviation of 0.44. Hence, a one-analyst drop in coverage leads to about a 20% (10%) increase in the median (mean) recommendation score as a fraction of the standard deviation of these recommendations.

In sum, we conclude that our baseline results based on annual forecasts are robust to different measures of bias. Moreover, in economic magnitude, they are half as large as the alternative effects we document above, and thus they constitute a lower bound on the effect of competition on bias. One explanation is that long-term growth forecasts and recommendations may be more difficult to verify and thus subject to stronger competition effects.

VI.D. Additional Analysis

In this section, we bring additional evidence to bear on one of the mechanisms behind the competition effect in the data.10 Consider the independence-rationale mechanism in Gentzkow and Shapiro, in which one independent or honest analyst can discipline her peers and subvert attempts to suppress information. Now imagine a firm covered by many analysts.
This firm is less likely to try to bribe analysts to suppress information, because the likelihood of drawing an independent or honest analyst is high. In contrast, a firm with only a couple of analysts covering it is more likely to attempt to influence its few analysts, because the payoffs for doing so are higher. One can measure attempts

10. We would like to thank an anonymous referee for several suggestions on addressing this issue.


by the firm to influence analysts by comparing the incentives for optimism bias faced by analysts who cover stocks with low analyst coverage versus those who cover stocks with high analyst coverage. By our logic, we expect stronger incentives for optimism bias among analysts covering stocks with little initial coverage, or competition, to begin with. Hence, our focus in this section is to measure how the incentives for optimism bias faced by analysts change with the degree of competition.

Here we build on the work of Hong and Kubik (2003), who measure the implicit incentives of analysts for bias by examining how career outcomes depend on optimism bias. They document strong evidence that optimism is correlated with subsequent career outcomes: optimism increases the chances of being promoted, whereas pessimism increases the chances of being demoted. Our twist, building on their framework, is to examine whether, in the cross section, the subsequent career outcomes of analysts who cover stocks followed by fewer analysts are more strongly related to the degree of their bias. We interpret this as firms with little analyst coverage attempting to influence analysts by providing them incentives through their brokerage houses. To this end, using analysts as the unit of observation, we estimate the following linear probability models:

(4a) Promotion_{i,t+1} = α + β1 Bias_{it} + β2 Coverage_{it} + β3 Bias_{it} × Coverage_{it} + β4 Controls_{it} + ε_{i,t+1},

(4b) Demotion_{i,t+1} = α + β1 Bias_{it} + β2 Coverage_{it} + β3 Bias_{it} × Coverage_{it} + β4 Controls_{it} + ε_{i,t+1}.

Our coefficient of interest is β3. Following Hong and Kubik (2003), Promotion_{i,t+1} equals one if analyst i moves to a brokerage house with more analysts, and zero otherwise; Demotion_{i,t+1} equals one if analyst i moves to a brokerage house with fewer analysts, and zero otherwise. Controls is a vector of controls including forecast accuracy, the natural logarithm of the analyst's experience, and the size of the brokerage house. We also include year fixed effects, broker fixed effects, and analyst fixed effects. We estimate the model as a pooled (panel) regression, calculating standard errors by clustering at the analyst level.

An important control in the regression model is forecast accuracy. To construct our measure of accuracy, we first calculate an analyst's forecast error, defined as the absolute difference between his or her forecast and the actual EPS of firm i at time t.


We express the difference as a percentage of the previous year's stock price. We then follow the methodology of Hong and Kubik (2003) and construct a measure of relative forecast accuracy. Specifically, we first sort the analysts covering a particular stock in a given year by their forecast errors. We then assign ranks based on this sorting: the best analyst (the one with the lowest forecast error) receives the first rank for that stock, the second-best analyst receives the second rank, and so on, until the worst analyst receives the highest rank. If two or more analysts are equally accurate, we assign all of them the midpoint of the ranks they jointly occupy. Finally, we scale each analyst's rank for a firm by the number of analysts covering that firm.

Following Hong and Kubik (2003), who show that analyst accuracy predicts career outcomes in a nonlinear fashion, we define two measures of accuracy: High Accuracy, an indicator variable that equals one if an analyst's error falls into the highest decile of the distribution of the accuracy measure, and zero otherwise; and Low Accuracy, an indicator variable that equals one if the analyst's error falls into the lowest decile of the distribution of the accuracy measure, and zero otherwise. In the regressions with promotion as the dependent variable, the indicator for high accuracy is used as a control, whereas the indicator for low accuracy is used as a control when demotion is the dependent variable. A more symmetric specification, in which we include a set of dummies for the accuracy deciles along with their interactions with coverage as controls for both promotions and demotions, yields identical results (available from the authors).

We present the results from estimating equation (4) in Table IX. Columns (1) and (2) show the results for promotion, and columns (3) and (4) for demotion.
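The relative-accuracy ranking described above (midpoint ranks for ties, scaled by the number of analysts covering the stock) is straightforward in pandas. A toy sketch with hypothetical data:

```python
import pandas as pd

# Toy data: absolute forecast errors for analysts covering two stocks
# in a given year. Smaller error = more accurate.
df = pd.DataFrame({
    "stock":   ["X", "X", "X", "X", "Y", "Y"],
    "analyst": [1, 2, 3, 4, 1, 5],
    "error":   [0.01, 0.03, 0.03, 0.08, 0.02, 0.05],
})

grp = df.groupby("stock")["error"]
df["rank"] = grp.rank(method="average")                   # ties get the midpoint rank
df["scaled_rank"] = df["rank"] / grp.transform("count")   # scale by coverage
print(df[["stock", "analyst", "scaled_rank"]])
```

For stock X the tied analysts 2 and 3 share rank 2.5 and so receive a scaled rank of 0.625, while the most accurate analyst receives 0.25.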
In columns (1) and (3), we replicate the results in Hong and Kubik (2003) and show that bias positively affects the probability of being promoted and negatively affects the probability of being demoted. More important, consistent with the hypothesis that competition affects incentives for bias, we find that bias raises the probability of promotion more strongly in environments with less competition. This result is statistically significant at the 1% level. We find a qualitatively similar result for the probability of being demoted, significant at the 10% level. These results support the independence-rationale mechanism behind the disciplining effect of competition on bias.


TABLE IX
INCENTIVES AND CAREER OUTCOMES

                             Promotion               Demotion
                             (1)         (2)         (3)         (4)
BIAS                         0.0106∗     0.0448∗∗∗   −0.0036     −0.0065
                             (0.0067)    (0.0136)    (0.0074)    (0.0092)
HIGH ACCURACY                0.0082∗∗    0.0183∗∗
                             (0.0040)    (0.0080)
LOW ACCURACY                                         0.0262∗∗∗   0.0233∗∗
                                                     (0.0062)    (0.0098)
BIAS × COVERAGE                          −0.0019∗∗∗              0.0005∗
                                         (0.0006)                (0.0003)
HIGH ACCURACY × COVERAGE                 −0.0006
                                         (0.0004)
LOW ACCURACY × COVERAGE                                          0.0001
                                                                 (0.0005)
COVERAGE                     −0.0004∗    0.0002      0.0004      0.0002
                             (0.0002)    (0.0003)    (0.0003)    (0.0003)
EXPERIENCE                   0.0262∗∗∗   0.0265∗∗∗   0.0067      0.0065
                             (0.0052)    (0.0052)    (0.0043)    (0.0043)
BROKERAGE SIZE               −0.0016∗∗∗  −0.0016∗∗∗  0.0008∗∗∗   0.0008∗∗∗
                             (0.0003)    (0.0003)    (0.0002)    (0.0002)
Year fixed effects           Yes         Yes         Yes         Yes
Brokerage fixed effects      Yes         Yes         Yes         Yes
Analyst fixed effects        Yes         Yes         Yes         Yes
Observations                 45,770      45,770      45,770      45,770

Notes. The dependent variables are promotion (PROMOTION), defined as an indicator variable equal to one when an analyst moves to a larger brokerage house, and zero otherwise; and demotion (DEMOTION), defined as an indicator variable equal to one when an analyst moves to a smaller brokerage house, and zero otherwise. BIAS is forecast bias, defined as the difference between forecasted earnings and actual earnings, adjusted for the past year's stock price. COVERAGE denotes the number of analysts tracking the stock. HIGH ACCURACY is an indicator variable that equals one if an analyst's error falls into the highest decile of the distribution of the accuracy measure, and zero otherwise; LOW ACCURACY is an indicator variable that equals one if the analyst's error falls into the lowest decile of the distribution of the accuracy measure, and zero otherwise. EXPERIENCE is the natural logarithm of the number of years that the analyst has been employed in the brokerage house. BROKERAGE SIZE is the number of analysts employed by the brokerage house. This table includes an interaction term between BIAS and COVERAGE. All regressions include year fixed effects, brokerage fixed effects, and analyst fixed effects. Standard errors (in parentheses) are clustered at the analyst groupings. ∗∗∗, ∗∗, ∗ 1%, 5%, and 10% statistical significance.

Another key part of the independence-rationale mechanism through which we think competition affects bias is that independent analysts keep other analysts honest. To this end, one can assess the incentives for being biased by examining the penalty that competition imposes on an analyst who is contradicted by other analysts. A reasonable hypothesis is that the impact of bias on the probability of being promoted is lower for such an analyst the more she is contradicted. To evaluate this hypothesis, one can

COMPETITION AND BIAS

1721

define contradiction by other analysts as the average across stocks of the absolute difference between an analyst’s bias and the average bias of all other analysts, calculated for each stock. We find mixed evidence in support of this hypothesis. The importance of bias for an analyst’s promotion decreases with the degree to which she is contradicted, and this result is significant at the 5% level. However, being contradicted by other analysts does not significantly change the probability of being demoted. We omit these results for brevity. Nonetheless, this set of findings, along with the findings on how incentives for analyst optimism bias vary with competition, provides some comforting support for the independence-rationale mechanism.

Finally, even though it might seem ideal to implement the above ideas in the context of our merger experiment, this task proves quite difficult empirically. The main problem is statistical power: because we do not observe many career changes for each analyst around merger events, it is very difficult to estimate the incentives for bias separately before and after the merger. Hence, we use the approach that is most appealing statistically.

VII. CONCLUSIONS

We attempt to measure the effect of competition on bias in the context of analyst earnings forecasts, which are known to be excessively optimistic because of conflicts of interest. Using cross-sectional regressions, we find that stocks with more analyst coverage, and presumably more competition, have less biased forecasts on average. However, these OLS estimates are biased, because analyst coverage is endogenous. We therefore propose a natural experiment for competition: mergers of brokerage houses, which result in the firing of analysts because of redundancy and other reasons, including culture clash and general merger turmoil.
We use this decrease in analyst coverage for stocks covered by both merging houses before the merger (the treatment sample) to measure the causal effect of competition on bias. We find that the treatment sample simultaneously experiences a decrease in analyst coverage and an increase in optimism bias the year after the merger relative to a control group of stocks. Our findings suggest that competition reduces analyst optimism bias. Moreover, the economic effect from our estimates is much larger than that from the OLS estimates.

1722

QUARTERLY JOURNAL OF ECONOMICS

Our findings have important welfare implications. Notably, a number of studies find that retail investors, in contrast to institutional investors, cannot adjust for (i.e., debias) the optimism of analysts, and hence these optimistic recommendations affect stock prices (e.g., Michaely and Womack [1999]; Malmendier and Shanthikumar [2007]). One implication of our findings is that more competition can help protect retail investors because it tends to lower the optimism bias of analysts.

Finally, our natural experiment for analyst coverage can also be useful for thinking about the determinants of stock prices. A large literature in finance and accounting has tried to pin down whether analyst coverage increases stock prices. These studies are typically subject to endogeneity bias, because analysts tend to cover high-priced, high-performing, or large stocks for a variety of reasons; in other words, the causality might be reversed. Our natural experiment can hence be used to identify the causal effect of coverage on stock prices. Recent interesting research in the spirit of our experiment is that of Kelly and Ljungqvist (2007), who use closures of brokerage houses as a source of exogenous variation in coverage. We anticipate more exciting work along these lines.

APPENDIX: MERGERS INCLUDED IN THE SAMPLE (SORTED BY DATE)

No.  Merger date  Target house                      Bidder house
1    9/10/1984    Becker Paribas                    Merrill Lynch & Co., Inc.
2    10/31/1988   Butcher & Co., Inc.               Wheat First Securities Inc (WF)
3    12/31/1994   Kidder Peabody & Co.              PaineWebber Group, Inc.
4    5/31/1997    Dean Witter Discover & Co.        Morgan Stanley Group, Inc.
5    11/28/1997   Salomon Brothers                  Smith Barney
6    1/9/1998     Principal Financial Securities    EVEREN Capital Corp.
7    2/17/1998    Jensen Securities Co.             DA Davidson & Co.
8    4/6/1998     Wessels Arnold & Henderson LLC    Dain Rauscher Corp.
9    10/1/1999    EVEREN Capital Corp.              First Union Corp., Charlotte, NC
10   6/12/2000    JC Bradford & Co.                 PaineWebber Group, Inc.
11   10/15/2000   Donaldson Lufkin & Jenrette       CSFB
12   12/10/2000   Paine Webber                      UBS Warburg Dillon Read
13   12/31/2000   JP Morgan                         Chase Manhattan
14   9/18/2001    Josephthal Lyon & Ross            Fahnestock & Co.
15   3/22/2005    Parker/Hunter Inc                 Janney Montgomery Scott LLC

[The original appendix also reports each house’s IBES identifier, industry description (investment bank, securities brokerage firm, etc.), and SIC industry code (6211, security brokers and dealers, for all houses except two bidders coded 6021 and 6799); these columns could not be reliably realigned from the extraction.]

PRINCETON UNIVERSITY AND NATIONAL BUREAU OF ECONOMIC RESEARCH
NEW YORK UNIVERSITY AND NATIONAL BUREAU OF ECONOMIC RESEARCH

REFERENCES

Abarbanell, Jeffery S., “Do Analysts’ Earnings Forecasts Incorporate Information in Prior Stock Price Changes?” Journal of Accounting and Economics, 14 (1991), 147–165.
Besley, Timothy, and Andrea Prat, “Handcuffs for the Grabbing Hand? The Role of the Media in Political Accountability,” American Economic Review, 96 (2006), 720–736.
Brown, Phillip, George Foster, and Eric Noreen, Security Analyst Multi-year Earnings Forecasts and the Capital Market (Sarasota, FL: American Accounting Association, 1985).
Chopra, Vijay K., “Why So Much Error in Analysts’ Earnings Forecasts?” Financial Analysts Journal, 54 (1998), 30–37.
Dechow, Patricia, Amy Hutton, and Richard G. Sloan, “The Relation between Affiliated Analysts’ Long-Term Earnings Forecasts and Stock Price Performance Following Equity Offerings,” Contemporary Accounting Research, 17 (1999), 1–32.
Dreman, David, and Michael Berry, “Analyst Forecasting Errors and Their Implications for Security Analysis,” Financial Analysts Journal, 51 (1995), 30–42.
Dugar, Abhijeet, and Siva Nathan, “The Effect of Investment Banking Relationships on Financial Analysts’ Earnings Forecasts and Investment Recommendations,” Contemporary Accounting Research, 12 (1995), 131–160.
Fang, Lily, and Ayako Yasuda, “The Effectiveness of Reputation as a Disciplinary Mechanism in Sell-Side Research,” Review of Financial Studies, 22 (2009), 3735–3777.
Gentzkow, Matthew, Edward L. Glaeser, and Claudia Goldin, “The Rise of the Fourth Estate: How Newspapers Became Informative and Why It Mattered,” in Corruption and Reform: Lessons from America’s History, Edward L. Glaeser and Claudia Goldin, eds. (Cambridge, MA: National Bureau of Economic Research, 2006).
Gentzkow, Matthew, and Jesse M. Shapiro, “Media Bias and Reputation,” Journal of Political Economy, 114 (2006), 280–316.
——, “Competition and Truth in the Market for News,” Journal of Economic Perspectives, 22 (2008), 133–154.
Hong, Harrison, and Jeffrey D. Kubik, “Analyzing the Analysts: Career Concerns and Biased Earnings Forecasts,” Journal of Finance, 58 (2003), 313–351.
Kelly, Bryan, and Alexander Ljungqvist, “The Value of Research,” Stern NYU Working Paper, 2007.
Laster, David, Paul Bennett, and In Sun Geoum, “Rational Bias in Macroeconomic Forecasts,” Quarterly Journal of Economics, 114 (1999), 293–318.
Lim, Terence, “Rationality and Analysts’ Forecast Bias,” Journal of Finance, 56 (2001), 369–385.
Lin, Hsiou-wei, and Maureen F. McNichols, “Underwriting Relationships, Analysts’ Earnings Forecasts and Investment Recommendations,” Journal of Accounting and Economics, 25 (1998), 101–127.
Malmendier, Ulrike, and Devin Shanthikumar, “Are Small Investors Naïve about Incentives?” Journal of Financial Economics, 85 (2007), 457–489.
McNichols, Maureen, and Patricia C. O’Brien, “Self-Selection and Analyst Coverage,” Journal of Accounting Research, 35 (1997), Supplement, 167–199.
Michaely, Roni, and Kent L. Womack, “Conflict of Interest and the Credibility of Underwriter Analyst Recommendations,” Review of Financial Studies, 12 (1999), 653–686.
Moulton, Brent, “Random Group Effects and the Precision of Regression Estimates,” Journal of Econometrics, 32 (1986), 385–397.
Mullainathan, Sendhil, and Andrei Shleifer, “The Market for News,” American Economic Review, 95 (2005), 1031–1053.
Ottaviani, Marco, and Peter N. Sørensen, “Forecasting and Rank-Order Contests,” Kellogg Working Paper, 2005.
Sheather, Simon J., and Chris M. Jones, “A Reliable Data-Based Bandwidth Selection Method for Kernel Density Estimation,” Journal of the Royal Statistical Society B, 53 (1991), 683–690.
Stickel, Scott E., “Predicting Individual Analyst Earnings Forecasts,” Journal of Accounting Research, 28 (1990), 409–417.
Wu, Joanna S., and Amy Zang, “What Determines Financial Analysts’ Career Outcomes during Mergers?” Journal of Accounting and Economics, 47 (2009), 59–86.

IMPORTED INTERMEDIATE INPUTS AND DOMESTIC PRODUCT GROWTH: EVIDENCE FROM INDIA∗

PINELOPI KOUJIANOU GOLDBERG
AMIT KUMAR KHANDELWAL
NINA PAVCNIK
PETIA TOPALOVA

New goods play a central role in many trade and growth models. We use detailed trade and firm-level data from India to investigate the relationship between declines in trade costs, imports of intermediate inputs, and domestic firm product scope. We estimate substantial gains from trade through access to new imported inputs. Moreover, we find that lower input tariffs account on average for 31% of the new products introduced by domestic firms. This effect is driven to a large extent by increased firm access to new input varieties that were unavailable prior to the trade liberalization.

I. INTRODUCTION

New intermediate inputs play a central role in many trade and growth models. These models predict that firms benefit from international trade through increased access to previously unavailable inputs, and this process generates static gains from trade. Access to these new imported inputs in turn enables firms to expand their domestic product scope through the introduction of new varieties, which generates dynamic gains from trade. Despite the prominence of these models, we have surprisingly little evidence to date on the relevance of the underlying microeconomic mechanisms. In this paper we take a step toward bridging the gap between theory and evidence by examining the relationship between new imported inputs and the introduction of new products by domestic firms in a large and fast-growing developing economy: India. During the 1990s, India experienced an explosion in the number of products manufactured by Indian firms, and these new products

∗ We thank Matthew Flagge, Andrew Kaminski, Alexander Mcquoid, and Michael Sloan Rossiter for excellent research assistance and Andy Bernard, N.S. Mohanram, Marc Melitz, Steve Redding, Andres Rodriguez-Clare, Jagadeesh Sivadasan, Peter Schott, David Weinstein, the referees, the Editor, and several seminar participants for useful comments. We are particularly grateful to Christian Broda and David Weinstein for making their substitution elasticity estimates available to us. Goldberg also thanks the Center for Economic Policy Studies at Princeton for financial support. All remaining errors are our own. The views expressed in this paper are those of the authors and should not be attributed to the International Monetary Fund, its Executive Board, or its management.

© 2010 by the President and Fellows of Harvard College and the Massachusetts Institute of Technology. The Quarterly Journal of Economics, November 2010


accounted for a quarter of India’s manufacturing output growth (Goldberg et al. [henceforth GKPT] 2010a). During the same period, India also experienced a surge in imported inputs, with more than two-thirds of intermediate import growth occurring in new varieties. The goal of this paper is to determine whether the increase in Indian firms’ access to new imported inputs can explain the introduction of new products into the domestic economy by these firms.

One challenge in addressing this question is the potential reverse causality between imports of inputs and new domestic products. For instance, firms may decide to introduce new products for reasons unrelated to international trade. Once the manufacture of such products begins, the demand for imported inputs, both existing and new varieties, may increase. This would lead to a classic reverse causality problem: the growth of domestic products could lead to the import of new varieties, and not vice versa. To identify the relationship between changes in imports of intermediates and the introduction of new products by domestic firms, we exploit the particular nature of India’s trade reform. The reform reduced input tariffs differentially across sectors and was not subject to the usual political economy pressures because it was unanticipated by Indian firms.

Our analysis proceeds in two steps. We first offer strong reduced-form evidence that declines in input tariffs resulted in an expansion of firms’ product scope: industries that experienced the greatest declines in input tariffs contributed relatively more to the introduction of new products by domestic firms.1 The relationship is also economically significant: lower input tariffs account on average for 31% of the observed increase in firms’ product scope over this period. Moreover, the relationship is robust to specifications that control for preexisting industry- and firm-specific trends.
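As a rough illustration of what such a reduced-form specification involves, the sketch below regresses a firm outcome (e.g., log product scope) on input tariffs after removing firm and year fixed effects by demeaning. This is our own simplified illustration, not the authors’ code: the paper’s specifications include additional controls and clustered standard errors, and one-pass double demeaning is exact only in balanced panels.

```python
import numpy as np

def two_way_fe_slope(y, x, firm, year):
    """Slope from a two-way fixed-effects regression of y on x,
    obtained by demeaning within firms and then within years."""
    y = np.asarray(y, dtype=float)
    x = np.asarray(x, dtype=float)

    def demean(v, groups):
        out = v.copy()
        for g in np.unique(groups):
            mask = groups == g
            out[mask] -= out[mask].mean()
        return out

    y_dm = demean(demean(y, np.asarray(firm)), np.asarray(year))
    x_dm = demean(demean(x, np.asarray(firm)), np.asarray(year))
    return float(x_dm @ y_dm / (x_dm @ x_dm))
```

In a balanced panel whose outcome is exactly additive in a firm effect, a year effect, and a tariff term, this recovers the tariff coefficient.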
We also find that lower input tariffs improved the performance of firms in other dimensions, including output, total factor productivity (TFP), and research and development (R&D) activity, findings consistent with the link between trade and growth. In order to investigate the channels through which input tariff liberalization affected domestic product growth in India, we

1. Recent theoretical work by Bernard, Redding, and Schott (2006), Nocke and Yeaple (2006), and Eckel and Neary (2009) shows that trade liberalization should lead firms to rationalize their product scope. These theoretical models focus on the effect of final-goods tariffs on output, whereas the analysis of this paper focuses on input tariffs and the role of intermediates.

IMPORTED INPUTS AND PRODUCT GROWTH

1729

then impose additional structure guided by the methods of Feenstra (1994) and Broda and Weinstein (2006) and use India’s Input–Output (IO) Table to construct exact input price indices for each sector. The exact input price index is composed of two parts: one that captures changes in the prices of existing inputs and one that quantifies the impact of new imported varieties on the exact price index. Thus, we can separate the changes in the exact input price indices faced by firms into a “price” and a “variety” channel. This methodology reveals substantial gains from trade through access to new imported input varieties: accounting for new imported varieties lowers the import price index for intermediate goods on average by an additional 4.7% per year relative to conventional gains through lower prices of existing imports.

We then relate the two components of the input price indices to changes in firm product scope. The results suggest an important role for the extensive margin of imported inputs: greater access to imported varieties increases firm scope. This relationship is robust to an instrumental variable strategy that accounts for the potential endogeneity of the input price indices, using input tariffs and the proximity of India’s trading partners as instruments. Hence, we conclude that input tariff liberalization contributed to domestic product growth not simply by making already available imported inputs cheaper but, more importantly, by relaxing the technological constraints facing domestic producers via access to new imported input varieties that were unavailable prior to the liberalization.2

These findings relate to two distinct, yet related, literatures. First, endogenous growth models, such as those developed by Romer (1987, 1990) and Rivera-Batiz and Romer (1991), emphasize the static and dynamic gains arising from the import of new varieties.
Not only do such varieties lead to productivity gains in the short and medium runs, but the resulting growth also fosters the creation of new domestic varieties that further contribute to growth. The first (static) source of gains has been addressed in the empirical literature before. Existing studies document a large expansion in new imported varieties (Feenstra 1994; Klenow and Rodriguez-Clare 1997; Broda and Weinstein 2006; Arkolakis et al. 2008), which, depending on the overall importance of new

2. The importance of increased access to imported inputs has been noted by Indian policy makers. In a recent speech, Rakesh Mohan, the managing director of the Indian Reserve Bank, argued that “trade liberalization and tariff reforms have provided increased access to Indian companies to the best inputs available globally at almost world prices” (Mohan 2008).


imported varieties in the total volume of trade, can generate substantial gains from trade (see, for example, Feenstra [1994] and Broda and Weinstein [2006]).3 Our evidence points to large static gains from trade because of increased access to imported inputs.

The second (dynamic) source of gains from trade has been empirically elusive, partly because data on the introduction of domestic varieties produced in each country have been difficult to obtain.4 The two studies that are closest to ours (Feenstra et al. 1999; Broda, Greenfield, and Weinstein 2006) resort to export data to overcome this difficulty. They use the fraction of the economy devoted to exports and industry-specific measures of export varieties as proxies for domestic R&D and domestic variety creation, respectively. The advantage of our data is that we directly observe the creation of new varieties by domestic firms. This enables us to link the creation of new domestic varieties to changes in imported inputs. In our framework, trade encourages the creation of new domestic varieties because Indian trade liberalization significantly reduces tariffs on imported inputs. This leads to imports of new varieties of intermediate products, which in turn enables the creation of new domestic varieties. Hence, new imported varieties of intermediate products go hand in hand in our context with new varieties of domestic products.

Our study also relates to the literature on the effects of trade liberalization on total factor productivity. Several theoretical papers have emphasized the importance of intermediate inputs for productivity growth (e.g., Ethier [1979, 1982], Romer [1987, 1990], Markusen [1989], Grossman and Helpman [1991]). Empirically, most recent studies have found imports of intermediates or declines in input tariffs to be associated with sizable productivity gains (see Halpern, Koren, and Szeidl [2006], Amiti and Konings [2007], and Kasahara and Rodrigue [2008]), with Muendler (2004) being an exception. Our findings are in line with the majority of the empirical literature on this subject, as we too document positive effects of input trade liberalization and imported intermediates. However, in contrast to earlier work, our main focus is not on TFP but rather on the domestic product margin.5 As noted by Erdem

3. Klenow and Rodriguez-Clare (1997) and Arkolakis et al. (2008) find small variety gains following the Costa Rican trade liberalization, which they attribute to the fact that the new varieties were imported in small quantities, thus contributing little to welfare.
4. Brambilla (2006) is an exception.
5. Nevertheless, we also provide evidence that conventionally measured TFP increases with input trade liberalization in our context. See also Topalova and Khandelwal (2011).


and Tybout (2003) and De Loecker (2007), a potential problem with the interpretation of the TFP findings is that the use of revenue data to calculate TFP makes it impossible to identify the effects of trade liberalization on physical efficiency separately from its effects on firm markups, product quality, and, in the case of multiproduct firms, the range of products produced by the firm. In light of this argument, one can interpret our findings as speaking to the effects of trade reform on one particular component of TFP that is clearly identified in our data: the range of products manufactured by the firm.6

The remainder of the paper is organized as follows. In Section II we provide a brief overview of the data we use in our analysis and the Indian trade liberalization of the 1990s. We next discuss the reduced-form evidence. Section III organizes our results in two subsections. In Section III.A, we provide descriptive evidence linking the expansion of the intermediate import extensive margin to tariff declines. In Section III.B, we provide reduced-form evidence that lower input tariffs caused firms to expand product scope, and we conduct a series of robustness checks. Although these regressions establish our main empirical findings, they are unable to inform our understanding of the particular channels that are at work. In Section IV, we therefore impose more structure and develop a framework that allows us to interpret the reduced-form results and identify the relevant mechanisms. Sections IV.A and IV.B present the framework and our identification assumptions; Sections IV.C and IV.D discuss the empirical implementation of the structural approach and our results, respectively. Section V concludes.

6. Exploring the relationship between the number of new products and TFP is beyond the scope of this analysis. The theoretical literature offers arguments for both a positive (Bernard, Redding, and Schott 2006) and a negative (Nocke and Yeaple 2007) relationship between these two variables. We note, however, that although the effect of new products on firm-level TFP may depend on the particular theoretical model one adopts, there is substantial empirical evidence that new product additions by domestic firms account for a sizable share of sales growth in several countries (Navarro 2008; Bernard, Redding, and Schott 2010; GKPT 2010a).

II. DATA AND POLICY BACKGROUND

II.A. Data Description

The firm-level data used in the analysis are constructed from the Prowess database, which is collected by the Centre for Monitoring the Indian Economy (CMIE). Prowess has important advantages over the Annual Survey of Industries (ASI), India’s manufacturing census, for our study. First, unlike the repeated cross section in the ASI, the Prowess data are a panel of firms, which enables us to track firm performance over time. Second, Prowess records detailed product-level information at the firm level and can track changes in firm scope over the sample. Finally, the data span the period of India’s trade liberalization from 1989 to 2003. Prowess is therefore particularly well suited for understanding how firms adjust their product lines over time in response to increased access to intermediate inputs.7

Prowess enables us to track firms’ product mix over time because Indian firms are required by the 1956 Companies Act to disclose product-level information on capacities, production, and sales in their annual reports. As discussed extensively in GKPT (2010a), several features of the database give us confidence in its quality. Product-level information is available for 85% of the manufacturing firms, which collectively account for more than 90% of Prowess’ manufacturing output and exports. More importantly, product-level sales comprise 99% of the (independently) reported manufacturing sales. We refer the reader to GKPT (2010a) for detailed summary statistics. Our database contains 2,927 manufacturing firms that report product-level information and span the period from 1989 to 1997.

We complement the product-level data with disaggregated information on India’s imports and tariffs. The tariff data, reported at the six-digit HS (HS6) level, are available from 1987 to 2001 and are obtained from Topalova and Khandelwal (2011). We use a concordance by Debroy and Santhanam (1993) to aggregate tariffs to the National Industrial Classification (NIC) level. Input tariffs, the key policy variable in this paper, are computed by running the industry-level tariffs through India’s

7. Prowess accounts for 60% to 70% of the economic activity in the organized industrial sector and comprises 75% of corporate taxes and 95% of excise duties collected by the Government of India (CMIE). Prowess is not well suited for studying firm entry and exit because firms are under no legal obligation to report to the data-collecting agency. However, because Prowess contains only relatively large Indian firms, entry and exit is not necessarily an important margin for understanding the process of adjustment to increased openness within this subset of the manufacturing sector. Very few firms (7%) exit from our sample during this period, and we observe no statistical difference in initial firm scope, output, TFP, and R&D activity between continuing and exiting firms. Using nationally representative data covering Indian plants, Sivadasan (2009) finds that reallocation across firms played a minor role in aggregate TFP gains following India’s reforms. Our analysis below relies on within-firm variation in firm outcomes, rather than across-firm variation.


input–output matrix for 1993–1994. For each industry, we create an input tariff as the weighted average of tariffs on the inputs used in the production of that industry’s final output. The weights are constructed as the input industry’s share of the output industry’s total output value. Formally, input tariffs are defined as τ^inp_qt = Σ_i α_iq τ_it, where α_iq is the value share of input i in industry q. For example, if a final good uses two intermediates with tariffs of 10% and 20% and value shares of 0.25 and 0.75, respectively, the input tariff for this good is 17.5%.8 The weights in the IO table are also used to construct the components of the input exact price index.

Official Indian import data are obtained from Tips Software Services. The data classify products at the eight-digit HS (HS8) level and record transactions for approximately 10,000 manufacturing products imported from 160 countries between 1987 and 2000. For the purposes of the descriptive analysis in Section III.A, we assign products according to their end use into two classifications: intermediate goods (basic, capital, intermediates) and final goods (consumer durables and nondurables).9 This classification is adopted from Nouroz’s (2001) classification of India’s IO matrix. The codes from the IO matrix are then matched to the four-digit HS (HS4) level following Nouroz (2001), which enables us to classify imports broadly into final and intermediate goods.
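The two constructions just described, the IO-weighted input tariff and the exact import price index with its “price” and “variety” components, can be sketched as follows. This is a stylized illustration under standard CES assumptions in the spirit of Feenstra (1994), not the authors’ code; the input names are hypothetical, and the numbers in the usage check reproduce only the worked 17.5% example from the text.

```python
import math

def input_tariff(input_shares, tariffs):
    """IO-weighted input tariff for an output industry:
    tau_inp = sum_i alpha_i * tau_i, where alpha_i is the value share
    of input i (nontradable inputs implicitly carry a zero tariff)."""
    return sum(share * tariffs.get(i, 0.0) for i, share in input_shares.items())

def exact_import_price_index(p0, p1, x0, x1, sigma):
    """Decompose a CES exact price index into a conventional 'price'
    part (Sato-Vartia index over common varieties) and a Feenstra
    'variety' part (lambda-ratio term). Returns (price, variety, exact).
    p0, p1 map variety -> price and x0, x1 map variety -> expenditure
    in the two periods; sigma > 1 is the elasticity of substitution."""
    common = set(p0) & set(p1)
    lam0 = sum(x0[v] for v in common) / sum(x0.values())
    lam1 = sum(x1[v] for v in common) / sum(x1.values())

    s0 = {v: x0[v] / sum(x0[u] for u in common) for v in common}
    s1 = {v: x1[v] / sum(x1[u] for u in common) for v in common}

    def logmean(a, b):  # logarithmic mean, handling a == b
        return a if a == b else (a - b) / (math.log(a) - math.log(b))

    w = {v: logmean(s1[v], s0[v]) for v in common}
    total = sum(w.values())
    price = math.exp(sum(w[v] / total * math.log(p1[v] / p0[v]) for v in common))
    variety = (lam1 / lam0) ** (1.0 / (sigma - 1.0))
    return price, variety, price * variety
```

With no entry or exit of varieties the variety term is 1, so the exact index reduces to the conventional one; new imported varieties push the period-1 common-variety share below that of period 0 and hence lower the exact index, which is the “variety” channel quantified in the text.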

II.B. India’s Trade Liberalization

India’s postindependence development strategy was one of national self-sufficiency and heavy government regulation of the economy. India’s trade regime was among the most restrictive in Asia, with high nominal tariffs and nontariff barriers. The emphasis on import substitution resulted in relatively rapid industrialization, the creation of domestic heavy industry, and an economy that was highly diversified for its level of development (Kochhar et al. 2006).

8. The IO table includes weights for manufacturing and nontradables (e.g., labor, electricity, utilities), but tariffs, of course, exist only for manufacturing. Therefore, the calculation of input tariffs implicitly assumes a zero tariff for nontradables. All of our regressions rely on changes in tariffs over time and not on cross-sectional comparisons.
9. An input for one industry may be an output for another industry, though for many products the most common use justifies this distinction (e.g., jewelry and clothing are usually considered final goods, whereas steel is considered an intermediate product).


In August 1991, in the aftermath of a balance-of-payments crisis, India launched a dramatic liberalization of the economy as part of an IMF adjustment program. An important part of this reform was to abandon the extremely restrictive trade policies.10 Average tariffs fell from more than 80% in 1990 to 39% by 1996, and nontariff barriers (NTBs) were reduced from 87% in 1987 to 45% in 1994 (Topalova and Khandelwal 2011). There were some differences in the magnitude of tariff changes (and especially of NTB changes) between final-goods and intermediate-goods industries, with NTBs declining at a later stage for consumer goods. Overall, the structure of industrial protection changed, as tariffs were brought to a more uniform level across sectors, reflecting the guidelines of the tariff reform spelled out in the IMF conditions (Chopra et al. 1995).

Several features of the trade reform are crucial to our study. First, the external crisis of 1991, which came as a surprise, opened the way for market-oriented reforms (Hasan, Mitra, and Ramaswamy 2007).11 The liberalization of trade policy was therefore unanticipated by firms in India. Moreover, the reforms were passed quickly, as a sort of “shock therapy,” with little debate or analysis, to avoid the inevitable political opposition (Goyal 1996). Industries with the highest tariffs received the largest tariff cuts, implying that both the average and the standard deviation of tariffs fell across industries. Consequently, although there was significant variation in the tariff changes across industries, Topalova and Khandelwal (2011) have shown that output and input tariff changes were uncorrelated with prereform firm and industry characteristics such as productivity, size, output growth during the 1980s, and capital intensity.12 The tariff liberalization does not appear to have been targeted toward specific industries and appears free of the usual political economy pressures.

India remained committed to further trade liberalization beyond the Eighth Plan (1992–1997). However, following an election in 1997, Topalova and Khandelwal (2011) find evidence that
India remained committed to further trade liberalization beyond the Eighth Plan (1992–1997). However, following an election in 1997, Topalova and Khandelwal (2011) find evidence that 10. The structural reforms of the early 1990s also included a stepped-up dismantling of the “license raj,” the extensive system of licensing requirements for establishing and expanding capacity in the manufacturing sector, which had been the cornerstone of India’s regulatory regime. See GKPT (2010a). 11. This crisis was in part triggered by the sudden increase in oil prices due to the Gulf War in 1990, the drop in remittances from Indian workers in the Middle East, and the political uncertainty surrounding the fall of a coalition government and the assassination of Rajiv Gandhi, which undermined investors’ confidence. 12. This finding is consistent with those of Gang and Pandey (1996), who argue that political and economic factors cannot explain tariff levels at the time of the reform.

IMPORTED INPUTS AND PRODUCT GROWTH


TABLE I
PREREFORM FIRM CHARACTERISTICS AND INPUT TARIFF CHANGES

                                   Products   Output    TFP       R&D
                                   (1)        (2)       (3)       (4)
1997–1992 input tariff change      0.180      −0.744    0.517     −1.531
                                   (0.308)    (0.745)   (0.507)   (1.341)
Observations                       713        712       614       667

Notes. The dependent variable in each column is the prereform (1989–1991) growth in the indicated firm-level outcome, regressed on the postreform (1992–1997) change in input tariffs. Column (1) uses the prereform firm-level change in the (log) number of products; columns (2)–(4) use the prereform changes in (log) firm output, TFP, and R&D expenditure. The number of observations varies across columns because the coverage of firm outcomes varies. Standard errors (in parentheses) clustered at the industry level. ∗ 10%, ∗∗ 5%, and ∗∗∗ 1% significance level.

tariffs under the Ninth Plan (1997–2002) changed in ways that were correlated with firm and industry performance in the preceding years. This indicates that, unlike the initial tariff changes following the reform, tariff changes after 1997 were subject to political influence. This concern leads us to restrict the analysis in this paper to the sample period 1989–1997. We extend Topalova and Khandelwal's (2011) analysis by providing additional evidence that the input tariff changes from 1992 to 1997 were uncorrelated with prereform changes in the firm performance measures that we consider in this paper. Column (1) of Table I regresses the prereform (1989–1991) growth in firm scope on the subsequent input tariff changes between 1992 and 1997. If the tariff changes had been influenced by lobbying pressures, or targeted toward specific industries based on prereform performance, we would expect a statistically significant correlation. However, the correlation is statistically insignificant, suggesting that the government did not take prereform trends in firm scope into account when cutting tariffs. Columns (2)–(4) of Table I report the correlations of the input tariff changes with the prereform growth in firm output, TFP, and R&D. As before, there is no statistically significant correlation between changes in these firm outcomes and input tariff changes. This table provides additional assurance that the tariff liberalization was unanticipated by firms.

III. REDUCED-FORM RESULTS

This section presents descriptive and reduced-form evidence on the relationship between tariff liberalization and product scope. Before we review the evidence, it is instructive to briefly


QUARTERLY JOURNAL OF ECONOMICS

explain the reasons we expect tariffs to affect the development of new products in the domestic market. Section IV provides a more formal analysis of specific channels. Suppose that the production technology for a product q in the final goods sector of the economy has the general form

(1)  Y_q = f(A, L, S, {X_1, ..., X_I}),

where Y denotes output, A is the product-specific productivity, and L and S are labor and nontradable inputs (e.g., electricity, water, warehousing). Each input vector X_i = {X_iD, X_iF} comprises domestic (X_iD) and imported (X_iF) varieties of input i. This production technology is general and for now does not commit us to any particular functional form. Suppose further that production of q has a fixed cost F_q. The firm chooses inputs optimally to maximize profits and produces product q as long as variable profits are greater than or equal to the fixed cost. Even without making any particular assumptions about market structure or functional forms, it is easy to see how a reduction in input tariffs would affect a firm's decision to introduce a new product. First, input tariff reductions lower the prices of existing imported inputs. The resulting increase in variable profits raises the likelihood that a firm can manufacture previously unprofitable products. Second, liberalization may lead to the import of new varieties (e.g., see Klenow and Rodriguez-Clare [1997]), thereby expanding the set of intermediate inputs available to the firm.13 The significance of this second effect depends on the particular form of the production technology, in particular on the substitutability between domestic and imported inputs, as well as on the substitutability between different varieties of imported intermediates. Suppose, for example, that at one extreme some of the imported intermediate inputs X_1F, ..., X_IF are essential, so that if one of these inputs falls to zero, product q cannot be produced.
Then the effect of trade liberalization on the introduction of new products is expected to be large, as it will relax technological constraints facing domestic firms.

13. The fixed costs of production may also decline with input tariff liberalization, which would also increase the likelihood that firms manufacture new products.

At the other extreme, if the new imported varieties were perfect substitutes for domestic, or previously imported, varieties, there would be no effect through the extensive


margin of imports. The importance of the extensive margin relative to the pure price effects of trade liberalization is therefore an empirical question. The reduced-form evidence we present in this section does not allow us to distinguish between these two channels. That is, even if we found that tariff liberalization led to an increase in domestically produced varieties, this increase could have resulted solely from a decline in the prices of existing imported inputs; the reform would then have operated only through price effects on existing imports. Nevertheless, the descriptive evidence we present here indicates an enormous contribution of the extensive margin to import growth, which suggests that the reform is unlikely to have operated solely through the price channel. In Section IV, we place additional structure on the firm’s production function in order to quantify the specific channels generating the reduced-form findings.
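The fixed-cost logic above can be made concrete with a stylized calculation: a product is introduced once variable profits, which rise as input tariffs fall, cover the fixed cost. All numbers below are assumptions chosen for illustration, not estimates from the paper.

```python
# Stylized sketch of the product-introduction margin behind equation (1):
# a firm adds product q only if variable profits cover the fixed cost Fq.
# Revenue, input cost, and fixed cost values are illustrative assumptions.

def variable_profit(input_tariff, revenue=100.0, input_cost=60.0):
    """Variable profit when imported-input costs scale with (1 + tariff)."""
    return revenue - input_cost * (1.0 + input_tariff)

def introduce(input_tariff, fixed_cost=10.0):
    """Introduce the product only if variable profit covers the fixed cost."""
    return variable_profit(input_tariff) >= fixed_cost

# At a prereform-style tariff (~80%) the product is unprofitable; after a
# cut to ~39% the same product clears the fixed-cost hurdle.
print(introduce(0.80), introduce(0.39))  # False True
```

The same comparison, run across many hypothetical products with heterogeneous fixed costs, is what generates an expansion of firm scope when input tariffs fall.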

III.A. Descriptive Evidence: Trade Liberalization and Import Data

Before analyzing the relationship between input tariff declines and firm scope, we first examine India's import data. We show that imports increased following the trade liberalization and decompose the margins of aggregate import adjustment during the 1990s. Next, we examine the impact of trade liberalization on the key trade variables in our empirical framework: total imports, imports of intermediates, unit values, and the number of imported varieties. The goal of this analysis is to show that the extensive product margin was an important component of import growth (especially for intermediates) and that trade liberalization affected these variables in the expected ways.

Import Decomposition. We begin by examining the growth of imports into India during the 1990s. Total import growth reflects the contribution of two margins: growth in HS6 products that existed in the previous period (intensive margin) and growth in products that did not exist in the previous period (extensive margin). Two striking features emerge from this decomposition, reported in Table II. The first is that India experienced a surge in overall imports; column (1) indicates that real imports


TABLE II
DECOMPOSITION OF IMPORT GROWTH, 1987–2000

                          Import             Extensive margin                Intensive margin
                          growth      Net    Product entry   Product exit    Net    Growing   Shrinking
Product classification     (1)        (2)        (3)             (4)         (5)      (6)        (7)
All products               130         84         84              0           45       84        −39
Intermediate products      227        153        153              0           74      116        −42
Final products              90         33         33              0           57       86        −29

Notes. Real import growth decomposed into the extensive and intensive margins between 1987 and 2000. Imports are deflated by the wholesale price index. Column (1) reports overall import growth. Columns (2) and (5) report the net contributions of the extensive and intensive margins, respectively. The extensive margin is growth in imports due to new six-digit HS codes not imported in 1987. The intensive margin measures import growth within products that India had imported in 1987. The gross contributions are reported in columns (3) and (4) for the extensive margin, and columns (6) and (7) for the intensive margin. The second and third rows decompose import growth for intermediate (basic, capital, and intermediate) and final (consumer durable and nondurable) products. The HS codes have been standardized to remove any issues due to changes in the Indian HS classification system.

(inclusive of tariffs) rose by 130% between 1987 and 2000.14 More interestingly, intermediate imports increased by 227%, whereas final goods imports increased by 90%. In other words, overall import growth was dominated by the increase in imported intermediate products.15 The second fact that emerges from Table II is that the relative contribution of the extensive margin to overall growth was substantially larger for intermediate imports. Intermediate products unavailable prior to the reform accounted for about 66% of the overall intermediate import growth, whereas the intensive margin accounted for the remaining third. Moreover, the net contribution of the extensive margin is driven entirely by gross product entry; very few products ceased to be imported over this period. In contrast, the relative importance of each margin in the final goods sectors is reversed: the extensive margin accounted for only 37% of the growth in imports, whereas the intensive margin contributed 63%. In GKPT (2010b), we provide evidence that the majority of the growth in the extensive margin is driven by imports from OECD countries, which presumably are relatively high-quality imports. Table II therefore suggests that imports increased substantially during our sample period and that this increase was largely driven by growth in the number of imported intermediate products.

14. Nominal imports, inclusive of tariffs, grew 516% over this period. Excluding tariffs, real and nominal import growth was 228% and 781%, respectively. The growth numbers excluding tariffs are higher because tariffs were very high prior to the reform.
15. As discussed above, we rely on the Nouroz (2001) classification of products into final and intermediate goods in this section only. The results in Section IV rely on input–output matrices to construct the input price indices.

Import Values, Prices, and Varieties. We next examine whether the expansion in trade noted in Table II was systematically related to the tariff reductions induced by India's trade liberalization. To summarize our findings: (a) lower tariffs led to an overall increase in import values, (b) lower tariffs resulted in lower unit values of existing product lines, and (c) lower tariffs led to an increase in imports of new varieties. Moreover, the expansion of varieties in response to tariff declines was particularly pronounced for intermediate products. We begin by examining the responsiveness of import values to tariffs by regressing the (log) import value (exclusive of tariffs) of an HS6 product on the HS6-level tariff,16 an HS6 product fixed effect, and year fixed effects, restricting the analysis to 1987–1997 (see Section II.B). We should emphasize that we interpret these regressions strictly as reduced-form regressions. In particular, unlike Klenow and Rodriguez-Clare (1997), we do not assume complete tariff pass-through to import prices, so the tariff coefficients in our regressions cannot be used to back out structural parameters.17

16. We lag the tariff measure one period in all specifications because the trade reform was implemented toward the end of 1991 (initiated in August 1991).
17. Incomplete pass-through can arise even with a CES utility function if the market structure is oligopolistic and/or nontraded local costs are present.

TABLE IIIa
IMPORT VALUES AND TARIFFS

                 All products   Intermediates   Final goods
                 (1)            (2)             (3)
Output tariff    −0.136∗∗∗      −0.117∗∗∗       −0.151∗∗
                 (0.035)        (0.044)         (0.076)
Year FEs         Yes            Yes             Yes
HS6 FEs          Yes            Yes             Yes
R2               .82            .82             .80
Observations     35,833         20,140          11,838

Notes. Coefficients on tariffs from product-level regressions of log (fob) import value on lagged output tariffs, HS6 product fixed effects, and year effects. An observation is an HS6 category–year. Column (1) pools across all sectors; columns (2) and (3) report coefficients for the intermediate and final goods, respectively. Tariffs are at the HS6 level and regressions are run from 1987 to 1997. Standard errors (in parentheses) clustered at the HS6 level. ∗ 10%, ∗∗ 5%, and ∗∗∗ 1% significance level.

Table IIIa reports the coefficient estimates on tariffs for all sectors (column (1)), intermediate sectors


(column (2)), and final goods sectors (column (3)). In all cases, declines in tariffs are associated with higher import values. This analysis therefore confirms that the trade reform played an important role in the expansion of imports documented in Table II. Traditional trade theory usually emphasizes the benefits from trade that occur through increased imports of existing products/varieties at lower prices. This channel also plays a role in our context. We explore the impact of tariff declines on the tariff-inclusive unit values of HS8–country varieties by regressing a variety's unit value on the tariff, a year fixed effect, and a variety (HS8–country) fixed effect. Note that by including the variety fixed effect, we implicitly investigate how tariffs affected the prices of continuing varieties. The results are reported in Table IIIb. Overall, lower tariffs are associated with declines in the unit values of existing varieties (column (1)). Columns (2) and (3) report the coefficients for the intermediate and final goods sectors, respectively. Although the coefficient is positive and significant in both sectors, its magnitude is larger for the intermediate sectors. This suggests that, to the extent that imported inputs are used in production by domestic firms, the observed declines in the unit values of existing products lower the marginal cost of production for Indian firms. The aggregate decomposition in Table II suggests that new imported varieties played an important role in the expansion of overall imports, particularly in the intermediate sectors. This is consistent with Romer (1994), who shows that if there is a fixed cost of importing a product, a country will import the product only if the profits from importing exceed the fixed cost. This means that high tariffs limit not only the quantity but also the range of goods imported.
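The Romer (1994) mechanism just described can be sketched numerically: with heterogeneous fixed costs of importing, a tariff cut raises per-variety profits and expands the set of varieties worth importing. The profit function and cost values below are illustrative assumptions, not calibrated quantities.

```python
# Stylized version of the fixed-cost-of-importing logic: a variety is
# imported only when the operating profit from importing it exceeds its
# fixed cost. Profit falls with the tariff, so tariff cuts expand the range
# of imported varieties (the extensive margin), not just quantities.

def varieties_imported(tariff, fixed_costs=(1.0, 2.0, 6.0, 8.0), scale=10.0):
    """Count varieties whose import profit scale/(1+tariff) covers fixed cost."""
    profit = scale / (1.0 + tariff)   # per-variety profit, decreasing in tariff
    return sum(profit >= f for f in fixed_costs)

high_tariff = varieties_imported(0.80)   # 2 varieties clear the hurdle
low_tariff = varieties_imported(0.39)    # 3 varieties clear the hurdle
print(high_tariff, low_tariff)
```

This is the sense in which high tariffs limit the range, and not only the quantity, of goods imported.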
To provide direct evidence of the effect of tariffs on the extensive margin of imports, we estimate the following specification:

(2)  ln(v_ht) = α_h + α_t + β τ_ht + ε_ht,

where v_ht is the number of varieties within an HS6 product h at time t, τ_ht is the HS6 tariff, α_h is an HS6 fixed effect, and α_t is a year fixed effect. The results are reported in Table IIIc. To show that our results are not sensitive to the definition of a variety, the table reports estimates of equation (2) with different definitions of a "variety" as the dependent variable: HS6–country (Panel A), HS8 codes (Panel B), and HS8 category–country (Panel C). Because our


TABLE IIIb
IMPORT UNIT VALUES AND TARIFFS

                   All products   Intermediate   Final goods
                   (1)            (2)            (3)
Output tariff      0.273∗∗∗       0.304∗∗∗       0.245∗∗∗
                   (0.050)        (0.077)        (0.079)
Year FEs           Yes            Yes            Yes
HS8–country FEs    Yes            Yes            Yes
R2                 .88            .85            .93
Observations       49,109         32,619         11,070

Notes. Regressions of (log) tariff-inclusive unit values on tariffs, HS8–country fixed effects, and year fixed effects. Unit values are computed for each HS8–country pair and the tariffs are at the HS6 level. Column (1) uses all products; columns (2) and (3) report coefficients for the intermediates and final goods, respectively. Regressions are run from 1987 to 1997. Standard errors (in parentheses) clustered at the HS6 level. ∗ 10%, ∗∗ 5%, and ∗∗∗ 1% significance level.

TABLE IIIc
IMPORT EXTENSIVE MARGIN AND TARIFFS

                   All products   Intermediate   Final goods
                   (1)            (2)            (3)
Panel A: Variety: HS6–country
Output tariff      −0.082∗∗∗      −0.106∗∗∗      −0.049∗
                   (0.012)        (0.014)        (0.026)
R2                 .85            .84            .84
Observations       35,833         20,140         11,838
Panel B: Variety: HS8
Output tariff      −0.015∗∗       −0.023∗∗∗      −0.005
                   (0.007)        (0.009)        (0.014)
R2                 .88            .90            .85
Observations       35,833         20,140         11,838
Panel C: Variety: HS8–country
Output tariff      −0.095∗∗∗      −0.129∗∗∗      −0.042
                   (0.013)        (0.016)        (0.028)
R2                 .87            .86            .86
Observations       35,833         20,140         11,838

Notes. All regressions include year fixed effects and HS6 fixed effects. The table reports coefficients on tariffs from product-level regressions of the (log) number of varieties on output tariffs, HS6 product fixed effects, and year effects. The regressions are run at the HS6–year level and each panel uses an alternative definition of a variety: an HS6–country pair in Panel A, an HS8 code in Panel B, and an HS8–country pair in Panel C. Within each panel, column (1) pools across all sectors, whereas columns (2) and (3) report coefficients for the intermediate and final goods, respectively. As in the previous tables, tariffs are at the HS6 level and the regressions are run from 1987 to 1997. Standard errors (in parentheses) clustered at the HS6 level. ∗ 10%, ∗∗ 5%, and ∗∗∗ 1% significance level.


results are robust to alternative definitions of a variety, we focus our discussion on the results in Panel A.18 Column (1) estimates equation (2) for all products and shows that tariff declines were associated with an increased number of imported varieties. This result confirms the importance of the new-variety margin during a trade reform, as emphasized by Romer (1994). We rerun regression (2) for the intermediate and final products in columns (2) and (3) of each panel, respectively. Consistent with the evidence in Table II, the relationship between tariff declines and the extensive margin is particularly pronounced for intermediate products. The coefficient on tariffs for the intermediate products in column (2) is more than twice as large as the tariff coefficient for the final goods. Moreover, the results for intermediate products are robust to the alternative definitions of a variety in Panels B and C, whereas the results for final products are more sensitive to the definition of a variety.19 Our results are generally consistent with the evidence in Klenow and Rodriguez-Clare (1997) and Arkolakis et al. (2008), who also find that the range of imported varieties expanded as a result of tariff declines in Costa Rica. There is, however, one important difference. In India, Table II indicates that new imported intermediate varieties accounted for a sizable share of total imports. In Costa Rica, by contrast, newly imported varieties accounted for a small share of total imports and thus generated relatively small gains from trade (Arkolakis et al. 2008). The evidence so far therefore suggests that the gains from new import varieties, particularly in the intermediate sectors, may be potentially large in the context of the Indian trade liberalization. In sum, a first look at the import data demonstrates that tariff declines led to increases in import values, reductions in the import prices of existing products, and an expansion of new varieties.
These responses were particularly pronounced for imports of intermediate products. Thus, Indian firms may have benefited from the trade reform not only via cheaper imports of existing intermediate inputs but also by gaining access to new intermediate inputs. In the next section, we quantify the overall impact of input tariff reductions on firm-level outcomes.

18. We obtain qualitatively similar results using a Poisson regression, and when balancing the data to account for HS6 codes with no initial imports. Results are available upon request.
19. One explanation for the lack of robust findings for final goods is that NTBs still existed in these HS lines.
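A two-way fixed-effects regression of the kind in equation (2) can be sketched on synthetic data. For a balanced panel, demeaning each variable by product and by year (and adding back the grand mean) recovers the tariff coefficient; the panel dimensions and coefficient below are assumptions for illustration, not the actual HS6 panel.

```python
# Sketch of estimating ln(v_ht) = a_h + a_t + b*tau_ht + e_ht on synthetic
# data with a known coefficient b = -0.08. The balanced-panel two-way within
# transformation sweeps out both sets of fixed effects.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
H, T, b = 50, 10, -0.08
df = pd.DataFrame([(h, t) for h in range(H) for t in range(T)],
                  columns=["h", "t"])
df["tau"] = rng.uniform(30.0, 90.0, len(df))   # tariffs in percent
prod_fe, year_fe = rng.normal(size=H), rng.normal(size=T)
df["lnv"] = prod_fe[df["h"]] + year_fe[df["t"]] + b * df["tau"]

def within(s):
    """Balanced-panel two-way demeaning: x - xbar_h - xbar_t + xbar."""
    return (s - s.groupby(df["h"]).transform("mean")
              - s.groupby(df["t"]).transform("mean") + s.mean())

tau_dm, lnv_dm = within(df["tau"]), within(df["lnv"])
b_hat = (tau_dm @ lnv_dm) / (tau_dm @ tau_dm)
print(b_hat)   # recovers b = -0.08 up to floating-point error
```

With noise added and unbalanced panels, one would instead use a dedicated fixed-effects estimator and cluster the standard errors, as the paper does at the HS6 level.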


III.B. Reduced-Form Evidence

Input Tariffs and Domestic Varieties. In this section, we relate input tariffs to the number of new products introduced in the market by domestic Indian firms. We then examine the relationship between input tariff reductions and other variables that are relevant in endogenous growth models, such as firm sales, total factor productivity, and R&D. To explore the impact of input tariffs on the extensive product margin, we estimate the following equation:

(3)  ln(n^q_it) = α_i + α_t + β τ^inp_qt + ε_it,

where n^q_it is the number of products manufactured by firm i operating in industry q at time t, and τ^inp_qt is the input tariff that corresponds to the main industry in which firm i operates. The regression also includes firm fixed effects to control for time-invariant firm characteristics and year fixed effects to capture unobserved aggregate shocks. The coefficient of interest is β, which captures the semielasticity of firm scope with respect to tariffs on intermediate inputs. Standard errors are clustered at the industry level. In GKPT (2010a), we found virtually no evidence that firms dropped product lines during this period: 53% of firms report product additions during the 1990s, and very few report dropping any product line. Thus, net changes in firm scope during this period can effectively be interpreted as gross product additions. Table IVa presents the main results in column (1). The coefficient on the input tariff is negative and statistically significant: declines in input tariffs are associated with an increase in the scope of production by domestic firms. The point estimate implies that a 10-percentage-point fall in tariffs results in a 3.2% expansion of a firm's product scope. During the period of our analysis, input tariffs declined on average by 24 percentage points, implying that within-firm product scope expanded 7.7%. Firms increased their product scope on average by 25% between 1989 and 1997, so our estimates imply that declines in input tariffs accounted for 31% of the observed expansion in firms' product scope. In GKPT (2010a), we find that the (net) product extensive margin accounted for 25% of India's manufacturing output growth during our sample. If India's trade liberalization affected growth only through the increase in product scope, our estimates imply that lower input tariffs contributed 7.8% (0.25 × 0.31)


TABLE IVa
PRODUCT SCOPE AND INPUT TARIFFS

                   (1)         (2)         (3)         (4)
Input tariff       −0.323∗∗    −0.310∗∗    −0.327∗∗    −0.281∗∗
                   (0.139)     (0.150)     (0.150)     (0.125)
Output tariff                  −0.013      −0.014      −0.010
                               (0.043)     (0.041)     (0.041)
Delicensed                                 −0.032      −0.026
                                           (0.023)     (0.021)
FDI liberalized                                        0.037
                                                       (0.024)
Year effects       Yes         Yes         Yes         Yes
Firm FEs           Yes         Yes         Yes         Yes
R2                 .90         .90         .90         .90
Observations       14,882      14,864      13,435      11,135

Notes. The dependent variable in each regression is the (log) number of products manufactured by the firm. The delicensed variable is an indicator obtained from Aghion et al. (2008) that switches to one in the year that the industry becomes delicensed. The FDI variable is a continuous measure obtained from Topalova and Khandelwal (2011), with higher values indicating a more liberal FDI policy. As with the tariffs, the delicensing and FDI policy variables are lagged. All regressions include firm and year fixed effects and are run from 1989 to 1997. Standard errors (in parentheses) clustered at the industry level. ∗ 10%, ∗∗ 5%, and ∗∗∗ 1% significance level.

to the overall manufacturing growth. This back-of-the-envelope calculation suggests a sizable effect of increased access to imported inputs for manufacturing output growth. As discussed in Section II.B, the trade liberalization coincided with additional market reforms. In the remaining columns of Table IVa, we control for these additional policy variables. Column (2) introduces output tariffs to control for procompetitive effects associated with the tariff reduction. The coefficient on output tariffs is not statistically significant, whereas the input tariff coefficient hardly changes and remains negative and statistically significant. Although it may appear puzzling that the output tariff declines did not result in, for instance, a rationalization of firm scope, we refer the reader to GKPT (2010a) for explanations of this finding. In column (3), we include a dummy variable for industries delicensed (obtained from Aghion et al. [2008]) during our sample, and the input tariff coefficient remains robust. Finally, column (4) includes a measure of FDI liberalization taken from Topalova and Khandelwal (2011). The coefficient implies that firms in industries with FDI liberalization increased scope, but the coefficient is not statistically significant. The input tariff remains negative and significant, indicating that even after conditioning on other market reforms during this period, input tariff declines led to an expansion of firm product scope.
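The back-of-the-envelope calculation above can be reproduced step by step from the numbers reported in the text (the Table IVa column (1) coefficient, the average 24-point tariff decline, the 25% average scope growth, and the 25% extensive-margin share of manufacturing growth):

```python
# Back-of-the-envelope: from the scope semielasticity to the implied
# contribution of input tariff cuts to manufacturing output growth.
beta = -0.323                       # semielasticity of scope w.r.t. input tariffs

effect_10pp = -beta * 0.10          # 10-point tariff cut -> ~3.2% more scope
scope_effect = -beta * 0.24         # average 24-point decline -> ~7.7% expansion
share_of_scope_growth = scope_effect / 0.25     # share of the observed 25% growth
growth_contribution = 0.25 * share_of_scope_growth  # extensive margin was 25% of
                                                    # manufacturing output growth
print(effect_10pp, scope_effect, share_of_scope_growth, growth_contribution)
```

Rounding explains the small differences between the 7.7% quoted for the scope expansion and the 7.8% quoted for the growth contribution: both derive from the same product, 0.323 × 0.24 ≈ 0.0775.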


In Table IVb, we run a number of robustness checks to examine the sensitivity of our main results to alternative specifications of the main estimating equation, most importantly to controls for preexisting sector and firm trends. Specifications (1) and (2) of Table IVb introduce NIC2–year and NIC3–year pair fixed effects, respectively, to control for preexisting, sector-specific trends. These controls capture several factors, such as sector-specific technological progress, that may be correlated with input tariff changes. Not only do the input tariff coefficients in each column remain statistically significant, but the magnitudes of the point estimates also hardly change. This is further evidence that input tariffs are not correlated with potentially omitted variables. Specifications (3)–(5) control for industry-specific trends by interacting year fixed effects with the prereform (1989–1991) industry growth in the number of products (3), output (4), and TFP (5). Specifications (6)–(10) control for a number of preexisting firm trends. Specification (6) augments equation (3) with year fixed effects interacted with a dummy indicating whether the firm manufactured multiple products in its initial year. Specification (7) presents more flexible controls by interacting year fixed effects with the number of products the firm manufactured initially. Specifications (8) and (9) place firms into output and TFP deciles, based on their initial year, and interact the deciles with year dummies; these specifications control for shocks to firms of similar size over time. Specification (10) interacts a dummy indicating whether the firm had positive initial-period R&D expenditures with year dummies. The input tariff coefficient is robust to including all of these flexible industry and firm controls.
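The decile-by-year controls in specifications (8) and (9) can be sketched as follows; the toy data and variable names are assumptions for illustration, not the PROWESS sample.

```python
# Building decile-by-year controls: firms are binned into deciles of initial
# output, and each (decile, year) cell gets its own dummy, absorbing shocks
# common to firms of similar initial size in a given year.
import pandas as pd

firms = pd.DataFrame({"firm": range(20),
                      "initial_output": [float(x) for x in range(20)]})
firms["output_decile"] = pd.qcut(firms["initial_output"], 10, labels=False)

# Expand to a two-year panel and create one dummy per decile-year cell.
panel = firms.merge(pd.DataFrame({"year": [1989, 1990]}), how="cross")
controls = pd.get_dummies(
    panel["output_decile"].astype(str) + "_" + panel["year"].astype(str)
)
print(controls.shape)   # 40 firm-year rows, 10 deciles x 2 years = 20 dummies
```

In the actual estimation these dummies enter equation (3) alongside the firm and year fixed effects.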
More importantly, the magnitude of the input tariff coefficient is remarkably stable across specifications, which provides further reassurance that the baseline results are not driven by omitted variable bias or preexisting trends. Specification (11) reports the input tariff coefficient from a Poisson specification with the number of products as the dependent variable. Finally, specification (12) addresses potential concerns about entry and exit by rerunning equation (3) on the set of firms that appear in every year of the 1989–1997 sample period. As before, the input tariff coefficient remains stable and statistically significant. The bottom panel of Table IVb reports robustness checks using long differences. The first check (specification (13)) regresses changes in firm scope on changes in input tariffs between 1989 and 1997. The standard error is now larger (p-value: 19%), but

TABLE IVb
REDUCED-FORM ROBUSTNESS CHECKS

Product scope regressed on input tariffs, firm and year fixed effects, plus controls:
(1)  NIC2 × year FEs                                  −0.323∗ (0.191)
(2)  NIC3 × year FEs                                  −0.424∗∗ (0.204)
(3)  Prereform industry product growth × year FEs     −0.327∗∗ (0.145)
(4)  Prereform industry output growth × year FEs      −0.315∗∗ (0.141)
(5)  Prereform industry TFP growth × year FEs         −0.336∗∗ (0.141)
(6)  Initial MPF dummy × year FEs                     −0.393∗∗∗ (0.127)
(7)  Initial products × year FEs                      −0.408∗∗∗ (0.128)
(8)  Initial output decile × year FEs                 −0.311∗∗ (0.146)
(9)  Initial TFP decile × year FEs                    −0.321∗ (0.163)
(10) Initial R&D dummy × year FEs                     −0.312∗∗ (0.154)
(11) Poisson model                                    −0.286∗ (0.162)
(12) Constant firms                                   −0.403∗ (0.206)

Long-difference robustness checks:
(13) Dependent variable: 1997–1989 change in log product scope.
     Input tariffs, 1997–1989: −0.315 (0.237). Observations: 696.
(14) Dependent variable: (1997–1991 change in log scope) − (1991–1989 change in log scope).
     Input tariffs, 1997–1991: −0.234; input tariffs, 1991–1989: 0.251. Observations: 662.

Notes. The dependent variable in each regression is the (log) number of products manufactured by the firm. Each row reports the coefficient on the input tariff from a specification with the listed controls in addition to firm and year fixed effects. Specifications (1)–(12) have approximately 14,000 observations. Specifications (1) and (2) include two-digit and three-digit NIC–year fixed effects, respectively. Specifications (3)–(5) include interactions of year fixed effects with prereform (1989–1991) industry product, output, and TFP growth, respectively. Specification (6) interacts the firm's initial multiple-product status with year fixed effects. Specification (7) interacts the firm's initial number of products with year fixed effects. Specifications (8) and (9) include interactions of year fixed effects with deciles of initial firm output and TFP. Specification (10) interacts the firm's initial R&D status with year dummies. Specification (11) runs equation (3) as a Poisson regression using product scope (rather than log product scope) as the dependent variable. Specification (12) reruns equation (3) on a constant set of firms. The bottom panel reports long-difference specifications: specification (13) regresses the 1989–1997 change in (log) scope on the change in input tariffs, and specification (14) controls for firm-specific trends by regressing the change in log scope between 1991–1997 and 1989–1991 on the equivalent double difference in input tariffs. All regressions include firm and year fixed effects. Standard errors (in parentheses) clustered at the industry level. ∗ 10%, ∗∗ 5%, and ∗∗∗ 1% significance level.


TABLE IVc
INPUT TARIFFS AND OTHER FIRM OUTCOMES

                             Output      TFP         R&D         R&D
                             (1)         (2)         (3)         (4)
Input tariff                 −1.125∗∗    −0.454∗     −1.559      −0.077
                             (0.436)     (0.233)     (1.751)     (1.124)
Input tariff × large firm                                        −1.903∗
                                                                 (1.111)
Year effects                 Yes         Yes         Yes         Yes
Firm FEs                     Yes         Yes         Yes         Yes
R2                           .92         .81         .21         .21
Observations                 14,874      13,714      14,233      14,233

Notes. The dependent variable in column (1) is log output. The dependent variable in column (2) is firm TFP obtained from Topalova and Khandelwal (2011). The dependent variables in columns (3) and (4) are R&D expenditures. Column (4) includes an interaction with a dummy indicating whether the firm is above median size. All regressions include firm and year fixed effects and are run from 1989 to 1997. Standard errors (in parentheses) clustered at the industry level. ∗ 10%, ∗∗ 5%, and ∗∗∗ 1% significance level.

the coefficient is remarkably close to the annual regression results in Table IVa and the previous regressions in Table IVb. Specification (14) reports a double-difference specification, regressing the difference between the 1991–1997 and 1989–1991 changes in ln(n^q_it) on the corresponding double difference in τ^inp_qt. This double-difference specification removes firm-specific trends throughout the sample period. Although not statistically significant, the input tariff coefficient is again very close to the previous estimates. The finding that the long-difference specifications do not substantially attenuate the input tariff coefficient suggests that omitted variables are not biasing our main results in Table IVa.

Input Tariffs and Other Firm Outcomes. In Table IVc, we estimate variants of equation (3) that use other firm outcomes as dependent variables. These variables (firm sales, productivity, and R&D) were chosen based on their relevance to the mechanisms emphasized in endogenous growth models. We find that declines in input tariffs were associated with increased firm sales (column (1)) and higher firm productivity (column (2)).20

20. We obtain TFP for our sample of firms from Topalova and Khandelwal (2011). We should emphasize that the interpretation of the TFP findings is difficult in our setting for reasons discussed in Erdem and Tybout (2003). The presence of multiproduct firms further complicates the interpretation of TFP obtained from the methodology of Olley and Pakes (1996); see De Loecker (2007). We therefore view these results simply as a robustness check that allows us to compare our findings to those of the existing literature.

This


QUARTERLY JOURNAL OF ECONOMICS

evidence is consistent with predictions of theoretical papers that have emphasized the importance of intermediate inputs for productivity growth (e.g., Ethier [1979, 1982], Romer [1987, 1990], Markusen [1989], Grossman and Helpman [1991], and Rivera-Batiz and Romer [1991]). It is also in line with recent empirical studies that find imports of intermediates or declines in input tariffs to be associated with sizable measured productivity gains (see Amiti and Konings [2007], Kasahara and Rodrigue [2008], Halpern, Koren, and Szeidl [2009], and Topalova and Khandelwal [2011]). Finally, we find that lower input tariffs are associated with increased R&D expenditures (column (3)), although the coefficient is imprecisely estimated. The imprecision might in part reflect heterogeneity in the R&D response across firms. In column (4), we allow the effect of input tariffs to differ across firms that are above and below the median value of initial sales. The coefficient on the interaction between input tariffs and the size indicator is negative and statistically significant. Thus, lower input tariffs are associated with increased R&D participation, but only in initially larger firms. Overall, the above results provide further support for the effects emphasized in the endogenous growth literature. Our earlier findings in GKPT (2010a) indicate no systematic relationship between India's liberalization of output tariffs and domestic product scope. In sharp contrast, here we find strong and robust evidence that the reductions of input tariffs were associated with an increase in the range of products manufactured by Indian firms. Moreover, we also observe that lower input tariffs are associated with increases in firm output and total factor productivity, and in R&D expenditure among (initially) larger firms.

IV. MECHANISMS

The results presented in the preceding section quantify the overall impact of access to imported inputs on firm scope and other outcomes. A limitation of this analysis is that it cannot uncover the mechanisms through which lower input tariffs influence product scope. In particular, it does not tell us whether the effects operate through lower prices for existing imported intermediate products or through increases in the variety of available inputs. This section explores and quantifies the relative importance of the price and variety channels.

IMPORTED INPUTS AND PRODUCT GROWTH


IV.A. Theoretical Framework

We first provide the theoretical foundation for understanding the mechanisms through which imported inputs lead to growth in domestic varieties. This requires introducing functional form assumptions for the production function for product q in equation (1). The functional forms we choose are motivated by the nature of our data and, importantly, the model yields a specification that is easy to implement empirically. We start by specifying a Cobb–Douglas production function,

(4)   $Y_q = A\, L^{\alpha_{Lq}} S^{\alpha_{Sq}} \prod_{i=1}^{I} X_i^{\alpha_{iq}}$,

where $\alpha_{Lq} + \alpha_{Sq} + \sum_{i=1}^{I} \alpha_{iq} = 1$. The production of the final good requires a fixed cost $F_q$. The minimum cost of manufacturing one unit of output is given by

(5)   $C_q = A^{-1}\, \alpha_{Lq}^{-\alpha_{Lq}} \alpha_{Sq}^{-\alpha_{Sq}} \prod_{i=1}^{I} \alpha_{iq}^{-\alpha_{iq}}\; P_L^{\alpha_{Lq}} P_S^{\alpha_{Sq}} \prod_{i=1}^{I} P_i^{\alpha_{iq}}$,

where $P_k$ denotes the price index associated with input $k = L, S, 1, \ldots, i, \ldots, I$. We assume that each input sector $i$ has a domestic and an imported component (e.g., Indian and imported steel) that are combined according to the CES aggregator:21

(6)   $X_i = \left( X_{iD}^{(\gamma_i - 1)/\gamma_i} + X_{iF}^{(\gamma_i - 1)/\gamma_i} \right)^{\gamma_i/(\gamma_i - 1)}$,

where $X_{iD}$ and $X_{iF}$ denote the domestic and foreign input bundles, and $\gamma_i$ is the elasticity of substitution between the two. The overall price index for input industry $i$ is a weighted average of the price indices for the domestic and foreign input bundles, $\Pi_{iD}$ and $\Pi_{iF}$:

(7)   $P_i = \Pi_{iD}^{\omega_{iD}} \Pi_{iF}^{\omega_{iF}}$.

The weights $\{\omega_{iD}, \omega_{iF}\}$ are the Sato–Vartia log-ideal weights,

(8)   $\omega_{iB} = \dfrac{(s_{iB} - s'_{iB})/(\ln s_{iB} - \ln s'_{iB})}{\sum_{B=D,F} (s_{iB} - s'_{iB})/(\ln s_{iB} - \ln s'_{iB})}$   and   $s_{iB} = \dfrac{\Pi_{iB} X_{iB}}{\sum_{B=D,F} \Pi_{iB} X_{iB}}, \quad B = D, F$,

21. Halpern, Koren, and Szeidl (2009) use a similar production structure.

where a prime (′) denotes the value of a variable in the previous period. We assume that the imported input bundle $X_{iF}$ is itself a CES aggregator of imported varieties (e.g., Japanese and German steel):

(9)   $X_{iF} = \left[ \sum_{v \in I_{iF}} a_{iv}^{1/\sigma_i}\, x_{iv}^{(\sigma_i - 1)/\sigma_i} \right]^{\sigma_i/(\sigma_i - 1)}, \quad \sigma_i > 1$,

where $\sigma_i$ is the industry-specific elasticity of substitution, $a_{iv}$ is the quality parameter for variety $v$, and $I_{iF}$ is the set of available foreign varieties in industry $i$. The minimum cost function associated with purchasing the basket of foreign varieties in equation (9) is given by

(10)   $c(p_{iv}, a_{iv}, I_{iF}) = \left[ \sum_{v \in I_{iF}} a_{iv}\, p_{iv}^{1 - \sigma_i} \right]^{1/(1 - \sigma_i)}$.

Following Feenstra (1994) and Broda and Weinstein (2006), the price index over a constant set of imported varieties is the conventional price index, $P^{conv}_{iF}$:

(11)   $P^{conv}_{iF} = \dfrac{c(p_{iv}, a_{iv}, \bar{I}_{iF})}{c(p'_{iv}, a_{iv}, \bar{I}_{iF})} = \prod_{v \in \bar{I}_{iF}} \left( \dfrac{p_{iv}}{p'_{iv}} \right)^{w_{iv}}$,

where $\bar{I}_{iF} = I_{iF} \cap I'_{iF}$ is the set of common imported varieties between the current and previous period. The weights in equation (11) are again the Sato–Vartia log-ideal weights:

(12)   $w_{iv} = \dfrac{(s_{iv} - s'_{iv})/(\ln s_{iv} - \ln s'_{iv})}{\sum_{v \in \bar{I}_{iF}} (s_{iv} - s'_{iv})/(\ln s_{iv} - \ln s'_{iv})}$   and   $s_{iv} = \dfrac{p_{iv} x_{iv}}{\sum_{v \in \bar{I}_{iF}} p_{iv} x_{iv}}$.

Feenstra (1994) shows that the price index of these foreign varieties in equation (11) can be modified to account for the role of new imported varieties as long as there is some overlap in the varieties available between periods ($\bar{I}_{iF} \neq \emptyset$). The exact price index adjusted for new imported varieties is

(13)   $\Pi_{iF} = P^{conv}_{iF}\, \Phi_{iF}$.

Equation (13) states that the exact price index from purchasing the basket of imported varieties in equation (9) is the conventional

price index multiplied by a variety index, $\Phi_{iF}$, that captures the role of new and disappearing varieties:

(14)   $\Phi_{iF} = \left( \dfrac{\lambda_{iF}}{\lambda'_{iF}} \right)^{1/(\sigma_i - 1)}$,

with

(15)   $\lambda_{iF} = \dfrac{\sum_{v \in \bar{I}_{iF}} p_{iv} x_{iv}}{\sum_{v \in I_{iF}} p_{iv} x_{iv}}$   and   $\lambda'_{iF} = \dfrac{\sum_{v \in \bar{I}_{iF}} p'_{iv} x'_{iv}}{\sum_{v \in I'_{iF}} p'_{iv} x'_{iv}}$.

As has been noted in the literature, $\Phi_{iF}$ has an intuitive interpretation. Suppose there are no disappearing varieties, so that the denominator of (14), $\lambda'_{iF}$, equals one; then $\lambda_{iF}$ measures the expenditure on the varieties that are available in both periods relative to the expenditure on the full set of varieties available in the current period. The more important the new varieties are (i.e., the higher their expenditure share), the lower will be $\Phi_{iF}$, and the smaller the exact price index will be relative to the conventional index. Equation (14) also shows that $\Phi_{iF}$ depends on the substitutability of the foreign varieties, captured by the elasticity of substitution $\sigma_i$. The more substitutable the varieties are, the lower is the term $1/(\sigma_i - 1)$ and the smaller is the difference between the exact and conventional price indices. In the limiting case of an infinite elasticity of substitution, the second term becomes unity, indicating that changes in the available varieties have no effect on the price index. Substituting equation (13) into equation (7) indicates that the overall input price index for input industry $i$ is $P_i = \Pi_{iD}^{\omega_{iD}} (P^{conv}_{iF} \Phi_{iF})^{\omega_{iF}}$. Substituting this expression back into the minimum cost function in equation (5) and taking logs yields

(16)   $\ln C_q = \left[ \sum_{i=1}^{I} \alpha_{iq} \omega_{iF} \ln P^{conv}_{iF} + \alpha_{Lq} \ln P_L + \alpha_{Sq} \ln P_S \right] + \left[ \sum_{i=1}^{I} \alpha_{iq} \omega_{iF} \ln \Phi_{iF} \right] + v$,

where $v \equiv \sum_{i=1}^{I} \alpha_{iq} \omega_{iD} \ln \Pi_{iD} + \ln \left( \alpha_{Lq}^{-\alpha_{Lq}} \alpha_{Sq}^{-\alpha_{Sq}} \prod_{i=1}^{I} \alpha_{iq}^{-\alpha_{iq}} \right) - \ln A$. The expression in equation (16) illustrates the channels through which changes in the minimum cost of production affect the set of products manufactured by domestic firms. Equation (16) can be expressed in terms of observable data (the terms in the

first two brackets) and the unobservable component captured by $v$. The first bracket captures the overall conventional price index for imported inputs ($P^{conv}_{iF}$), labor ($P_L$), and nontradables ($P_S$):

(17)   $\ln P^{inp,conv}_q \equiv \sum_{i=1}^{I} \alpha_{iq} \omega_{iF} \ln P^{conv}_{iF} + \alpha_{Lq} \ln P_L + \alpha_{Sq} \ln P_S$.

The second bracket in equation (16) captures the importance of new imported inputs:

(18)   $\ln \Phi^{inp}_q \equiv \sum_{i=1}^{I} \alpha_{iq} \omega_{iF} \ln \Phi_{iF}$.

As discussed above, the term in (18) adjusts the price index to reflect new (or disappearing) imported varieties available to firms; a lower value indicates larger gains from variety. We use this structure to guide our analysis of the mechanisms driving the link between imported input use and domestic product scope. As we discuss in Section III, lower tariffs on imported inputs will affect product innovation if they lower the variable cost of producing a product below the fixed cost of introducing it. Our approach relates the change (between 1989 and 1997) in firms' product scope to the observable input price indices [equations (17) and (18)] in the firms' minimum cost function. Although equation (13) suggests that reductions in the price and variety indices should have an equal effect on product scope, there are additional factors, not explicitly modeled above, which may break this equality. For example, the technological complementarity between varieties within the firm or within product lines of a firm could be much stronger than that implied by the index we estimate at the level of aggregation we use in our empirical analysis. In this case, new varieties would be more important to the firm than suggested by the variety index, which would make the introduction of new domestic products more responsive to the estimated variety index. We therefore allow the impact of the input indices to vary in the following specification:

(19)   $\Delta \ln n_f = \alpha + \beta_1 \ln P^{inp,conv}_q + \beta_2 \ln \Phi^{inp}_q + \varepsilon_f$.

The theoretical framework suggests that the coefficients on both input price components should be negative.22

22. Note that (19) is a change-on-change regression because both $P^{inp,conv}_q$ and $\Phi^{inp}_q$ are price indices.
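Assembling the two observable regressors in (17) and (18) is a mechanical weighted sum. A minimal Python sketch of this step (our own illustration with invented numbers, not the paper's data or code):

```python
import numpy as np

def input_price_components(alpha, omega_F, ln_P_conv, ln_Phi,
                           alpha_L, ln_P_L, alpha_S, ln_P_S):
    """Observable components of the log cost function, equations (17)-(18).

    alpha     : Cobb-Douglas input shares alpha_iq (from the IO matrix)
    omega_F   : import shares omega_iF (Sato-Vartia weights, eq. (8))
    ln_P_conv : log conventional import price index, by input industry
    ln_Phi    : log variety index, by input industry
    """
    alpha = np.asarray(alpha, dtype=float)
    omega_F = np.asarray(omega_F, dtype=float)
    # (17): conventional input price index, incl. labor and nontradables
    ln_P_inp_conv = (alpha * omega_F * np.asarray(ln_P_conv, dtype=float)).sum() \
        + alpha_L * ln_P_L + alpha_S * ln_P_S
    # (18): variety component of the input price index
    ln_Phi_inp = (alpha * omega_F * np.asarray(ln_Phi, dtype=float)).sum()
    return ln_P_inp_conv, ln_Phi_inp

# Invented numbers: one input industry with alpha_iq = 0.5, omega_iF = 0.4,
# a 10% rise in the conventional import price, a 10% variety-driven decline.
conv, variety = input_price_components([0.5], [0.4], [0.10], [-0.10],
                                       alpha_L=0.3, ln_P_L=0.20,
                                       alpha_S=0.2, ln_P_S=0.0)
```

Specification (19) then relates the change in firm scope to these two components.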

IV.B. Identification Strategy

The error term in (19) captures unobservable factors that might influence changes in firm scope. These factors include the unobserved components in $v$ as well as potential demand shocks. Specification (19) clearly illustrates the endogeneity issues that arise in estimating how imported inputs affect firm scope. For instance, suppose firms expand the set of domestic varieties in response to lower price and variety indices for imported inputs. The expansion of domestic varieties will affect the exact price index of domestic inputs (contained in the unobserved $\varepsilon$). This domestic variety expansion will further drive down (depending on parameters) the minimum cost of production, thereby increasing the likelihood of further domestic variety expansion. This feedback between the foreign and domestic price indices creates a correlation between the error term and the observable input price indices in (19); in the absence of a shock to changes in the input indices, it is difficult to separate cause and effect. Alternatively, suppose that firms introduce new domestic varieties in response to demand shocks, and manufacturing these new varieties requires more imported inputs. The import and domestic input indices will both adjust in response to the demand shock, further influencing the minimum cost of production. This reverse causality concern is precisely the econometric complication that has prevented previous research from identifying the impact of imported inputs on domestic variety growth. Equation (19) therefore highlights the importance of the policy change (i.e., the tariff liberalization) that we exploit. Section II established that declines in India's tariffs were plausibly unanticipated and not correlated with firm and industry characteristics prior to the reform, so tariff changes are a natural instrument for identifying the channels. The exogenous reform allows us to establish a causal chain of the following events. 
A sharp and unanticipated decline in tariffs led to lower prices of existing inputs (as seen in Table IIIb), and hence a lower conventional price index for imports. Tariff declines also resulted in increased imported varieties (Table IIIc); this finding is consistent with models with fixed costs of exporting where lower variable exporting costs increase variable profits and make it more likely that the returns to exporting exceed the fixed cost of entering the foreign market. Thus, changes in tariffs will be correlated with the input price and variety indices in equation (19), satisfying a necessary condition for the IV strategy.

1754

QUARTERLY JOURNAL OF ECONOMICS

Although the price index of domestic inputs changes as firms introduce new domestic varieties, this phenomenon is an indirect effect of the trade reform affecting imported inputs. This point reflects our main identification assumption: input tariffs affect the price index of domestic inputs and TFP only through their impact on imported input prices and varieties, which we capture through the right-hand-side variables in (19). That is, there is no direct effect of changes in input tariffs on the unobserved components of (19). Perhaps the most controversial component of this identification strategy is that the unobservable components in (19) include total factor productivity, because there is evidence that trade liberalizations lead to productivity improvements. However, most of this evidence pertains to productivity improvements that result from reallocation effects associated with output tariff liberalization (e.g., Pavcnik [2002] and Melitz [2003]); these findings are not pertinent to our analysis because we focus on changes within firms over time,23 which we argue are the result of input tariff liberalization. More relevant to our study are findings from recent empirical studies that report within-firm (measured) productivity improvements following trade reforms.24 The three prevailing arguments for why trade reforms affect within-firm measured productivity are (a) product rationalization, (b) improved access to imported inputs, and (c) elimination of x-inefficiencies through managerial restructuring. From Table IVa (see also GKPT [2010a]), there is no evidence that Indian firms dropped relatively unproductive product lines to improve measured TFP; this rules out argument (a) in the Indian context. The input channel (argument (b)) is precisely the focus of our analysis: the trade reform affects productivity through the intermediate input channels in (19), which are captured by the observable part of this equation. 
Elimination of x-inefficiency is a plausible argument, but it is important to note that our policy instruments are input tariffs. One would expect elimination of x-inefficiency to be driven by procompetitive output tariff reductions, rather than by changes in input tariffs.25 Hence, our identification

23. Recall that Prowess contains relatively large firms for which entry and exit are not important margins of adjustment. Moreover, Sivadasan (2009) finds very little support for the reallocation mechanism in the context of India's market reforms.
24. See Amiti and Konings (2007), Halpern, Koren, and Szeidl (2009), Sivadasan (2009), and Topalova and Khandelwal (2011). For theoretical treatments, see Bernard, Redding, and Schott (2006), Nocke and Yeaple (2006), and Eckel and Neary (2009).
25. We can control for this channel by including changes in output tariffs in equation (19).

IMPORTED INPUTS AND PRODUCT GROWTH

1755

assumption is supported by existing theoretical and empirical research. Because equation (19) contains two endogenous variables, we need a second instrument to identify the coefficients. Our second instrument is motivated by the insights of Helpman, Melitz, and Rubinstein (2008) and is based on the idea that the potential for exporting to India following the liberalization may be higher for countries with "stronger ties," or proximity, to India.26 Tariff declines lower the conventional price index and the variety index, but because India sets a common tariff for all countries, tariff declines alone cannot explain which countries are more likely to start exporting products to India after the reform. In other words, tariffs alone are not sufficient as instruments for the increase in varieties, defined as export country/product pairs. Our second instrument, based on a common language between India and its potential trading partners in a given industry, attempts to explain, for a given decline in tariffs, which industries experience larger growth in new countries that begin exporting to India (i.e., new imported varieties). The instrument is constructed as follows. We first identify the set of countries that speak English (English is an official language of India). These countries plausibly face a lower fixed cost of exporting to India (Helpman, Melitz, and Rubinstein 2008).27 Next, we identify the set of countries with a revealed comparative advantage (RCA) in each HS4 industry. Countries with an RCA are more likely to respond to the trade liberalization than countries without one. We identify countries' RCA using Comtrade data that report countries' HS4-level exports to the world (excluding India) in 1989 (prior to India's reform). We then take the intersection of these two sets to identify, for each HS4 industry, the set of English-speaking RCA countries. 
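This intersection-and-weighting step can be sketched in a few lines of Python (the country names, GDP figures, RCA sets, and tariff changes below are invented placeholders, not the paper's data):

```python
def proximity_measure(industries, english, rca, gdp):
    """For each industry, the GDP-weighted share of countries that both
    speak English and have revealed comparative advantage (RCA) there."""
    total_gdp = sum(gdp.values())
    return {ind: sum(g for c, g in gdp.items()
                     if c in english and c in rca.get(ind, set())) / total_gdp
            for ind in industries}

# Illustrative placeholder data (not from the paper).
gdp = {"USA": 10.0, "UK": 2.0, "Japan": 4.0}          # GDP weights
english = {"USA", "UK"}                                # English-speaking set
rca = {"steel": {"Japan", "UK"},                       # RCA sets by industry
       "textiles": {"USA", "UK"}}

prox = proximity_measure(["steel", "textiles"], english, rca, gdp)
# The second instrument interacts proximity with the input tariff change.
d_input_tariff = {"steel": -0.30, "textiles": -0.25}   # hypothetical changes
instrument = {ind: prox[ind] * d_input_tariff[ind] for ind in prox}
```

In the paper this measure is additionally passed through the input–output matrix to the NIC industry level before being interacted with the tariff change.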
Our proximity measure is a GDP-weighted average over these countries, with the idea that industries with a higher average are likely to experience a larger increase in the extensive margin following trade liberalization. The proximity measure therefore uses country-specific differences in the fixed costs of exporting to India (captured by language),

26. We are grateful to a referee for suggesting the idea of this instrumentation strategy.
27. Other possible fixed cost proxies include common religion, border, and colonial origin. Common religion and border are not very good fixed cost proxies in the Indian context, and a colonial origin dummy is collinear with the English language dummy. Distance is not a good proxy because it is more likely to capture variable costs than fixed costs.

combined with information on RCA, to construct a proxy for fixed costs that varies across industries. We then pass this variable through the input–output matrix and use the concordances described above to obtain an NIC-level measure of the language proximity of potential trading partners to India. This industry-specific variable therefore reflects the lower fixed cost of exporting intermediates to India. Finally, we interact this measure of the proximity of potential trading partners in a given NIC code with the change in input tariffs. This interaction serves as our second instrument.

IV.C. Empirical Implementation

We use the formulas from the theoretical model to guide our empirical implementation. We begin by constructing the import indices, $P^{conv}_{iF}$ and $\Phi_{iF}$. We calculate these indices from India's import data according to equations (11) and (14) at the HS4 level of aggregation. We chose this level of aggregation because, although the method proposed by Feenstra (1994) and Broda and Weinstein (2006) is designed to quantify the gains from new varieties within existing codes, it is unable to quantify the introduction of entirely new codes.28 We obtain estimates of the elasticity of substitution $\sigma_i$ from Broda, Greenfield, and Weinstein (2006), who estimate India's elasticities of substitution at the HS3 level. Table V reports $\Phi_{iF}$ computed between 1989 and 1997.29 Row 1 reports the mean of each component across all HS4 codes. The mean variety index between 1989 and 1997 is 0.899, implying that the exact import price index adjusted for variety growth fell about 10% faster than the conventional import price index. There is considerable heterogeneity in the impact of variety growth across HS4 price indices (for examples of HS4 codes, see GKPT [2010b]). Column (3) aggregates across all HS4 codes to compute the overall import price index. Accounting for the introduction of new varieties lowers the conventional import price index by

28. This is because the index decomposition relies on a set of overlapping varieties across time periods. Between 1989 and 1997, the Indian import data indicate that the number of imported HS6 codes increased from 2,958 to 4,115, which means that computing indices at the HS6 level would ignore this substantial increase in new products. We therefore chose to compute indices at the HS4 level (although we are still unable to compute indices for the 220 of the 1,145 HS4 codes that first appear between 1989 and 1997).
29. For HS4 codes that enter the import data after 1989, we assign a variety index of one. This is a conservative estimate of the gains from variety. For HS4 codes with missing price and variety indices in 1997 (for instance, because there is no overlap in varieties or units for prices are missing), we assign average values of coarser HS codes.
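As an illustration of this computation, the following sketch implements equations (11), (12), (14), and (15) for a single input industry (our own Python illustration with toy numbers, not the authors' code; pass zero quantities for varieties unavailable in a period):

```python
import numpy as np

def variety_adjusted_index(p_now, x_now, p_prev, x_prev, common, sigma):
    """Feenstra (1994) import price index for one input industry:
    conventional index (11), variety index (14)-(15), exact index (13).

    p_*, x_* : prices and quantities of every variety in each period
               (use zero quantity when a variety is unavailable);
    common   : marks the overlap set I-bar of varieties sold in both periods;
    sigma    : elasticity of substitution among the foreign varieties.
    """
    p_now, x_now = np.asarray(p_now, float), np.asarray(x_now, float)
    p_prev, x_prev = np.asarray(p_prev, float), np.asarray(x_prev, float)
    c = np.asarray(common, bool)

    # Sato-Vartia weights over the common varieties (equation (12)).
    s_now = (p_now[c] * x_now[c]) / (p_now[c] * x_now[c]).sum()
    s_prev = (p_prev[c] * x_prev[c]) / (p_prev[c] * x_prev[c]).sum()
    diff = s_now - s_prev
    logdiff = np.log(s_now) - np.log(s_prev)
    safe = np.where(np.isclose(logdiff, 0.0), 1.0, logdiff)
    w = np.where(np.isclose(logdiff, 0.0), s_now, diff / safe)
    w = w / w.sum()

    # Conventional index: weighted geometric mean of price relatives (11).
    conv = np.prod((p_now[c] / p_prev[c]) ** w)

    # Expenditure share of the common set in each period's total (15).
    lam_now = (p_now[c] * x_now[c]).sum() / (p_now * x_now).sum()
    lam_prev = (p_prev[c] * x_prev[c]).sum() / (p_prev * x_prev).sum()
    variety = (lam_now / lam_prev) ** (1.0 / (sigma - 1.0))  # (14)

    return conv, variety, conv * variety  # (13)

# Toy example: two varieties; the second is newly imported this period.
conv, variety, exact = variety_adjusted_index(
    p_now=[1.0, 1.0], x_now=[1.0, 1.0],
    p_prev=[1.0, 1.0], x_prev=[1.0, 0.0],
    common=[True, False], sigma=3.0)
```

With half of current expenditure on the new variety and sigma = 3, the variety index is (1/2)^(1/2), so the exact index falls roughly 29% below the conventional index even though no common-variety price changed.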


TABLE V
IMPORT VARIETY INDICES

                                 Variety index
                         Mean       Median      Overall
                          (1)         (2)         (3)
All sectors              0.899       0.986       0.688
Intermediate sectors     0.881       0.954       0.624
Final sectors            0.904       1.000       0.850

Notes. Table reports the variety index computed at the HS4 level using elasticities of substitution from Broda, Greenfield, and Weinstein (2006) for India. The indices use HS6–country pairs as the definition of a variety. Columns (1) and (2) report the mean and median variety index across HS4 groups. Column (3) aggregates the HS4 indices to the overall economy level using equation (13) in Broda and Weinstein (2006). The first row reports the variety index over all imported sectors. The second and third rows compute the indices for the intermediate and final sectors. The numbers are computed using data between 1989 and 1997.

31% over nine years, or by 3.9% per year. This contribution of the extensive margin to the import price index is substantially larger than estimates obtained for Costa Rica (Arkolakis et al. 2008). It is also larger than the estimates for the United States, where aggregate import prices are on average 1.2% lower per year due to new imported varieties (Broda and Weinstein 2006). This large contribution of the extensive margin in India reaffirms the evidence from the raw data in Section III and reflects the restrictive nature of Indian trade policy prior to the 1991 liberalization. The second and third rows of Table V report the price index computed separately for the HS4 codes classified as intermediate and final goods, respectively. Consistent with the import decompositions in Table II and the import variety regressions in Table IIIc, we observe that new variety growth was more substantial in the intermediate sectors than in the final goods sectors. The mean variety index for the intermediate sectors was 0.881 between 1989 and 1997, compared to 0.904 for the final goods sectors. The difference in the overall aggregate price index is even starker. Variety growth deflated the conventional price index by 38% for intermediate sectors, compared to 15% for final sectors. This implies that the import price index for intermediates is on average 4.75% lower per year due to new varieties. Table V clearly highlights the gains from new imported varieties, particularly for intermediate inputs.30

30. As discussed earlier, Prowess does not contain reliable product-level information on imported inputs used by a firm. We therefore cannot create a reliable

Having established that variety growth has a substantial impact on the import price index, and that this effect is particularly pronounced in the intermediate goods sector, we next turn to quantifying the relative importance of the price and variety margins in the expansion of domestic product scope. We construct the two components of the price index from (17) and (18) that capture the price and variety channels. This requires several pieces of information in addition to the conventional import price and import variety indices discussed above. We calculate the nominal wage index ($P_L$) from the ASI by taking the ratio of the total industry wage bill between 1997 and 1989. We use the wholesale price index (WPI) for the nontradable price index ($P_S$).31 Finally, we need the two sets of weights: the Cobb–Douglas shares, $\alpha_{iq}$, and the share of foreign imports, $\omega_{iF}$. India's IO matrix provides estimates of $\alpha_{iq}$. We obtain $\omega_{iF}$ using equation (8) from the information on the share of imports in total domestic consumption for each sector in India's IO matrix. We collapse the import indices to the level of aggregation of India's IO matrix and combine them with the additional variables described above to construct the indices in (17) and (18). We then map these indices to the industry-level NIC codes associated with the main product a firm produced prior to the reform.

IV.D. Results

We begin by reporting the OLS estimates of equation (19) in Table VIa, which offers a preliminary lens on the mechanisms driving the reduced-form results in Section III. Columns (1) and (2) estimate equation (19) with the conventional input price index and the variety index separately. The negative coefficient on the conventional input price index in column (1) suggests that lower prices of existing inputs are associated with higher product scope,

measure of the actual number of imported inputs used by a firm. Of course, because the import data used in our analysis are a census of all imported varieties, it is clear from the previous tables that firm access to new inputs expanded during this period. The ASI collects the number of inputs (imported and total) at the plant level, but this information is only available after the reform period, so we cannot look at changes in firm input use. Moreover, because the ASI is a cross-sectional database, we cannot directly observe changes in inputs at a level more disaggregated than the industry. Nevertheless, we conducted one robustness check using the 1999 ASI data and regressed the industry-level average number of inputs per plant on changes in input tariffs between 1989 and 1997. We observe a statistically significant and negative relationship for both imported and total inputs. This is consistent with the trade reform enabling firms to expand their range of inputs.
31. A separate price index for electricity is available, so we separate the nontradable inputs into electricity and other inputs (e.g., warehousing, communication, water, gas) for which we do not have detailed price indices (and assign the WPI).


TABLE VIa
PRODUCT SCOPE AND CHANNELS: OLS

                                (1)          (2)          (3)
Conventional price index      −0.156                    −0.124
                              (0.121)                   (0.113)
Variety index                              −5.97∗∗      −5.70∗∗
                                           (2.55)       (2.41)
R²                             .002         .009         .010
Observations                   696          696          696

Notes. OLS regressions of firm scope on the imported input price indices. Column (1) includes the conventional index, column (2) includes the variety index, and column (3) includes both indices. Regressions are run for the years 1989 and 1997. Standard errors (in parentheses) clustered at the industry level. ∗ 10%, ∗∗ 5%, and ∗∗∗ 1% significance level.

although the coefficient is not statistically significant. The coefficient on the input variety index in column (2) is negative and statistically significant, suggesting that an increase in input variety (captured by a lower index number) is associated with an expansion of firm scope. This finding continues to hold in column (3), when we estimate equation (19) with both indices as independent variables. Thus, the OLS results indicate that an increase in input variety is correlated with firm scope expansion. The theoretical section showed that the import indices may be correlated with the error term of the estimating equation, which would bias the OLS coefficients in Table VIa. We therefore turn to the IV results next. Columns (1) and (2) of Table VIb report the coefficients from first-stage regressions. Column (1) reports first-stage results with the conventional input price index as the dependent variable. As expected, a decline in tariffs leads to a decline in the conventional input price index. The coefficient on the interaction of the input tariff with language proximity to India is not significant, indicating no differential decline in the conventional input price index across sectors that vary in their language proximity to India. Column (2) reports first-stage results for the input variety index. Lower input tariffs result in more imported input varieties (i.e., a decline in the variety component), particularly in industries where countries with an RCA share a language with India (i.e., a higher value of the proximity variable). This is consistent with the interpretation that industries with closer language proximity to India experience a larger increase in varieties for a given decline in input tariffs.

TABLE VIb
PRODUCT SCOPE AND CHANNELS: INSTRUMENTAL VARIABLES

                                    1st stage regressions        2nd stage regressions: {Firm scope}f
                                   {Conv. price   {Variety
                                     index}q       index}q
                                       (1)           (2)          (3)        (4)        (5)        (6)        (7)
{Input tariff}q,97−89               1.340∗∗∗       −0.007
                                    (0.476)        (0.020)
{Input tariff}q,97−89 ×             −0.707          0.121∗
  {proximity}q                      (1.618)        (0.065)
{Conventional price index}q                                     −0.240                −0.033     −0.047     −0.493∗
                                                                (0.211)               (0.299)    (0.245)    (0.271)
{Variety index}q                                                          −14.24∗    −13.40     −14.88∗    −14.80∗∗∗
                                                                          (7.56)     (10.46)    (8.97)     (5.00)
F-test 1st stage instruments                                    [5.3,     [3.1,      [5.3, p = .01;  [33.4, p = .00;  [21.4, p = .00;
                                                                p = .01]  p = .05]   3.1, p = .05]   5.5, p = .01]    12.8, p = .00]
Observations                          696           696           696       696        696        696        696

Notes. Table reports IV regressions of firm scope on the imported input price indices. The instruments are changes in input tariffs and changes in input tariffs interacted with the proximity measure described in the text. Columns (1) and (2) report the first-stage regressions for the conventional price and variety indices, respectively. Columns (3)–(7) report the second-stage regressions. Column (6) uses the 1998/1999 input–output matrix. Column (7) includes third-order polynomials of the instruments and is estimated using a continuously updated GMM estimator. The regressions are run for the years 1989 and 1997. Standard errors (in parentheses) clustered at the industry level. ∗ 10%, ∗∗ 5%, and ∗∗∗ 1% significance level.
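The second-stage estimation in Table VIb is standard instrumental variables. A self-contained sketch of two-stage least squares with two endogenous regressors and two instruments, run on simulated data (our own illustration; the paper's estimates additionally cluster standard errors at the industry level and, in column (7), use GMM):

```python
import numpy as np

def tsls(y, X, Z):
    """Two-stage least squares: first project X on the instruments Z,
    then regress y on the fitted values (X and Z include a constant)."""
    Xhat = Z @ np.linalg.solve(Z.T @ Z, Z.T @ X)    # first-stage fitted values
    return np.linalg.solve(Xhat.T @ X, Xhat.T @ y)  # 2SLS coefficients

# Simulated data in the spirit of (19): two endogenous regressors shifted
# by two instruments, with an error u that also moves the regressors.
rng = np.random.default_rng(0)
n = 5000
z1, z2 = rng.normal(size=n), rng.normal(size=n)    # instruments
u = rng.normal(size=n)                             # structural error
x1 = 1.0 * z1 + 0.5 * u + rng.normal(size=n)       # endogenous regressor 1
x2 = 0.8 * z2 - 0.4 * u + rng.normal(size=n)       # endogenous regressor 2
y = 1.0 - 0.5 * x1 - 2.0 * x2 + u                  # true slopes -0.5, -2.0
X = np.column_stack([np.ones(n), x1, x2])
Z = np.column_stack([np.ones(n), z1, z2])
beta = tsls(y, X, Z)   # recovers roughly (1.0, -0.5, -2.0)
```

OLS on the same data would be pulled away from the true slopes by the correlation between the regressors and u, mirroring the gap between Tables VIa and VIb.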


The remaining columns of Table VIb report IV estimates of equation (19). The first-stage F statistics on excluded instruments are reported at the bottom of each column. Column (3) reports the results using only the conventional input price index; this is the IV version of column (1) in Table VIa. As with OLS, the result is not significant, but the sign of the coefficient suggests that lower input prices of existing inputs are associated with increases in firm scope. Column (4) presents the IV result for the input variety index. The coefficient on the variety index is negative and significant. Column (5) presents the results when equation (19) is estimated with IV and both indices are included; this equation is just identified with the two instruments and two endogenous regressors. The coefficient on the variety index is not statistically significant at conventional levels (p-value 20%), which is not surprising, given the well-known problems associated with the efficiency of IV estimators. However, the point estimates are very close to the IV results in column (4), which do not condition on the conventional input price index. The results in columns (4) and (5) suggest that more imported variety (i.e., a lower variety index) is associated with expansion in product scope. Note that the IV estimates of the variety effect in columns (4) and (5) are lower (higher in magnitude) than the OLS estimates. A priori, it is difficult to sign the bias of the OLS estimates. As noted earlier, the error term in equation (19) contains the (unobserved) price index of domestic inputs (i D) as well as unobserved inp demand shocks. If the correlation between the error term and qF is positive, the OLS estimates are biased downward (i.e., too negative). If the correlation is negative, the OLS estimates are biased upward (i.e., not negative enough). In order to understand why the bias is ambiguous, suppose there is an increase in (unobserved) demand. 
The demand shock will likely raise the demand for foreign inputs, resulting in a lower q_F^inp. The shock may also induce domestic input suppliers to manufacture new varieties, which will cause downward pressure on the domestic input price index, because more varieties lower the price index. This effect suggests a positive correlation between q_F^inp and the error term in (19). However, the demand shock will also induce an increase in the prices of existing domestic inputs, therefore causing the domestic input price index to increase. If the price increase of existing domestic inputs outweighs the downward pressure due to new varieties, there will be an overall negative correlation between q_F^inp and the error term in (19). Thus, the potential bias of


the OLS estimates is, a priori, ambiguous. The IV coefficients on the variety index are lower than the OLS estimates, suggesting that the negative correlation dominates. We estimate additional variants of equation (19). Our analysis so far has relied on the 1993–1994 IO table for India. This IO table likely reflects India’s production technology across industries at the start of the reform period. At that time, industries may not have relied heavily on inputs of machinery that were subject to high tariffs. Such an IO matrix may thus provide a more noisy measure of the potential to benefit from trade in inputs. As a robustness check, we reconstructed the conventional and variety input price indices using India’s 1998–1999 IO matrix. In column (6) of Table VIb, we report IV results based on these measures. We find that the point estimates are similar to those in column (5) but are, not surprisingly, more precisely estimated. The response of the extensive margin to tariffs is likely to be nonlinear, because India’s strongest or weakest trading partners are less likely to be affected by changes in tariffs. In column (7) we therefore use a third-order polynomial expansion of input tariffs and language proximity as instruments for the conventional and variety input price index and estimate equation (19) with a continuously updating GMM estimator. This estimator is more efficient than the two-stage least-squares estimator and also less prone to potential problems with weak instruments when there are multiple instruments (see Stock, Wright, and Yogo [2002] and Baum, Schaffer, and Stillman [2007]). We again find that lower input variety is associated with expanded product scope and that the magnitudes of the coefficients are similar to those in previous columns, and the first-stage F-statistics improve. Finally, we reestimate equation (19), controlling for changes in output tariffs. 
This specification directly controls for the possibility that trade liberalization affected TFP of domestic firms through declines in output tariffs. These regressions (available upon request) yield coefficients very similar to those reported in columns (4)–(7), suggesting that our assumption that input tariffs affect firms’ product scope only through the conventional input price and variety index is valid. Overall, the analysis suggests that the increase in imported variety enabled Indian firms to expand their product scope. The magnitudes of the coefficients on the imported variety index in columns (3)–(7) are also economically significant and consistent


with the reduced form results in Section III. Consider the coefficient in column (5). The coefficient implies that a 1% decline in the variety index leads to a 13.4% increase in firm scope. This elasticity is large, but it is important to note that the input variety index has been weighted by import shares (see equation (18)) and so the import-share-weighted variety indices are orders of magnitude smaller than the numbers in Table V. During the period of our analysis, input tariffs declined on average by 24 percentage points, and from column (2), the decline in input tariffs led to a 0.25% decline in input variety index on average. The IV point estimate therefore implies a 3.4% increase in scope for the average firm due to the increased availability of imported varieties. Although our theoretical framework suggests that reductions in the price and variety components should have an equal effect on product scope (see equation (13)), our empirical results suggest a much higher elasticity with respect to the variety index. As we noted earlier, one possible explanation for this finding is that the technological complementarity within product lines in a firm is much stronger than the complementarity we capture through the variety index we construct at a more aggregate level. Suppose, for example, that the production of a particular product required the use of particular inputs in fixed proportions. The firm might adjust product lines in response to tariff changes, but there would be no substitution of inputs within product lines. This would make the effect of new varieties very strong: the availability of new inputs would enable the firm to produce entirely new products. Although we cannot provide direct evidence on this hypothesis given the level of aggregation in available data, the large elasticity of product scope with respect to the variety index is highly suggestive. 
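The average-firm magnitude quoted above is simple arithmetic: the estimated elasticity multiplied by the average tariff-induced decline in the variety index. As a quick check (numbers taken from the text, not from the paper's replication code):

```python
elasticity = 13.4       # % increase in firm scope per 1% decline in the variety index
variety_decline = 0.25  # average % decline in the variety index due to lower input tariffs
scope_increase = elasticity * variety_decline
print(scope_increase)   # 3.35, i.e., roughly the 3.4% scope increase reported for the average firm
```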
To conclude, the results in Tables VIa and VIb provide insight into the mechanisms generating the reduced-form results we presented earlier.32 Given that new product additions accounted for about 25% of growth in Indian manufacturing output during our sample, the results suggest that the availability of new imported intermediates played an important role in the growth of Indian manufacturing in the 1990s.

32. We estimated equation (19) for output, TFP, and R&D activity. The analysis confirms that access to more varieties resulted in higher firm output and R&D activity, but lower TFP (although these results are not statistically significant). The counterintuitive sign on TFP could in part reflect the difficulties associated with measuring TFP noted in the introduction.
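The estimation logic of this section, just-identified 2SLS with two instruments and two endogenous regressors, and OLS biased by an error term correlated with the regressors, can be illustrated on synthetic data. Everything below is a hypothetical sketch: the variable names, coefficients, and data-generating process are invented for illustration and are not the paper's data or estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Two hypothetical instruments: an input-tariff shifter and a
# language-proximity shifter.
z_tariff = rng.normal(size=n)
z_lang = rng.normal(size=n)
u = rng.normal(size=n)  # structural error (unobserved demand / domestic prices)

# Two endogenous regressors, both correlated with the structural error u:
# a conventional input price index and an input variety index.
price_idx = 0.8 * z_tariff + 0.3 * u + rng.normal(size=n)
variety_idx = 0.8 * z_lang + 0.3 * u + rng.normal(size=n)

# Outcome (change in product scope); the true variety coefficient is -2.
scope = 1.0 * price_idx - 2.0 * variety_idx + u

X = np.column_stack([np.ones(n), price_idx, variety_idx])
Z = np.column_stack([np.ones(n), z_tariff, z_lang])

# OLS is inconsistent here because u enters both regressors.
beta_ols, *_ = np.linalg.lstsq(X, scope, rcond=None)

# Just-identified 2SLS: with as many instruments as endogenous
# regressors, beta_IV = (Z'X)^{-1} Z'y.
beta_iv = np.linalg.solve(Z.T @ X, Z.T @ scope)

print(beta_ols)  # variety coefficient biased upward (less negative)
print(beta_iv)   # close to the true values [0, 1, -2]
```

Note that in this particular data-generating process the OLS variety coefficient is biased toward zero, so IV is more negative than OLS, the same pattern as in columns (4) and (5).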


V. CONCLUSIONS

After decades of import-substitution policies, Indian firms responded to the 1991 trade liberalization by increasing their imports of inputs. Importantly, two-thirds of the intermediate import growth occurred in products that had not been imported prior to the reforms. During the same period, India also experienced an explosion in the number of products manufactured by Indian firms. In this paper, we use a unique firm-level database that spans the period of India's trade liberalization to demonstrate that the expansion in domestic product scope can be explained in large part by the increased access of firms to new imported intermediate varieties. Hence, our analysis points to an additional benefit of trade liberalization: Lower tariffs increase the availability of new imported inputs. These in turn enable the production of new outputs. Local consumers gain from an increase in domestic variety (on top of the increased number of imported consumer goods).

Our approach relies on detailed product-level information on all Indian imports to measure the input price and variety changes. Because similar data are readily available for many countries, our approach can in principle be used by other researchers interested in the consequences of trade and imported inputs. Additionally, disaggregated data on the use of imported intermediates at the firm level may be available for some countries. However, we believe that relying on aggregate, product-specific import data rather than firm-level data on input use offers a few advantages. First, because the data on product imports are a census, we can say with confidence that the varieties classified as "new" were not available anywhere in India prior to the reform: their total imports were zero.
Second, firms frequently access imported inputs through intermediary channels rather than direct imports; hence, it is possible that a firm that reports zero imported intermediates is in fact using imported intermediates that have been purchased through a domestic intermediary. This implies that there are advantages to using products or sectors as the appropriate units of aggregation. Third, the level of aggregation we use in this study allows us to take advantage of the tariff reforms in our identification strategy. Nevertheless, firm-level data with detailed information on imported inputs by firm may strengthen our understanding of the mechanisms that we highlight. More detailed data would enable us, for example, to study the determinants and consequences


of differential adoption of imported inputs by Indian firms, although such a study would need to address the endogeneity of this differential adoption of imported inputs by firms—the trade policy changes we exploit as a source of identification do not vary by firm.

Our findings relate to growth models that highlight the importance of access to new imported inputs for economic growth and to recent cross-country evidence that lower tariffs on intermediate inputs are associated with income growth (Estevadeordal and Taylor 2008). Our firm-level analysis offers insights into the microeconomic mechanisms underlying growth by focusing on one particular channel, access to imported intermediates, and one particular margin of firm adjustment, product scope. Although we do not concentrate on aggregate growth, the fact that the creation of new domestic products accounted for nearly 25% of total Indian manufacturing output growth during our sample period suggests that the implications of access to new imported intermediate products for growth are potentially important. In future work we plan to further explore the contribution of these new products to TFP by exploiting product-level information on prices and sales available in our data. This will allow us ultimately to provide a direct estimate of the dynamic gains from trade.

REFERENCES

Aghion, Philippe, Robin Burgess, Stephen Redding, and Fabrizio Zilibotti, "The Unequal Effects of Liberalization: Evidence from Dismantling the License Raj in India," American Economic Review, 98 (2008), 1397–1412.
Amiti, Mary, and Jozef Konings, "Trade Liberalization, Intermediate Inputs and Productivity," American Economic Review, 97 (2007), 1611–1638.
Arkolakis, Costas, Svetlana Demidova, Peter J. Klenow, and Andres Rodriguez-Clare, "Endogenous Variety and the Gains from Trade," American Economic Review Papers and Proceedings, 98 (2008), 444–450.
Baum, Christopher F., Mark E. Schaffer, and Steven Stillman, "Enhanced Routines for Instrumental Variables/GMM Estimation and Testing," Stata Journal, 7 (2007), 465–506.
Bernard, Andrew B., Stephen J. Redding, and Peter K. Schott, "Multi-product Firms and Trade Liberalization," NBER Working Paper No. w12782, 2006.
——, "Multi-product Firms and Product Switching," American Economic Review, 100 (2010), 70–97.
Brambilla, Irene, "Multinationals, Technology, and the Introduction of Varieties of Goods," NBER Working Paper No. w12217, 2006.
Broda, Christian, Joshua Greenfield, and David E. Weinstein, "From Groundnuts to Globalization: A Structural Estimate of Trade and Growth," NBER Working Paper No. w12512, 2006.
Broda, Christian, and David E. Weinstein, "Globalization and the Gains from Variety," Quarterly Journal of Economics, 121 (2006), 541–585.
Chopra, Ajai, Charles Collyns, Richard Hemming, Karen Elizabeth Parker, Woosik Chu, and Oliver Fratzscher, "India: Economic Reform and Growth," IMF Occasional Paper No. 134, 1995.


Debroy, Bibek, and A. T. Santhanam, "Matching Trade Codes with Industrial Codes," Foreign Trade Bulletin (New Delhi: Indian Institute of Foreign Trade, 1993).
De Loecker, Jan, "Product Differentiation, Multi-product Firms and Estimating the Impact of Trade Liberalization on Productivity," Princeton University, Mimeo, 2007.
Eckel, Carsten, and J. Peter Neary, "Multi-product Firms and Flexible Manufacturing in the Global Economy," Review of Economic Studies, forthcoming, 2009.
Erdem, Erkan, and James R. Tybout, "Trade Policy and Industrial Sector Responses: Using Evolutionary Models to Interpret the Evidence," Brookings Trade Forum (2003), 1–43.
Estevadeordal, Antoni, and Alan M. Taylor, "Is the Washington Consensus Dead? Growth, Openness, and the Great Liberalization, 1970s–2000s," NBER Working Paper No. w14246, 2008.
Ethier, Wilfred J., "Internationally Decreasing Costs and World Trade," Journal of International Economics, 9 (1979), 1–24.
——, "National and International Returns to Scale in the Modern Theory of International Trade," American Economic Review, 72 (1982), 389–405.
Feenstra, Robert C., "New Product Varieties and the Measurement of International Prices," American Economic Review, 84 (1994), 157–177.
Feenstra, Robert C., Dorsati Madani, Tzu-Han Yang, and Chi-Yuan Liang, "Testing Endogenous Growth in South Korea and Taiwan," Journal of Development Economics, 60 (1999), 317–341.
Feenstra, Robert C., James R. Markusen, and William Zeile, "Accounting for Growth with New Inputs: Theory and Evidence," American Economic Review, 82 (1992), 415–421.
Gang, Ira N., and Mihir Pandey, "Trade Protection in India: Economics vs. Politics?" University of Maryland Working Paper No. 27, December 1996.
Goldberg, Pinelopi K., Amit K. Khandelwal, Nina Pavcnik, and Petia Topalova, "Multi-product Firms and Product Turnover in the Developing World: Evidence from India," Review of Economics and Statistics, forthcoming, 2010a.
——, "Trade Liberalization and New Imported Inputs," American Economic Review, 99 (2010b), 494–500.
Goyal, S. K., "Political Economy of India's Economic Reforms," Institute for Studies in Industrial Development (ISID) Working Paper, 1996.
Grossman, Gene M., and Elhanan Helpman, Innovation and Growth in the Global Economy (Cambridge, MA: MIT Press, 1991).
Halpern, Lazlo, Miklos Koren, and Adam Szeidl, "Imports and Productivity," Federal Reserve Bank of New York, Mimeo, 2009.
Hasan, Rana, Devashish Mitra, and K. V. Ramaswamy, "Trade Reforms, Labor Regulations and Labor Market Elasticities: Empirical Evidence from India," Review of Economics and Statistics, 89 (2007), 466–481.
Helpman, Elhanan, Marc J. Melitz, and Yona Rubinstein, "Estimating Trade Flows: Trading Partners and Trading Volumes," Quarterly Journal of Economics, 123 (2008), 441–487.
Kasahara, Hiroyuki, and Joel Rodrigue, "Does the Use of Imported Intermediates Increase Productivity?" Journal of Development Economics, 87 (2008), 106–118.
Klenow, Peter J., and Andres Rodriguez-Clare, "Quantifying Variety Gains from Trade Liberalization," Penn State University, Mimeo, 1997.
Kochhar, Kalpana, Utsav Kumar, Raghuram Rajan, Arvind Subramanian, and Ioannis Tokatlidis, "India's Pattern of Development: What Happened, What Follows," Journal of Monetary Economics, 53 (2006), 981–1019.
Markusen, James R., "Trade in Producer Services and in Other Specialized Intermediate Inputs," American Economic Review, 79 (1989), 85–95.
Melitz, Marc J., "The Impact of Trade on Intra-industry Reallocations and Aggregate Industry Productivity," Econometrica, 71 (2003), 1695–1725.
Mohan, Rakesh, "The Growth Record of the Indian Economy, 1950–2008: A Story of Sustained Savings and Investment," Reserve Bank of India Speech, 2008.


Muendler, Marc-Andreas, "Trade, Technology, and Productivity: A Study of Brazilian Manufacturers, 1986–1998," University of California at San Diego, Mimeo, 2004.
Navarro, Lucas, "Plant Level Evidence on Product Mix Changes in Chilean Manufacturing," University of London, Mimeo, 2008.
Nocke, Volker, and Stephen R. R. Yeaple, "Globalization and Endogenous Firm Scope," NBER Working Paper No. w12322, 2006.
Nouroz, H., Protection in Indian Manufacturing: An Empirical Study (Delhi: MacMillan India Ltd., 2001).
Olley, Steven G., and Ariel Pakes, "The Dynamics of Productivity in the Telecommunications Equipment Industry," Econometrica, 64 (1996), 1263–1298.
Pavcnik, Nina, "Trade Liberalization, Exit, and Productivity Improvements: Evidence from Chilean Plants," Review of Economic Studies, 69 (2002), 245–276.
Rivera-Batiz, Luis, and Paul M. Romer, "Economic Integration and Endogenous Growth," Quarterly Journal of Economics, 106 (1991), 531–555.
Romer, Paul M., "Growth Based on Increasing Returns Due to Specialization," American Economic Review, 77 (1987), 56–62.
——, "Endogenous Technological Change," Journal of Political Economy, 98 (1990), 71–102.
——, "New Goods, Old Theory, and the Welfare Costs of Trade Restrictions," Journal of Development Economics, 43 (1994), 5–38.
Sivadasan, Jagadeesh, "Barriers to Competition and Productivity: Evidence from India," B.E. Journal of Economic Analysis & Policy, 9 (2009), 42.
Stock, James H., Jonathan H. Wright, and Motohiro Yogo, "A Survey of Weak Instruments and Weak Identification in Generalized Method of Moments," Journal of Business & Economic Statistics, 20 (2002), 518–529.
Topalova, Petia, and A. K. Khandelwal, "Trade Liberalization and Firm Productivity: The Case of India," Review of Economics and Statistics, forthcoming, 2011.

STOCK-BASED COMPENSATION AND CEO (DIS)INCENTIVES∗

EFRAIM BENMELECH
EUGENE KANDEL
PIETRO VERONESI

The use of stock-based compensation as a solution to agency problems between shareholders and managers has increased dramatically since the early 1990s. We show that in a dynamic rational expectations model with asymmetric information, stock-based compensation not only induces managers to exert costly effort, but also induces them to conceal bad news about future growth options and to choose suboptimal investment policies to support the pretense. This leads to a severe overvaluation and a subsequent crash in the stock price. Our model produces many predictions that are consistent with the empirical evidence and are relevant to understanding the current crisis.

∗ We would like to thank Mary Barth, Joseph Beilin, Dan Bernhardt, Jennifer Carpenter, Alex Edmans, Xavier Gabaix, Dirk Hackbarth, Zhiguo He, Elhanan Helpman (the editor), Ilan Guttman, Alex Joffe, Ohad Kadan, Simi Kedia, Ilan Kremer, Holger Muller, Thomas Philippon, Andrei Shleifer, Lucian Taylor, and three anonymous referees, as well as seminar participants at the 2009 NBER Summer Institute Asset Pricing meetings, the 2008 Western Finance Association meetings in Waikoloa, the 2006 ESSFM Conference in Gerzensee, the 2006 Finance and Accounting Conference in Atlanta, Hebrew University, IDC, Michigan State, NYU Stern, Oxford, Stanford, Tel Aviv, the University of Chicago, the Chicago Fed, the University of Illinois at Urbana–Champaign, and Washington University for helpful comments and suggestions. Kandel thanks the Krueger Center for Finance Research at the Hebrew University for financial support.

© 2010 by the President and Fellows of Harvard College and the Massachusetts Institute of Technology. The Quarterly Journal of Economics, November 2010

I. INTRODUCTION

Although a large theoretical literature views stock-based compensation as a solution to an agency problem between shareholders and managers, a growing body of empirical evidence shows that it may also lead to earnings management, misreporting, and outright fraudulent behavior. Does stock-based compensation amplify the tension between the incentives of managers and shareholders instead of aligning them?

The ongoing global financial crisis has brought forth renewed concerns about the adverse incentives that stock-based compensation may encourage. Many managers of the recently troubled financial institutions were among the highest-paid executives in the United States, with huge equity-based personal profits realized when their firms' stock prices were high. Although the subsequent sharp decline of their firms' stock prices may have been due to exogenous systemic shocks to the economy, it is an important
open question whether the extent of their stock-based compensation may have induced CEOs to willingly drive prices up in full awareness of the impending crash. Indeed, similar concerns about these possible perverse effects of stock-based compensations on CEOs’ behavior were raised after the burst of the dot-com bubble. As governments across the globe are preparing a new wave of sweeping regulation, it is important to study the incentives induced by stock-based compensation, as well as the trade-offs involved in any decision that may affect the stock components in executives’ compensation packages.1 In this paper, we show formally that although stock-based compensation induces managers to exert costly effort to increase their firms’ investment opportunities, it also supplies incentives for suboptimal investment policies designed to hide bad news about the firm’s long-term growth. We analyze a dynamic rational expectations equilibrium model and identify conditions under which stock-based executive compensation leads to misreporting, suboptimal investment, run-up, and a subsequent sharp decline in equity prices. More specifically, we study a hidden-action model of a firm that is run by a CEO whose compensation is stock-based. The firm initially experiences high growth in investment opportunities and the CEO must invest intensively to exploit the growth options. The key feature of our model is that at a random point in time the growth of the firm’s investment opportunities slows down. The CEO is able to postpone the expected time of this decline by exercising costly effort. But when the investment opportunities growth does inevitably slow down, the investment policy of the firm should change appropriately. We assume that whereas the CEO privately observes the slowdown in the growth rate, shareholders are oblivious to it. Moreover, they do not observe investments, but base their valuation only on dividend payouts. 
When investment opportunities decline, the CEO has two options: revealing the decline in investment opportunities to shareholders, or behaving as if nothing had happened. Revealing the decline to shareholders leads to an immediate decline in the stock price. If the CEO chooses not to report the change in the business environment of the firm, the stock price does not fall, as the outside investors have no way of deducing this event, and equity becomes overvalued. To maintain the pretense over time, the CEO must design a suboptimal investment strategy: we assume that as long as the reported dividends over time are consistent with the high growth rate, the CEO keeps his or her job. Any deviation in dividends that follows a suboptimal investment policy leads to the CEO's dismissal.

We show that when a CEO's compensation is based on stock, and the range of possible growth rates is large, there exists a pooling Nash equilibrium for most parameter values. In this equilibrium, the CEO of a firm that experienced a decline in the growth rate of investment opportunities follows a suboptimal investment policy designed to maintain the pretense that investment opportunities are still strong. We solve for the dynamic pooling equilibrium in closed form and fully characterize the CEO's investment strategy. In particular, because the CEO is interested in keeping a high growth profile for as long as possible, he or she initially invests in negative-NPV projects as stores of cash, and later on foregoes positive-NPV projects in order to meet rapidly growing demand for dividends. In both cases, he or she destroys value. Because this strategy cannot be kept forever, at some point the firm experiences a cash shortfall, the true state is revealed, and the stock price sharply declines as the firm needs to recapitalize.

Our model highlights the tension that stock-based compensation creates. Although the common wisdom of hidden-action models is to align the manager's objectives with those of investors by tying his or her compensation to the stock price, we show that stock-based compensation may lead the manager to invest suboptimally and destroy value to conceal bad news about future growth.

1. For instance, in January 2009 the U.S. government imposed further restrictions on the non-performance-related component of compensation packages. In light of our results, it seems that the administration is moving in the wrong direction.
The trade-off is made apparent by the fact that for most reasonable parameter values, and especially for medium- to high-growth companies, stock-based compensation indeed induces an equilibrium with high effort but also leads to a suboptimal investment strategy. That is, the cost of inducing high managerial effort ex ante comes from the suboptimal investment policy after the slowdown in investment opportunities, which eventually leads to undercapitalization and a stock price crash. Although our analysis focuses on a linear stock-based compensation contract, we also consider alternative compensation schemes widely used in the industry. We analyze (i) flat wage contracts, (ii) deferred compensation, (iii) option-based compensation, (iv) bonuses, and (v) clawback clauses. We discuss the pros


and cons of each of these contracts and show that these commonly used compensation schemes are not necessarily efficient either ex ante in inducing managerial effort, or ex post in forcing the manager to reveal the true state of the firm.

We then propose and analyze an optimal managerial compensation contract. We show that this double incentive problem (i.e., inducing high effort and revelation) can often be overcome by a firm-specific compensation scheme characterized by a combination of stock-based compensation and a bonus awarded to the CEO upon revelation of the bad news about long-term growth. Indeed, although stock-based compensation is necessary to induce the manager to exert costly effort and increase investment opportunities, it also implicitly punishes the CEO for truth telling, as the stock price will sharply decline. The contingent bonus, possibly in the form of a golden parachute or a generous severance package, is then necessary to compensate for the loss in the CEO's stock holdings. However, we also show that a bonus contract alone would not work ex ante, because although it induces truth telling, it also provides an incentive not to exert effort, as this behavior anticipates the time of the bonus payment. The stock-based component ensures high effort.

An important implication of the model is that different types of firms need different amounts of stock in their compensation packages. Specifically, we find that the compensation packages of CEOs of growth firms, that is, firms with high growth in investment opportunities and a high return on capital, should have little stock-price sensitivity. Indeed, a calibration of the model shows that for most firms the stock-based compensation component should never be above 40% of the total CEO compensation in order to induce truth revelation and optimal investments.
Similarly, for most firms with medium-high return on investment, the stock-based compensation component should be strictly positive, to induce high effort. These results suggest that policymakers and firms’ boards of directors should be careful both with an outright ban of stock-based compensation and with too much reliance on it. Our model’s predictions are consistent with the empirical evidence documenting that stock-based executive compensation is associated with earnings management, misreporting and restatements of financial reports, and outright fraudulent accounting (e.g., Healy [1985], Beneish [1999], Ke [2005], Bergstresser and Philippon [2006], Burns and Kedia [2006], Johnson, Ryan, and


Tian [2009], and Kedia and Philippon [2010]). In fact, our model's predictions go beyond the issue of earnings manipulation and restatements, as we focus on the entire investment behavior of the firm over the long haul. Similarly, our model's predictions are consistent with the survey results of Graham, Harvey, and Rajgopal (2005), according to which most managers state that they would forego a positive-NPV project if it caused them to miss an earnings target, with high-tech firms much more likely to do so. High-tech firms are also much more likely to cut R&D and other discretionary spending to meet a target. On the same note, our model's predictions are also consistent with Skinner and Sloan (2002), who show that the decline in firm value following a failure to meet analysts' forecasts is more pronounced in high-growth firms.

Our paper is related to the literature on managerial "short-termism" and myopic corporate behavior (e.g., Stein [1989], Bebchuk and Stole [1993], Jensen [2005], and Aghion and Stein [2008]). In terms of assumptions, our paper bears some similarities to Miller and Rock (1985), who study the effects of dividend announcements on the value of firms. Similarly to Inderst and Mueller (2006) and Eisfeldt and Rampini (2008), we also assume that the CEO has a significant informational advantage over investors, but differently from them, we focus on investors' beliefs about future growth rates and their effect on managers' incentives. Our paper is also related to Bolton, Scheinkman, and Xiong (2006), Goldman and Slezak (2006), and Kumar and Langberg (2009), but it differs from them in that we emphasize the importance of firms' long-term growth options, which have a strong "multiplier" impact on the stock price and thus on CEO incentives to hide any worsening of investment opportunity growth.
Finally, our paper is also related to the recent literature on dynamic contracting under asymmetric information (e.g., Quadrini [2004], Clementi and Hopenhayn [2006], and DeMarzo and Fishman [2007]). These papers focus on the properties of the optimal contract that induces full revelation, such that there is no information asymmetry in equilibrium. Although we also find the contract that induces full revelation and the first best, the main focus of our paper is the properties of the dynamic pooling equilibrium in which the manager does not reveal the true state, which we believe to be widespread. This analysis is complicated by the feedback effect that the equilibrium price dynamics exerts on


the CEO's compensation and thus on the CEO's optimal intertemporal investment strategy, which in turn affects the equilibrium price dynamics itself through shareholders' beliefs. The solution of this fixed point problem is absent in other dynamic contracting models, but is at the heart of our paper.

We organize the paper as follows. Section II presents the model setup. Section III presents the disincentives that stock-based compensation creates when there is information asymmetry. Section IV considers alternative compensation schemes. Section V provides the quantitative implications of our model. We discuss the broader implications of our results in Section VI.

II. THE MODEL

We consider a firm run by a manager who (a) chooses an unobservable effort level that affects the growth opportunities of the firm; (b) privately observes the realization of the growth opportunities and decides whether to report them to the public; and (c) chooses the investment strategy for the firm that is consistent with his or her public announcement. Our analysis focuses on the manager's trade-off between incentives to exert costly effort to maintain high growth of investment opportunities, and incentives to reveal to shareholders when investment opportunities growth slows down.

We start by defining the firm's investment opportunities, which are described by the following production technology: given the stock of capital $K_t$, the firm's operating profit (output) $Y_t$ is

\[
Y_t =
\begin{cases}
zK_t & \text{if } K_t \le J_t \\
zJ_t & \text{if } K_t > J_t,
\end{cases}
\tag{1}
\]

where $z$ is the rate of return on capital and $J_t$ defines an upper bound on the amount of deployable productive capital that depends on the technology, operating costs, demand, and so on. The Leontief technology specification (1) implies constant returns to scale up to the upper bound $J_t$, and then zero return for $K_t > J_t$.
This simple specification of a decreasing-returns-to-scale technology allows us to conveniently model the evolution of the growth rate of profitable investment opportunities, which serves as the driving force of our model. The existing stock of capital depreciates at a rate δ.
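As a minimal numerical sketch of the technology in (1) (the parameter values below are illustrative, not the paper's Table I calibration):

```python
def operating_profit(z: float, K: float, J: float) -> float:
    """Leontief technology (1): output is z*K up to the capacity bound J,
    and flat at z*J beyond it (zero marginal return on excess capital)."""
    return z * min(K, J)

# Below the bound, output scales with K; above it, output is capped at z*J.
print(operating_profit(0.2, 1.0, 1.5))   # output scales with K here
print(operating_profit(0.2, 2.0, 1.5))   # capped at z*J: excess capital earns nothing
```

The second call illustrates why, in the first best, holding capital above J_t is wasteful: the excess earns zero return while still depreciating.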

STOCK-BASED COMPENSATION AND CEO (DIS)INCENTIVES
FIGURE I
Growth in Investment Opportunities
This figure plots the output (earnings) profile Y_t as a function of capital K_t for three different time periods, t = 0, t = 2, and t = 4.

We assume that the upper bound J_t in (1) grows according to

(2)    dJ_t/dt = g̃ J_t,

where g̃ is a stochastic growth rate described below. The combination of (1) and (2) yields growing investment opportunities for the firm. Because the technology displays constant returns to scale up to J_t, it is optimal to keep capital at the level J_t whenever these investments are profitable, which we assume throughout. Figure I illustrates the growth in investment opportunities.

We set time t = 0 to be the point at which shareholders know the firm's capital K_0, as well as the current growth rate of investment opportunities, g̃ = G. One can think of t = 0 as the time of the firm's initial public offering (IPO) or seasoned equity offering (SEO), or of a major corporate event, such as a reorganization, that has elicited much information about the state of the firm. This is mostly a technical simplifying assumption, as we believe that all the insights would remain if the market instead had a system of beliefs over the initial capital and the growth rate.

Firms tend to experience infrequent changes in their growth rates. We are interested in declines of the growth rate, as these are the times when the manager faces a hard decision about whether to reveal the bad news to the public. Any firm may experience such a decline; thus our analysis applies to a wide variety of scenarios. We


model the stochastic decline in investment opportunity growth as a discrete shift from the high-growth regime, g̃ = G, to a low-growth regime, g̃ = g (< G), that occurs at a random time τ*, where τ* is exponentially distributed with parameter λ. Formally,

(3)    g̃ = G for t < τ* and g̃ = g for t ≥ τ*,    where f(τ*) = λe^{−λτ*}.

We assume that the manager's actions affect the time at which the decline occurs. After all, CEOs must actively search for investment opportunities and monitor markets and internal developments, all of which require time and effort. In our model, higher effort translates into a smaller probability of shifting to lower growth. More specifically, the manager can choose to exert high or low effort, e_H > e_L. Choosing higher effort increases the expected time τ* at which investment opportunity growth declines. Formally,

λ_H ≡ λ(e_H) < λ(e_L) ≡ λ_L  ⟺  E[τ*|e_H] > E[τ*|e_L].

The cost of high effort is positive, whereas the cost of low effort is normalized to zero: c(e) ∈ {c_H, c_L}, with c_H > c_L = 0.
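The regime process in (2)-(3) and the effect of effort on the switch time can be sketched as follows. This is a hypothetical simulation with illustrative intensities (not the paper's calibration); `expovariate` draws the exponential switch time τ* with the given intensity λ:

```python
import math
import random

def capacity_path(J0: float, G: float, g: float, tau_star: float, t: float) -> float:
    """Upper bound J_t from (2)-(3): grows at rate G before tau*, at rate g after."""
    return J0 * math.exp(G * min(t, tau_star) + g * max(t - tau_star, 0.0))

rng = random.Random(0)
lam_H, lam_L = 0.02, 0.10   # illustrative intensities, lam_H < lam_L
draws_H = [rng.expovariate(lam_H) for _ in range(20000)]
draws_L = [rng.expovariate(lam_L) for _ in range(20000)]

# High effort lowers the intensity, so E[tau*|e_H] = 1/lam_H > 1/lam_L = E[tau*|e_L].
mean_H = sum(draws_H) / len(draws_H)
mean_L = sum(draws_L) / len(draws_L)
print(mean_H > mean_L)  # True
```

The sample means confirm the ordering E[τ*|e_H] > E[τ*|e_L] that defines the value of high effort in the model.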

To keep the analysis simple, we assume linear preferences of the manager,

(4)    U_t = E_t[ ∫_t^T e^{−β(u−t)} w_u (1 − c(e)) du ],

where w_u is the periodic wage of the CEO, and β is his or her discount rate.^2 We specify the cost of effort in a multiplicative fashion, which allows us to preserve scale invariance. Economically, this assumption implies complementarity between the wage and "leisure" (1 − c(e)), a relatively standard assumption in macroeconomics. That is, effort is costly precisely because it does not allow the CEO to enjoy his or her pay w_t as much as possible.^3 In (4), T is the time at which the manager leaves the firm, possibly T = ∞.^4 However, the departing date T may occur

2. Our results also hold with risk-averse managers, although analytical tractability is lost.
3. See Edmans, Gabaix, and Landier (2009).
4. T = ∞ also corresponds to the case in which there is a constant probability that the manager leaves the firm or dies, whose intensity is then included in the discount rate β, as in Blanchard (1985).


earlier, as the manager may be fired if the shareholders learn that he or she has followed a suboptimal investment strategy.

We must make several technical assumptions to keep the model from diverging, degenerating, or becoming intractable. First, we assume for tractability that the manager's decisions are firm-specific, and thus do not affect the systematic risk of the stock or its cost of capital, which we denote by r. Then we must assume that

(5)    z > r + δ;

that is, the return on capital z is sufficiently high to compensate for the cost of capital r and depreciation δ. This assumption implies that it is economically optimal for investors to provide capital to the company and invest up to its fullest potential, as determined by the Leontief technology described in (1). To ensure a finite value of the firm's stock price, we must assume that r > G − λ_H and r > g. We also set β > G to ensure that the total utility of the manager is finite. We also assume that β ≥ r; that is, the manager has a higher discount rate than fully diversified investors.^5

Although we assume that the market does not observe the investments and the capital stock, there is a limit to what the firm can conceal. We model this by assuming that to remain productive, the firm must maintain a minimum level of capital, K_t ≥ K̲_t, where K̲_t is exogenously specified and, for simplicity, depends on the optimal size of the firm:

(6)    K_t ≥ K̲_t = ξ J_t    for 0 ≤ ξ < 1,

where J_t is defined in (2). This is a purely technical assumption, and ξ is a free parameter.

Finally, we assume for simplicity that the firm does not retain earnings; thus the dividend rate equals its operating profit, Y_t, derived from its stock of capital, K_t, less the investment it chooses to make, I_t. Given the technology in (1), the dividend rate is

(7)    D_t = z min(K_t, J_t) − I_t.

5. It is intuitive that the discount rate of an individual, β, is higher than the discount rate of shareholders: for instance, a manager may be less diversified than the market, or β may reflect some probability of leaving the firm early or death, or simply a shorter horizon than the market itself.


II.A. Investments and Stock Prices in the First Best

To build intuition, it is useful first to derive the optimal investment and the stock price dynamics under the first best. To maximize firm value, the manager must invest to the firm's fullest potential, that is, keep K_t = J_t for all t. We solve for the investment rate I_t that ensures that this constraint is satisfied, and we obtain the following:

PROPOSITION 1. The first-best optimal investment policy given λ is

(8)    I_t = (G + δ)e^{Gt}    for t < τ*,
       I_t = (g + δ)e^{Gτ* + g(t−τ*)}    for t ≥ τ*.

The dividend stream of a firm that fully invests is given by

(9)    D_t = zK_t − I_t = D_t^G = (z − G − δ)e^{Gt}    for t < τ*,
       D_t = zK_t − I_t = D_t^g = (z − g − δ)e^{Gτ* + g(t−τ*)}    for t ≥ τ*.
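A direct numerical check of (9), with illustrative parameters satisfying z > r + δ and G > g (hypothetical values, not the paper's Table I): at τ* the dividend jumps up by (G − g)e^{Gτ*}, because the slowing firm cuts its investment rate.

```python
import math

def first_best_dividend(t: float, tau_star: float,
                        z: float, G: float, g: float, delta: float) -> float:
    """First-best dividend stream (9)."""
    if t < tau_star:
        return (z - G - delta) * math.exp(G * t)
    return (z - g - delta) * math.exp(G * tau_star + g * (t - tau_star))

# Illustrative parameters (assumptions for this sketch only).
z, G, g, delta, tau = 0.2, 0.05, 0.01, 0.1, 10.0

# Payout jump at tau*: D^g_{tau*} - D^G_{tau*} = (G - g) * e^{G tau*}.
jump = first_best_dividend(tau, tau, z, G, g, delta) - (z - G - delta) * math.exp(G * tau)
print(abs(jump - (G - g) * math.exp(G * tau)) < 1e-12)  # True
```

The jump is exactly the investment saved when the optimal capital growth rate falls from G to g.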

The top panel of Figure II plots the dynamics of the optimal dividend path for a firm with high growth in investment opportunities until τ*, and low growth afterward. As the figure shows, the slowdown in investment opportunities requires a decline in the investment rate, which initially increases the dividend payout rate: D_{τ*}^g − D_{τ*}^G = (G − g)e^{Gτ*}. Given the above assumptions, the dividend rate is always positive. Moreover, from (9), the dividend growth rate equals the growth rate of investment opportunities, g̃. Given the dividend profile, the price of the stock follows:

PROPOSITION 2. Given λ, under symmetric information the value of the firm is

(10)    P_{fi,t}^after = ∫_t^∞ e^{−r(s−t)} D_s^g ds = [(z − g − δ)/(r − g)] e^{Gτ* + g(t−τ*)}    for t ≥ τ*,

(11)    P_{fi,t}^before = E_t[ ∫_t^{τ*} e^{−r(s−t)} D_s^G ds + e^{−r(τ*−t)} P_{fi,τ*}^after ] = e^{Gt} A_λ^fi    for t < τ*,


FIGURE II
A Dividend Path (Top Panel) and a Price Path (Bottom Panel) under Perfect Information
Top panel: dividend path under perfect information. Bottom panel: price path under perfect information. Parameter values are in Table I.

where

(12)    A_λ^fi = (z − G − δ)/(r + λ − G) + λ(z − g − δ)/[(r − g)(r + λ − G)].

In the pricing functions (10) and (11), the subscript "fi" stands for "full information," and the superscripts "after" and "before" refer to t ≥ τ* and t < τ*, respectively. Under full information, the share price drops at time τ* by

P_{fi,τ*}^after − P_{fi,τ*}^before = −e^{Gτ*} (z − r − δ)(G − g)/[(r − g)(r + λ − G)],


whose magnitude increases in τ*. The bottom panel of Figure II plots the price path in the benchmark case corresponding to the dividend path in the top panel.

We finally show that, all else equal, shareholders prefer the manager to exert high effort, as the choice of e_H maximizes firm value.

COROLLARY 1. The firm value under e_H is always higher than under e_L; that is,

(13)    P_{fi,t}^before(λ_H) > P_{fi,t}^before(λ_L).

By simple substitution in Proposition 2, it is easy to see that (13) holds iff

(14)    z − r − δ > 0,

which is always satisfied (see condition (5)).

It is intuitive that without a proper incentive scheme, the CEO will not exert high effort in this environment, even if τ* is observable, because of the cost of effort. In our setting, shareholders cannot solve this incentive problem by simply "selling the firm to the manager":

COROLLARY 2. The manager's personal valuation of the firm before τ* is as in (11), but with β substituted for r. Thus, a manager/owner exerts high effort, e_H, iff

(15)    (λ_L + β − G)/(λ_H + β − G) > (1 + λ_L H^Div)/(1 − c_H + λ_H H^Div),

where

(16)    H^Div = (z − g − δ)/[(z − G − δ)(β − g)].

Condition (15) is intuitive. First, when effort does not produce much of an increase in the expected τ*, that is, when λ_H ≈ λ_L, the condition is never satisfied with c_H > 0, and therefore the manager does not exert high effort. Second, and less intuitively, even when effort is costless (c_H = 0) the manager may still not choose high effort, even if he or she owns the firm. In this case, condition


(15) is satisfied iff^6

(17)    z − β − δ > 0,

which is similar to (14), but with the manager's discount rate β taking the place of the shareholders' discount rate r. That is, if the manager is impatient and the return on capital is low, so that β > z − δ, then the manager/owner will not exert high effort. Because of this, the manager's personal valuation of the firm is much lower than the shareholders' valuation, a difference that goes beyond the simple difference due to discounting.

In line with previous work, stock-based compensation provides the incentive for the manager to exert costly effort. For simplicity, we focus first on the simplest compensation program, in which the manager receives shares at a constant rate η per period. Because of linear preferences, it is always optimal for the manager to sell the shares immediately;^7 therefore his or her effective compensation at time t is

(18)    w_t = η P_t.

PROPOSITION 3. In an equilibrium with consistent beliefs, where λ is the current intensity of τ* and the stock price is P_{fi,t}^before = e^{Gt} A_λ^fi, with A_λ^fi as in (12), the manager exerts high effort, e_H, under stock-based compensation (18) iff

(19)    (λ_L + β − G)/(λ_H + β − G) > (A_λ^fi + λ_L H^Stock)/(A_λ^fi (1 − c_H) + λ_H H^Stock),

where

H^Stock = (z − g − δ)/[(r − g)(β − g)].

It follows that a high-effort Nash equilibrium occurs iff (19) is satisfied for A_{λ_H}^fi. A low-effort Nash equilibrium occurs iff (19) is not satisfied for A_{λ_L}^fi.

6. This condition is obtained by substituting c_H = 0 and the value of H^Div into (15) and rearranging terms.
7. This is true in the full information equilibrium, as there is no signaling role of consumption. It is also true in a pooling asymmetric equilibrium, discussed next, as any deviation from this rule would reveal the manager's information and destroy the equilibrium.
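Conditions such as (15) and (19) are easy to check numerically. Below is a hedged sketch of (19), with illustrative parameter values chosen to satisfy the maintained assumptions (z > r + δ, r > G − λ, β ≥ r > g, β > G); the numbers are assumptions for this sketch, not the paper's calibration:

```python
def A_fi(z, G, g, delta, r, lam):
    """Full-information price multiplier (12)."""
    return (z - G - delta) / (r + lam - G) + lam * (z - g - delta) / ((r - g) * (r + lam - G))

def high_effort_under_stock_comp(z, G, g, delta, r, beta, lam_H, lam_L, c_H, lam):
    """Condition (19), evaluated at the price multiplier A_fi(lam)."""
    A = A_fi(z, G, g, delta, r, lam)
    H_stock = (z - g - delta) / ((r - g) * (beta - g))
    lhs = (lam_L + beta - G) / (lam_H + beta - G)
    rhs = (A + lam_L * H_stock) / (A * (1 - c_H) + lam_H * H_stock)
    return lhs > rhs

# Illustrative parameters (hypothetical): with c_H = 0 the condition holds,
# consistent with the remark that costless effort makes (19) hold.
params = dict(z=0.2, G=0.05, g=0.01, delta=0.1, r=0.06, beta=0.08,
              lam_H=0.02, lam_L=0.10)
print(high_effort_under_stock_comp(c_H=0.0, lam=0.02, **params))  # True
```

Raising the effort cost eventually flips the condition, as the next assertion illustrates.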


We note that if the cost of effort is zero, c H = 0, then condition (19) is always satisfied, as the price multiplier of the stock, largely due to future investment opportunities, is capitalized in the current salary, providing the proper incentive for the manager to exert high effort.

III. THE (DIS)INCENTIVES OF STOCK-BASED COMPENSATION

The preceding section shows that when τ* is observable, simple stock-based compensation resolves the shareholders' incentive problem. Clearly, however, the manager has much more information than investors regarding the future growth opportunities of the firm, as well as its actual investments. Is this compensation scheme still optimal when τ* is the manager's private information?

To build intuition, consider again the random time τ* in the bottom panel of Figure II, and assume now that τ* is private information of the manager. In this case, if the manager disclosed the information, his or her wage would drop from w_t = ηP_t^before to w_t = ηP_t^after, depicted in the figure by the relatively sharp drop in price. That is, a pure form of stock-based compensation effectively implies that shareholders severely punish the manager for revealing bad news about the growth prospects of the firm. It is important to note that the bad news concerns only the growth prospects (which we refer to as growth options, in line with asset-pricing terminology), and not the return on assets in place, which we assume constant and equal to z.

Given the opportunity, the manager will try to conceal this information. However, this concealment strategy is harder to implement than it first seems, even if shareholders have no information about the firm's investment and capital dynamics. In reality, shareholders have only imprecise signals about the amount of economic capital and investment undertaken by the corporation. This is especially true in industries characterized by high R&D expenditures, intellectual property, or a high degree of opacity in their operations (e.g., financial institutions), or in rapidly growing new industries, where the market does not know how to distinguish between investments and costs.
For tractability reasons, we assume that signals about the level of capital and investments have in fact infinite noise, and thus shareholders form beliefs about the manager’s actions only


by observing realized dividend payouts. Although this assumption is extreme, it reflects the lack of randomness in the return on capital z that we assume. A more realistic model would have both a stochastic z and informative signals about capital and investments. Such a model is much more challenging to analyze, not only because the dynamics of shareholders' beliefs, which affect prices, are more complicated, but also because the CEO's optimal investment strategy would be extremely complex, as he or she would have to balance the amount of information to reveal against the need to conceal the bad news for as long as possible. We note that although some information about investments and capital would make it easier for shareholders to monitor the manager, a random return on capital z would also make it easier for the manager to conceal bad news longer, as he or she could attribute low dividend payouts to temporary negative shocks to z rather than to a permanent decline in investment opportunities. It is thus not obvious that our assumption of a nonstochastic z but unobservable K and I makes it easier for the manager to conceal τ* than a more realistic setting would.

Shareholders know that at t = 0 the firm has a given capital stock K_0 and a high growth rate G of investment opportunities. As long as the firm is of type G, they expect a dividend D_t^G, as described in (9).^8 We assume that whenever the dividend deviates from the path of a G firm, shareholders perform an internal investigation in which the whole history of investments is made public. This assumption is realistic, as only major changes in the firm's dividend policy may act as a coordination device across dispersed shareholders, who may then call for a possibly costly internal investigation of the firm. If the drop in dividend payouts does not correspond to the time of the slowdown in investment opportunities (τ*), the CEO is dismissed from the firm.

III.A. Investment under the Conceal Strategy

If the CEO decides to conceal the truth, he or she must design an investment strategy that enables the firm to continue paying the high-growth dividend stream D_t^G in (9). Intuitively, such a strategy cannot be sustained forever, as it will eventually require more cash than the firm produces. We denote by T** the time at which the

8. The assumption that dividends can be used to reduce agency costs and monitor managers has been suggested by Easterbrook (1984).


firm experiences a cash shortfall and must disclose the truth to investors. Because the firm's stock price will decline at that time, and the manager will lose his or her job, it is intuitive that the best strategy for the CEO is to design an investment strategy that maximizes T**, as established in the following lemma:

LEMMA 1. Conditional on the decision to conceal the true state at τ*, the manager's optimal investment policy is to maximize the time T** until the cash shortfall.

The next proposition characterizes the investment strategy that maximizes T**:

PROPOSITION 4. Let K_{τ*−} denote the capital accumulated in the firm by time τ*. If the CEO chooses to conceal the decline in growth opportunities at τ*, then:
1. He or she employs all the existing capital stock: K_{τ*} = K_{τ*−}.
2. His or her investment strategy for t > τ* is I_t = z min(K_t, J_t) − (z − G − δ)e^{Gt}.
3. The firm's capital dynamics are characterized as follows. Let h* and h** be the two constants defined in (49) and (50) in the Appendix, with T** = τ* + h**. Then
   a. for t ∈ (τ*, τ* + h*), the firm's capital K_t exceeds its optimal level J_t;
   b. for t ∈ (τ* + h*, T**], the firm's capital K_t is below its optimal level J_t.

Point 1 of Proposition 4 shows that in order to maximize the time of the cash shortfall, T**, the manager must deploy all of the firm's existing capital in the suboptimal investment strategy. The investment strategy in point 2 of the proposition ensures that dividends equal the higher growth profile D_t^G = (z − G − δ)e^{Gt} (see (9)) for as long as possible. The extent of the suboptimality of this investment strategy is laid out in point 3: the CEO initially amasses an amount of capital above its optimal level J_t (for t < τ* + h*), whereas eventually the capital stock must fall short of J_t (for t ∈ (τ* + h*, T**)). These dynamics are illustrated in Figure III for a parametric example.
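The conceal strategy of Proposition 4 can also be simulated directly. The sketch below uses an Euler discretization with illustrative parameters, normalizing τ* = 0 and K_{τ*} = J_{τ*} = 1; these normalizations and values are assumptions for this sketch (the paper instead derives h* and h** in closed form). The crisis time T** is found as the first time K_t hits the minimum capital ξJ_t:

```python
import math

# Illustrative parameters (hypothetical, not the paper's Table I calibration).
z, G, g, delta, xi = 0.2, 0.05, 0.01, 0.1, 0.8

def conceal_crisis_time(K0: float = 1.0, dt: float = 1e-3, t_max: float = 100.0):
    """Euler simulation of dK/dt = I_t - delta*K_t under the conceal strategy:
    I_t = z*min(K_t, J_t) - (z - G - delta)*e^{G t} keeps dividends on the
    high-growth path D_t^G even though J_t now grows only at rate g."""
    K, t = K0, 0.0
    while t < t_max:
        J = math.exp(g * t)          # tau* normalized to 0, so J_t = e^{g t}
        if K <= xi * J:              # cash shortfall: K hits the minimum capital
            return t
        I = z * min(K, J) - (z - G - delta) * math.exp(G * t)
        K += (I - delta * K) * dt
        t += dt
    return None

T_crisis = conceal_crisis_time()
print(T_crisis is not None and T_crisis > 0)  # True
```

Consistent with point 3 of the proposition, the simulated capital first overshoots J_t (storage in negative-NPV projects) and then is run down until the shortfall.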
The top panel shows that the capital stock under the conceal strategy initially exceeds the upper bound on employable capital, K_t > J_t. This implies that the pretending firm initially invests in negative-NPV projects, as shown in the bottom panel. Indeed, although


FIGURE III
The Dynamics of Capital and Investments under the Reveal and Conceal Equilibria after τ* (Normalized to 0 in This Figure)
This figure shows the capital dynamics (top panel) and the investment dynamics (bottom panel) for a g firm pretending to be a G firm (dashed line), relative to the revealing strategy (solid line). The vertical dotted line denotes the liquidity crisis time T**. Parameter values are in Table I.

the excess capital stock K_t − J_t has a zero return by assumption, it still depreciates at the rate δ. Intuitively, when investment opportunities slow down, the CEO is supposed to return capital to the shareholders (see Figure II). Instead, if the CEO pretends that nothing has happened, he or she invests this extra cash in negative-NPV projects as a store of value to delay T** as much as possible. The bottom panel of Figure III shows that eventually the pretending firm disinvests to raise cash for the larger dividends of the growing firm. The firm can do this as


long as its capital K_t remains above the minimal capital K̲_t. Indeed, the condition K_{T**} = K̲_{T**} determines T**.^9

In a conceal Nash equilibrium, rational investors anticipate the behavior of the manager and price the stock accordingly. We derive the pricing function next.

III.B. Pricing Functions under Asymmetric Information

At time T**, the pretending firm experiences a cash shortfall and is not able to pay its dividends D_t^G. At this time, there is full information revelation, and thus the valuation of the firm becomes straightforward. The only difference from the symmetric information case is that the firm now does not have sufficient capital to operate at its full potential; thus it needs to recapitalize. Because at T** the firm's capital equals the minimum employable capital level, K_{T**} = K̲_{T**}, whereas the optimal capital level is J_{T**}, the firm must raise J_{T**} − K̲_{T**}. From assumption (6), K̲_{T**} = ξJ_{T**}, which yields the pricing function

(20)    P_{ai,T**}^L = ∫_{T**}^∞ e^{−r(t−T**)} D_t^g dt − J_{T**}(1 − ξ)
                     = e^{(G−g)τ* + gT**} [ (z − r − δ)/(r − g) + ξ ].

The pricing formula for t < T** is then

(21)    P_{ai,t} = E_t[ ∫_t^{T**} e^{−r(s−t)} D_s^G ds + e^{−r(T**−t)} P_{ai,T**}^L ].

The subscript "ai" in (21) stands for "asymmetric information." Expression (21) can be compared with the analogous pricing formula under full information, (11); the only difference is that the switch time τ* is replaced by the (later) time T**, and the price P_{fi,τ*}^after is replaced with the much lower price P_{ai,T**}^L. We are able to obtain an analytical solution:

PROPOSITION 5. Let shareholders believe that λ is the current intensity of τ*. Under asymmetric information and a conceal

9. Therefore the technical assumption of a minimal capital stock in equation (6) affects the time at which the firm can no longer conceal the decline in its stock of capital.


strategy equilibrium, the value of the stock for t ≥ h** is^10

(22)    P_{ai,t} = e^{Gt} A_λ^ai,

where

(23)    A_λ^ai = (z − G − δ)/(r + λ − G) + λ e^{−(G−g)h**} [z − r − δ + (r − g)ξ]/[(r − g)(r + λ − G)].
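Two consistency checks on the pricing constants can be run numerically. The sketch below uses illustrative parameters and a hypothetical concealment horizon h** (in the model h** is determined endogenously by Proposition 4): the asymmetric-information multiplier (23) lies below the full-information multiplier (12), and the full-information price drop at τ* matches the closed form given after (12).

```python
import math

def A_fi(z, G, g, delta, r, lam):
    """Full-information price multiplier (12)."""
    return (z - G - delta) / (r + lam - G) + lam * (z - g - delta) / ((r - g) * (r + lam - G))

def A_ai(z, G, g, delta, r, lam, xi, h2):
    """Asymmetric-information multiplier (23); h2 plays the role of h**."""
    return ((z - G - delta) / (r + lam - G)
            + lam * math.exp(-(G - g) * h2)
            * (z - r - delta + (r - g) * xi) / ((r - g) * (r + lam - G)))

z, G, g, delta, r, lam, xi = 0.2, 0.05, 0.01, 0.1, 0.06, 0.02, 0.8  # illustrative
print(A_ai(z, G, g, delta, r, lam, xi, h2=5.0) < A_fi(z, G, g, delta, r, lam))  # True

# Price drop at tau*: P_after - P_before = -e^{G tau*}(z-r-delta)(G-g)/[(r-g)(r+lam-G)].
tau = 10.0
drop = (z - g - delta) / (r - g) * math.exp(G * tau) - math.exp(G * tau) * A_fi(z, G, g, delta, r, lam)
closed_form = -math.exp(G * tau) * (z - r - delta) * (G - g) / ((r - g) * (r + lam - G))
print(abs(drop - closed_form) < 1e-9)  # True
```

The inequality A_λ^ai < A_λ^fi is exactly the two effects discussed next: the e^{−(G−g)h**} adjustment and the recapitalization cost in the numerator.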

Comparing the pricing formulas under asymmetric and symmetric information, (22) and (11), we observe that the first terms in the constants A_λ^fi and A_λ^ai are identical. However, the second term is smaller in the case of asymmetric information, because rational investors take into account two additional effects. First, even if the switch time τ* has not yet been declared, it may already have taken place, and the true investment opportunities may have been growing at the lower rate g for a while (up to h**). The adjustment e^{−(G−g)h**} < 1 takes this possibility into account. Second, at time T**, the firm must recapitalize to resume operations, which is reflected in the smaller numerator of the second term relative to the equivalent expression in (11).

The top panel of Figure IV illustrates the value loss associated with the conceal strategy. Because the manager's compensation does not come out of the firm's funds, the value loss equals the shareholders' loss relative to what they would have received under the reveal strategy (full information). These costs can be measured by the present value (as of τ*) of the difference in the dividends paid out to shareholders under the two equilibria. Relative to the reveal strategy, the conceal strategy pays lower dividends for a while, as the manager pretends to invest actively, and then must pay higher dividends, which arise from allegedly high cash flow. These higher dividend payouts come at the expense of investment, and thus are essentially borrowed from future dividends. The lower the minimum employable capital K̲_t (i.e., the lower ξ in (6)), the longer the CEO can keep up the pretense, and thus the larger the recapitalization required when the firm experiences the cash shortfall. This also implies lower dividends forever after T**. How does the information asymmetry affect the price level?
The bottom panel of Figure IV plots price dynamics under the conceal equilibrium and compares them with prices under the 10. The case t < h∗∗ does not yield additional intuition, yet it is much more complex. For this reason, we leave it to equation (51) in the Appendix.


FIGURE IV
Dividend Dynamics and Price Dynamics in the Reveal and Conceal Equilibria
Top panel: dividend paths in the two equilibria. Bottom panel: price paths in the two equilibria. The vertical dotted line denotes the time τ* of the growth change from G to g. Parameter values are in Table I.

reveal equilibrium. Rational investors initially reduce prices in the conceal equilibrium, as they correctly anticipate the manager's suboptimal behavior after τ*. At some point, however, the stock price exceeds the full-information price, as the firm's cash payouts increase (see the top panel). The price finally drops at T**, when the firm experiences a severe cash shortfall and needs to recapitalize. The exact sizes of the underpricing and the price drop depend on parameter values, as further discussed in Section V.

The conceal equilibrium discussed in this section also provides CEOs with a motive to "meet analysts' earnings expectations,"


a widespread managerial behavior, as recently documented by Graham, Harvey, and Rajgopal (2005). Indeed, the stock behavior at T** is consistent with the large empirical literature documenting sharp price reductions following failures to meet earnings expectations, even by a small amount (see, e.g., Skinner and Sloan [2002]).

III.C. Equilibrium Strategy at t = τ*

We now consider the manager's incentives at time τ* to conceal or reveal the true growth rate. Because after τ* there is nothing the manager can do to restore the high growth rate G, the choice is driven solely by the comparison between the present value of the infinite compensation stream under the reveal strategy and the finite stream under the conceal strategy. Recall also that after τ* the manager no longer faces any uncertainty (even T** is known), and thus the two utility levels can be computed exactly.

The rational-expectations, pure-strategy Nash equilibrium must take into account investors' beliefs about the manager's strategy at time τ*, because these beliefs determine the price function. There are three intertemporal utility levels to be computed at τ*, depending on the equilibrium. In a reveal equilibrium, the manager's utility is determined by P_{fi,t}^after in equation (10) if at τ* the manager decides to reveal. In contrast, if the manager decides to conceal, his or her utility is determined by the price function P_{fi,t}^before in equation (11). In a conceal equilibrium, if the manager follows the Nash equilibrium strategy (conceal at τ*), then the price function must be the asymmetric information price function P_{ai,t} in equation (22). If instead the manager reveals the true state of the firm at τ*, the price function reverts to the full information price P_{fi,t}^after in equation (10). These three utility levels are

(24)    U_{Stock,τ*}^reveal = ∫_{τ*}^∞ e^{−β(t−τ*)} η P_{fi,t}^after dt = [η e^{Gτ*}/(β − g)] (z − g − δ)/(r − g),

(25)    U_{Stock,τ*}^{conceal,ai} = ∫_{τ*}^{T**} e^{−β(t−τ*)} η P_{ai,t} dt = η A_λ^ai e^{Gτ*} [1 − e^{−(β−G)h**}]/(β − G),

(26)    U_{Stock,τ*}^{conceal,fi} = ∫_{τ*}^{T**} e^{−β(t−τ*)} η P_{fi,t}^before dt = η A_λ^fi e^{Gτ*} [1 − e^{−(β−G)h**}]/(β − G).
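The closed forms (24)-(26) are straightforward to evaluate; here is a sketch with illustrative inputs (η, τ*, the horizon h**, and the multiplier values A^ai and A^fi are all hypothetical numbers for this example, not derived quantities):

```python
import math

def utilities(eta, tau, z, G, g, delta, r, beta, A_ai_val, A_fi_val, h2):
    """Closed forms (24)-(26) for the manager's utility levels at tau*."""
    X = (1.0 - math.exp(-(beta - G) * h2)) / (beta - G)   # discounted concealment window
    u_reveal = eta * math.exp(G * tau) / (beta - g) * (z - g - delta) / (r - g)
    u_conceal_ai = eta * A_ai_val * math.exp(G * tau) * X
    u_conceal_fi = eta * A_fi_val * math.exp(G * tau) * X
    return u_reveal, u_conceal_ai, u_conceal_fi

# Illustrative inputs: a long concealment horizon makes concealing attractive.
ur, uai, ufi = utilities(0.05, 10.0, 0.2, 0.05, 0.01, 0.1, 0.06, 0.08,
                         A_ai_val=1.81, A_fi_val=2.87, h2=50.0)
print(ufi > uai)  # True, since A_fi > A_ai and the horizon factor is common
```

Since (25) and (26) differ only in the multiplier, U^{conceal,fi} always exceeds U^{conceal,ai}, which is what makes the two pure-strategy equilibria mutually exclusive in Proposition 6.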


The following proposition provides the equilibrium conditions:

PROPOSITION 6. Let τ* ≥ h**.^11 A necessary and sufficient condition for a conceal equilibrium under stock-based compensation is

(27)    A_λ^ai [1 − e^{−(β−G)h**}]/(β − G) > (z − g − δ)/[(r − g)(β − g)],

where the constant A_λ^ai is given in equation (23). Similarly, a necessary and sufficient condition for a reveal equilibrium under stock-based compensation is

(28)    A_λ^fi [1 − e^{−(β−G)h**}]/(β − G) < (z − g − δ)/[(r − g)(β − g)],

where the constant A_λ^fi is given in equation (12).

Intuitively, the right-hand sides of both conditions (27) and (28) are the discounted utility under the reveal strategy. Because the compensation is stock-based, the stock multiplier 1/(r − g) enters the formula. The left-hand sides of both conditions are the discounted utility under the conceal strategy. In particular, the stock multiplier A_λ^ai appears in the conceal equilibrium, whereas the stock multiplier A_λ^fi appears in the reveal equilibrium. Because A_λ^fi > A_λ^ai, conditions (27) and (28) imply that the two pure-strategy equilibria are mutually exclusive, and thus it is not possible to find parameters for which both equilibria exist at the same time. However, for some parameter combinations, no pure-strategy Nash equilibrium may exist.

III.D. Rational Expectations Equilibrium with the Choice of Effort

We now move back to t < τ* and obtain conditions for a Nash equilibrium that includes the manager's effort choice. The equilibrium depends on the type of compensation and on the equilibrium at time τ*. The expected utility for t < τ* is given by

(29)    U_t = E_t[ ∫_t^{τ*} e^{−β(u−t)} w_u (1 − c(e)) du + e^{−β(τ*−t)} U_{τ*} ],

11. The solution for the case τ ∗ < h∗∗ is cumbersome and thus is relegated to the Appendix.
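Conditions (27)-(28) can be checked mechanically. In the sketch below, the parameter values are illustrative and h** is treated as a free input (in the model it is pinned down by Proposition 4); the embedded assertion reflects that the two pure-strategy equilibria are mutually exclusive whenever A^fi > A^ai:

```python
import math

def classify_equilibrium(A_fi_val, A_ai_val, z, g, delta, r, beta, G, h2):
    """Classify the pure-strategy equilibrium at tau* via conditions (27)-(28)."""
    X = (1.0 - math.exp(-(beta - G) * h2)) / (beta - G)
    reveal_utility = (z - g - delta) / ((r - g) * (beta - g))
    conceal_holds = A_ai_val * X > reveal_utility   # condition (27)
    reveal_holds = A_fi_val * X < reveal_utility    # condition (28)
    assert not (conceal_holds and reveal_holds)     # mutually exclusive when A_fi > A_ai
    if conceal_holds:
        return "conceal"
    if reveal_holds:
        return "reveal"
    return "none"

# Illustrative values: a short concealment horizon favors revealing,
# a long one favors concealing.
args = dict(A_fi_val=2.87, A_ai_val=2.54, z=0.2, g=0.01, delta=0.1,
            r=0.06, beta=0.08, G=0.05)
print(classify_equilibrium(h2=5.0, **args))    # reveal
print(classify_equilibrium(h2=50.0, **args))   # conceal
```

The "none" branch corresponds to the parameter combinations for which no pure-strategy Nash equilibrium exists.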


where U_{τ*} is the manager's utility at τ*, computed in the preceding section, whose exact specification depends on the equilibrium itself.

We now derive the conditions under which stock-based compensation induces high effort. Because "conceal" is the most frequent equilibrium at t = τ* (see Section V), we focus our attention on this case.

PROPOSITION 7. Let t ≥ h** and let λ_H be such that a conceal equilibrium obtains at τ*. Then high effort e_H is the equilibrium strategy iff

(30)    (λ_L + β − G)/(λ_H + β − G) > (1 + λ_L H_ai^Stock)/(1 − c_H + λ_H H_ai^Stock),

where

H_ai^Stock ≡ [1 − e^{−(β−G)h**}]/(β − G).

Condition (30) is similar to condition (19) in the benchmark case, and it has the same intuition (see the discussion after Proposition 3). To summarize, Propositions 6 and 7 show that under conditions (27) and (30), pure stock-based compensation induces a high effort/conceal equilibrium: the CEO exerts high effort to increase the firm's growth options but will not disclose bad news about those growth options when the time comes. Section V shows that these conditions hold for a wide range of parameters.

IV. ALTERNATIVE SIMPLE COMPENSATION SCHEMES

The preceding section focused on a simple, linear stock-based compensation scheme. In this section we consider alternative simple compensation schemes widely used in the industry. The final section also discusses the properties of an optimal contract.

IV.A. Flat Wage

Suppose the manager simply receives a wage w_t that is not contingent on anything. For simplicity, assume w_t = w, a constant. In this case, it is intuitive that at time τ* the manager prefers to reveal the decrease in investment opportunities to shareholders, as he or she would then receive w for a longer period. The drawback, of course, is that the manager has no incentive to exert costly effort,


because he would bear the effort cost c^H. Thus, the resulting Nash equilibrium is a low effort/reveal equilibrium: shareholders expect that the manager will not exert effort, and prices adjust to P^before_{fi,t} = e^{Gt} A^fi_{λ^L} in (11). From the shareholders' point of view, the interesting question is whether it is better to induce a low effort/reveal equilibrium through a simple flat wage, or a high effort/conceal equilibrium through stock-based compensation. Because each of the two equilibria has one positive and one negative feature, the question is which one is better. The next corollary answers this question, and Section V contains a quantitative assessment.
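The limiting comparison invoked in the corollary that follows can be previewed with the Table I numbers (an editorial sketch, not a computation from the paper):

```python
# Limiting Gordon-growth prices from the corollary's intuition,
# evaluated at the Table I parameter values.
z, delta, r = 0.16, 0.01, 0.08
G, g = 0.07, 0.01

p_ai_limit = (z - G - delta) / (r - G)   # lambda^H -> 0 limit
p_fi_limit = (z - g - delta) / (r - g)   # lambda^L -> infinity limit

# z > r + delta (0.16 > 0.09) guarantees the first limit exceeds the second.
print(p_ai_limit, p_fi_limit)  # 8.0 and 2.0 (approximately)
```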

COROLLARY 3. There exist λ̄^H and λ̲^L such that, for λ^H < λ̄^H and λ^L > λ̲^L, the value of the firm under the high effort/conceal equilibrium is higher than under the low effort/reveal equilibrium. That is, P_{ai,0} > P^before_{fi,0}.

Intuitively, as λ^H → 0, the price under asymmetric information converges to the Gordon growth formula with high growth: P_{ai,0} → (z − G − δ)/(r − G). Similarly, as λ^L → ∞, the price under full information converges to the same formula but with the low growth rate g: P^before_{fi,0} → (z − g − δ)/(r − g). Because in our model z > r + δ, in the limit P_{ai,0} > P^before_{fi,0}. This corollary implies that if the manager's effort strongly affects the growth of investment opportunities, then shareholders prefer an incentive scheme that induces a conceal strategy as a side effect. They are willing to tolerate the stock price crash at T** and the subsequent recapitalization as a delayed cost of providing incentives for longer-term growth. This implies that finding ex post managers who have not been investing optimally during their tenure is not necessarily in contrast with shareholders' ex ante choice. Given the choice between these two equilibria, ex ante shareholders would be happy to induce high growth at the expense of the later cost of a market crash. We believe that this is a new insight in the literature. Section V below shows that stock-based compensation is ex ante optimal for a wide range of reasonable parameters.

IV.B. Deferred Compensation: Vesting

A popular incentive scheme is to delay the compensation of managers for a few years. Indeed, there is a conventional wisdom


that delayed stock-based compensation provides the incentives both to exert high effort and to reveal any bad news about the company. Unfortunately, this conventional wisdom is not warranted, as we now show. To see the problem with the argument, consider the case in which the firm pays the manager at a rate of η_t shares per period, which vest after k years. Because of linear preferences, it is optimal for the manager to sell off all of the shares that become vested, and consume out of the proceeds.¹² Thus, at time t, the manager's consumption is w_t = η_{t−k} P_t. As in the preceding section, we study the case in which the firm always awards the same number of shares per period, η_t = η, which makes the consumption at time t simply w_t = η P_t. Assume that if the CEO conceals at τ*, he will lose all of the nonvested shares at the time of the cash shortfall T**. It is immediate then that if τ* > k, the intertemporal utilities at τ* under either a reveal or a conceal equilibrium are given by expressions (24)–(26). Thus, in this case, the incentive problem of the manager is identical to the one examined earlier in Proposition 7. That is, delayed stock-based compensation is completely ineffective in inducing the manager to reveal bad news about growth options. Intuitively, if τ* > k, then at τ* the manager has accumulated enough vesting shares per period that the revelation of bad news about growth options would undermine their value. The manager will then retain the information. What if τ* < k? The only change in the expressions for the intertemporal utilities (24)–(26) is that the integral starts from k instead of τ*, and the expressions have to be modified accordingly.
In this case, indeed, for k long enough the intertemporal utility under the conceal strategy decreases to zero more quickly than the one under the reveal strategy, and thus a large vesting period k would provide the correct incentives if τ* < k. However, relying on this event (τ* < k) to provide the proper incentives to the CEO is misguided, as the lower utility under stock-based compensation stems only from the fact that the CEO has not had enough time to accumulate shares before τ*. This logic is problematic for two reasons: First, shareholders actually want τ*

12. See footnote 7.


to occur as far in the future as possible, and thus hoping also to have τ* < k works against their desire to push the manager to exert high effort, unless the delay k is unreasonably high. Second, the argument relies on t = 0 being the time at which the CEO starts accumulating shares, which is also problematic. In our model, t = 0 is any time at which there is full disclosure about both the capital K_0 and the company growth rate g̃ = G. For instance, if this time reflects the time of an IPO, it is typically the case that the owner selling the firm becomes the CEO of the new company while initially retaining a large fraction of the firm's ownership. Similarly, managers who are promoted from within the firm have already accumulated shares during their time at the firm. Thus, if we assume that at time t = 0 the manager is already endowed with vesting shares, then τ* < k can never happen, and the equilibrium is effectively identical to the one in the preceding section. We finally note that when the manager decides at τ* to conceal the bad news about future investment opportunities, he or she does so in the full knowledge that at the later time T** he or she will lose all of the nonvested shares (ηk), suffering an effective loss at T** equal to ηkP_{T**}. This amount can be quite substantial, as P_{T**} is very high (see Figure IV), and thus it may appear at first that a CEO who loses a massive amount of wealth at the time of the firm's liquidity crisis (i.e., T**) cannot be responsible for the crisis itself, as he or she is the first to lose. But this conclusion is not warranted, because the price reached the high level P_{T**} exactly because of the (misleading) behavior of the CEO. Had the CEO behaved in the best interest of the shareholders, such a high value of the stock would not have been realized in the first place, as shown by the dashed line in Figure IV.
This analysis therefore cautions against reaching any conclusion about the behavior of CEOs based on their personal losses of wealth at the time of the firm's liquidity crisis.

IV.C. Deferred Compensation: Delayed Payments

The main problem with vesting is that the effective compensation at t still depends on the stock price at t. A popular variation of deferred compensation is to delay the payment by k years, so that the consumption at time t is w_t = η_{t−k} P_{t−k}


if the manager acts in the best interest of the shareholders, or zero after the cash shortfall T**, if it happens. This compensation scheme is equivalent to placing the cash equivalent of the pure stock-based compensation in an escrow account, and paying it k years later if the CEO has not been caught misbehaving. The next proposition characterizes the equilibrium:

PROPOSITION 8.
a. Let k̲ be defined by the equation

(31)    A^fi_λ (1 − e^{−(β−G)(h**−k̲)})/(β − G) = (z − g − δ)/[(r − g)(β − g)].

Then the manager reveals at τ* iff k ≥ k̲.
b. Let t ≥ k = k̲ and let τ* not have been realized yet.¹³ Then the manager exerts high effort e^H iff

(32)    c^H (β + λ^L − G)/(β + λ^H − G) < (λ^L − λ^H) e^{−(β−G)h**}.

Thus, there exists a constant c̄^H such that if c^H < c̄^H, then high effort/reveal is a Nash equilibrium.

This proposition shows that the stock-based delayed-payment compensation scheme may effectively achieve the first best, as the CEO exerts high effort (b) and reveals the true state at τ* (a). This is good news, and it matches the intuition that the stock component of the compensation still provides the incentive to work hard, while the delayed payment provides the incentive to reveal any bad news about the long term. However, the proposition also highlights two requirements: First, the delay must be sufficiently long (k > k̲), and second, the cost of effort must be sufficiently low (c^H < c̄^H). Unfortunately, if either of these requirements is not satisfied, the equilibrium breaks down. In our basic calibration, we find that the minimum delay to induce truth revelation is k̲ ≈ 11.8 years, which we find unrealistically long. In addition, we also find that the bound on the managerial cost is c̄^H ≈ 1.8%, which is below the parameter range we use in our calibration. The optimal contract in Section IV.G and its implementation through a stock-plus-bonus compensation scheme in Section V.C provide a solution that works across a large range of parameter values.

13. We assume t ≥ k to sidestep the issue of having already accumulated shares by τ*, as discussed in the preceding section.
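The threshold on the effort cost implied by condition (32) is easy to evaluate. The sketch below rearranges (32) and uses the Table I values; h** is an illustrative assumption (with h** ≈ 17.6 the bound lands near the 1.8% figure quoted in the text).

```python
import math

# Threshold on the effort cost from condition (32), rearranged as
#   c_H < (lam_L - lam_H) * exp(-(beta - G) * h_ss)
#         * (beta + lam_H - G) / (beta + lam_L - G).
beta, G = 0.18, 0.07
lam_H, lam_L = 1 / 15, 1 / 2
h_ss = 17.6   # assumed h**; not reported in Table I

c_bar = ((lam_L - lam_H) * math.exp(-(beta - G) * h_ss)
         * (beta + lam_H - G) / (beta + lam_L - G))
print(round(c_bar, 4))  # about 0.018, below the c_H = 5% used in the calibration
```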


IV.D. Option-Based Compensation

How does an option-based contract affect the incentive to conceal bad news about growth options? We now show that such a contract amplifies the incentive to conceal. Let η_t denote the number of options awarded at time t, and let k be their time to maturity. As is standard practice, we assume that these options are issued at the money, with strike price H_t = P_t. Thus, the consumption of the manager at t is given by w_t = η_{t−k} max(P_t − H_{t−k}, 0) = η_{t−k} max(P_t − P_{t−k}, 0). Consider again time τ*. In this case, the intertemporal utilities under the reveal and conceal strategies in the reveal Nash equilibria¹⁴ are given by

(33)    U^{Reveal,fi}_{Option,τ*} = ∫_{τ*}^{τ*+k} e^{−β(t−τ*)} η_{t−k} max(P^after_{fi,t} − P^before_{fi,t−k}, 0) dt
                                    + ∫_{τ*+k}^{∞} e^{−β(t−τ*)} η_{t−k} max(P^after_{fi,t} − P^after_{fi,t−k}, 0) dt,

(34)    U^{Conceal,fi}_{Option,τ*} = ∫_{τ*}^{τ*+h**} e^{−β(t−τ*)} η_{t−k} max(P^before_{fi,t} − P^before_{fi,t−k}, 0) dt.

The intertemporal utilities under these two strategies in the conceal equilibrium are identical but with the asymmetric information price P_{ai,t} substituted in place of P^before_{fi,t} in both (33) and (34). The next proposition shows that the leverage implied by option-like contracts in fact makes the conceal strategy more likely.

PROPOSITION 9.
a. Let k*_fi be defined by

(35)    k*_fi = (1/G) log[ A^fi_λ (r − g)/(z − g − δ) ].

Then for k < k*_fi a reveal equilibrium at τ* holds iff

(36)    [(e^{gk} − 1)/(1 − e^{−Gk})] (β − G)/(β − g) > e^{G k*_fi} e^{βk} (1 − e^{−(β−G)h**}).

There exists g̲ > 0 such that this condition is always violated for g < g̲. Thus, a reveal equilibrium cannot be supported when g is small.

14. See notation and discussion in Section III.C.
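The out-of-the-money mechanism behind Proposition 9 can be illustrated with a toy price path (an editorial sketch: the normalization, the revelation date, and the price-drop ratio below are illustrative stand-ins, not the paper's calibrated quantities):

```python
import math

# Toy illustration of the g = 0 logic: the price grows at rate G before
# revelation and drops to a constant afterward. 'drop' stands in for the
# ratio A_lambda^fi (r - g)/(z - g - delta) appearing in equation (35).
G = 0.07
tau = 12.0    # revelation date (illustrative)
drop = 1.6    # price just before revelation / price just after (illustrative)

P_before = lambda t: math.exp(G * t)   # normalized pre-revelation price
P_after = P_before(tau) / drop         # constant post-revelation price

k_star = math.log(drop) / G            # the lag k* from the logic of (35)

# An at-the-money option granted k years before tau* carries strike
# P_before(tau - k); after revelation it pays max(P_after - strike, 0).
for k in (0.5 * k_star, 1.5 * k_star):
    strike = P_before(tau - k)
    print(k < k_star, max(P_after - strike, 0.0) > 0)
```

Options granted within k* years of the revelation date end out of the money, which is the sense in which revealing leaves the manager with nothing when g = 0.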


b. Let k*_ai be defined as in (35) but with A^ai_λ in place of A^fi_λ, and let τ* > h** + k. Then, for k < k*_ai, a conceal Nash equilibrium at τ* occurs if

        [(e^{gk} − 1)/(1 − e^{−Gk})] (β − G)/(β − g) < e^{G k*_ai} e^{βk} (1 − e^{−(β−G)h**}).

There exists g̲ > 0 such that this condition is always satisfied for g < g̲. Thus, a conceal equilibrium can always be supported when g is small.

The intuition behind this proposition is straightforward. Consider the case in which g = 0. In this case, the pricing formula in (11) shows that when the information is revealed, the price drops to P^after_{fi,t}, which is a constant. The value of k*_fi in (35) is the time lag that ensures that the price at revelation, P^after_{fi,τ*}, equals the price before revelation while it was still increasing, P^before_{fi,τ*−k*_fi} = P^after_{fi,τ*}. Clearly, for k < k*_fi, the price after revelation is always smaller than the strike price H_{τ*−k} = P^before_{fi,τ*−k}, pushing the option out of the money. The case g = 0 also implies that the manager cannot expect his future options ever to be in the money. Thus, in this case, by revealing the CEO obtains intertemporal utility equal to zero. By concealing, in contrast, he always receives positive utility. It follows from this argument that option-like payoffs tend to increase the incentive to conceal bad news compared to the case in which the manager has a linear contract. If k > k*_fi, the calculations are less straightforward, but because simple stock-based compensation can be considered a special type of option-based contract with strike price H = P_0, it follows that increasing the strike price only decreases the payoff when the manager reveals information, decreasing his incentive to reveal.

IV.E. Cash Flow–Based (Bonus) Compensation

One alternative to stock-based compensation is a profit-based compensation contract. In the spirit of our simple neoclassical model with no frictions, we assume the firm pays out its output net of investments in the form of dividends. From an accounting standpoint, these should be considered the firm's free cash flow,¹⁵ which coincides with dividends and earnings in our model, but

15. We thank Ray Ball for pointing this out.


that in reality they are different and subject to different degrees of manipulation. Free cash flows are arguably harder to manipulate, and thus we consider a simple compensation contract defined on cash flows. Let the compensation be given by w_t = η_d D_t. In this case, we obtain the following:

PROPOSITION 10. Under cash flow–based compensation, a necessary and sufficient condition for a "reveal" equilibrium at t = τ* is

(37)    (z − G − δ)/(β − G) (1 − e^{−(β−G)h**}) < (z − g − δ)/(β − g).

In addition, the manager exerts high effort, e^H, iff

(38)    (λ^L + β − G)/(λ^H + β − G) > (1 + λ^L H^Div)/(1 − c^H + λ^H H^Div),

where

(39)    H^Div = (z − g − δ)/[(z − G − δ)(β − g)].
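As we read condition (37), its left-hand side is increasing in h**, so checking the h** → ∞ limit covers every horizon. At the Table I values the condition then holds for any h** (an editorial check):

```python
# Reveal condition (37) under cash flow-based pay:
#   (z - G - delta)/(beta - G) * (1 - exp(-(beta - G) h**)) < (z - g - delta)/(beta - g).
# The LHS is bounded above by its h** -> infinity limit, checked here.
z, G, g, delta, beta = 0.16, 0.07, 0.01, 0.01, 0.18

lhs_sup = (z - G - delta) / (beta - G)   # supremum of the LHS over h**
rhs = (z - g - delta) / (beta - g)
print(lhs_sup < rhs)   # True: reveal holds for any h** at these values
```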

A Nash equilibrium with high (low) effort obtains iff (38) is (is not) satisfied.

This compensation strategy can achieve the first best under some parameterizations. In particular, we find that condition (37) is satisfied for most parameter configurations. This result is intuitive, and it leads us to the optimal compensation discussed later. Referring to the top panel of Figure II, we see that when the manager optimally reveals, he or she has to increase the payout to shareholders, as there are no longer any investment opportunities available. Effectively, by doing so, the manager also increases his or her own compensation. That is, this cash flow–based compensation resembles a "bonus" contract in which the revelation of bad news leads to a higher cash payment and thus higher utility. Of course, a drawback of this compensation scheme is that it provides an incentive to pay out dividends that are too high and thus to sacrifice investment. In fact, because the manager is impatient, β > r, he or she prefers to have τ* occur as soon as possible, which decreases his or her incentive to exert high effort. Indeed, we find that condition (38) is satisfied only under some extreme parameterizations, in which both the return on capital z and the growth rate G are large. In this case, the higher discount β is


compensated by (much) larger cash flows in the future if the manager invests heavily, that is, if he or she exerts high effort. In all other cases, cash flow–based compensation leads to low effort and revelation, thereby generating the same type of conundrum already discussed in Section IV.A.

IV.F. Clawback Clauses

One final popular incentive scheme is to insert clawback clauses into the CEO compensation package. Such clauses establish that the CEO has to return part or all of the compensation he or she received during a given period if he or she is found guilty of misconduct. In our model, in a conceal equilibrium the CEO is not disclosing all the information to shareholders, which can be considered reasonable cause for shareholders or regulators to proceed against the CEO. Clearly, if the difference between τ* and T** is verifiable in court, then by imposing a sufficiently large penalty at T** we can always ensure that the manager discloses. The clawback clause is just such a penalty. Our model, however, suggests that shareholders and regulators have to be careful even with clawback clauses. For instance, suppose that the distinction between τ* and T** is observable but not verifiable, meaning that it would be hard to prove in court that the manager has misbehaved. Although in our stylized model it is simple to detect misbehavior of the CEO, in reality the nonoptimal investment strategy of the CEO is much harder to detect, let alone to prove in court. In this case, one may decide to make the clawback clause contingent on some measure of performance. Consider, for instance, shareholders who move against the manager to claw back the salary paid whenever the price drops. In this case it is intuitive that the manager has no incentive to reveal his or her information at time τ*, as doing so would induce a price decline. By concealing, the manager can push the price decline further into the future, and thus maximize his or her utility.
Another possibility is to set up a clawback clause contingent on a (large) recapitalization, which is the main difference between τ* and T**. This clause would indeed solve the problem within our model,¹⁶ although it relies on the fact that in our model the firm never needs to go to the capital markets. In an extension of the

16. The clawback clause must require the CEO to return the actual compensation received. As shown in Section IV.B, losing only the shares ηk not yet vested at T**, for instance, would not alleviate the incentive to conceal the bad news at τ*.


model in which the firm may need to raise more capital for investment purposes, for instance to open a new identical firm with the same technology so as to increase its size, there is again the risk that by imposing a clawback clause the shareholders would not be inducing the optimal CEO behavior.

IV.G. The Optimal Contract

The preceding sections considered relatively standard simple compensation packages, adapted to our stylized model, and discussed their pros and cons. In this final section we briefly discuss the characteristics of the optimal contract, and compare them to the previous contracts. Because in our dynamic model all quantities increase at an exponential rate, we restrict our attention to contracts of the form

        w_t = w^b_t = A_b e^{B_b t}              if t < τ*,
        w_t = w^a_t = A_a e^{B_a t + C_a τ*}     if t ≥ τ*,
where the subscript a stands for "after τ*" and the subscript b stands for "before τ*." We assume for simplicity that although τ* is not ex ante observable by shareholders, they are able to observe whether τ* has been realized once the announcement is made. As discussed earlier, the manager may produce convincing evidence that investment opportunities deteriorated at τ*, whereas he or she may refrain from producing this information in a conceal equilibrium. This simplifying assumption allows us to make the contract's payoff contingent on the announcement itself.¹⁷ All bargaining power is with the firm, but the manager has an outside option, given by U^O_t = A_O e^{B_O t}, which may be growing over time. Because in our model the resources to pay the manager are outside the model (they do not come from dividends themselves), the firm effectively solves

        min E[ ∫_0^∞ e^{−rt} w_t dt ]

17. For simplicity, we sidestep here the issue of truthful revelation, that is, the incentive to have the manager announce τ* when it actually happens.


conditional on the following incentive compatibility constraints:

(40)    (Reveal at τ*)            U^Reveal_{τ*} ≥ U^Conceal_{τ*}   for all τ*,
(41)    (High effort before τ*)   U^H_t ≥ U^L_t                    for all t ≤ τ*,
(42)    (Outside option)          U_t ≥ U^O_t                      for all t,

where

        U^Reveal_{τ*} = ∫_{τ*}^{∞} e^{−β(t−τ*)} w^a_t dt,
        U^Conceal_{τ*} = ∫_{τ*}^{T**} e^{−β(t−τ*)} w^b_t dt,   and
        U^i_t = E_t[ ∫_t^{τ*} e^{−β(s−t)} w^b_s (1 − c^i_s) ds + e^{−β(τ*−t)} U^Reveal_{τ*} | i ]   for i = H, L.

Note that we assume that by concealing, the manager loses the outside option. That is, there is a serious penalty for concealing. This is realistic. Adding back the outside option after concealing is possible at the cost of additional complications, but without much change in intuition. This assumption skews the manager against concealing. Because we find that with stock-based compensation concealing is widespread, adding the outside option even after (being caught) concealing would just make it even more frequent.¹⁸ Finally, to ensure that E[∫_0^∞ e^{−rt} w_t dt] is finite, we must assume that r + λ > B_b and r > B_a. That is, the compensation does not grow at a higher rate than the cost of capital.

PROPOSITION 11. The incentive compatibility constraints are satisfied iff the following constraints are met:

(43)    (Reveal at τ*)
        A_a ≥ A_b (β − B_a)/(β − B_b) (1 − e^{−(β−B_b)h**});    B_a + C_a ≥ B_b;

(44)    (High effort before τ*)
        A_a ≤ A_b (β − B_a)/(β − B_b) [1 − c^H (β + λ^L − B_b)/(λ^L − λ^H)];    B_b ≥ C_a + B_a;

(45)    (Outside option)
        [A_b (1 − c^H) + λ^H A_a/(β − B_a)] / (β + λ^H − B_b) ≥ A_O;    B_b ≥ B_O;

(46)    A_a/(β − B_a) ≥ A_O;    C_a ≥ 0;    B_a ≥ B_O.

18. Although we did not mention any outside option in preceding sections, as the compensation was assumed exogenous, it is necessary here to ensure that the manager has a positive lower bound to his or her payment (the firm has all the bargaining power). The analysis here is nonetheless consistent with the previous one so long as B_O is small enough, and the free parameter η is set to equalize the manager's expected utility at time 0 with the value of the outside option.

Subject to these constraints, the firm then minimizes

        V = E[ ∫_0^∞ e^{−rt} w_t dt ] = [A_b + λ^H A_a/(r − B_a)] / (r + λ^H − B_b).

Constraint (43) shows that after τ*, the level A_a of the compensation has to be above some value to induce the manager to reveal the bad news. Similarly, constraint (44) shows that the level of compensation cannot be too high after τ*; otherwise the manager prefers not to exert effort, so as to receive the higher payoff sooner. These two constraints combined imply that B_a + C_a = B_b and

        A_b (β − B_a)/(β − B_b) [1 − c^H (β + λ^L − B_b)/(λ^L − λ^H)] ≥ A_a ≥ A_b (β − B_a)/(β − B_b) (1 − e^{−(β−B_b)h**}).

This last constraint determines a feasibility region for A_a. This region is not empty iff the cost of exerting high effort is below a threshold:

        c^H ≤ (λ^L − λ^H) e^{−(β−B_b)h**} / (β + λ^L − B_b).

Because the right-hand side is increasing in B_b, this constraint implies a lower bound on the growth rate of the CEO's compensation before τ*. We further discuss the properties and the intuition of the optimal contract in our calibration analysis in Sections V.B and V.C. There we also compare the stock-based compensation to the optimal contract and illustrate how the optimal contract can be approximated by using an appropriate stock-plus-bonus compensation. It is worth emphasizing immediately, however, that because of the simplicity of the model, we are able here to make the contract contingent only on time t and τ*. This is useful to gauge the characteristics of the contract. In its implementation, however, one must use a better proxy than t for the firm's growth. We return to this issue below.
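The feasibility region for A_a can be checked numerically. The sketch below uses the Table I values with illustrative contract growth rates B_b = G and B_a = g, and an assumed h**; none of these three choices is pinned down by Table I.

```python
import math

# Bounds on A_a from constraints (43) (reveal) and (44) (high effort),
# normalizing A_b = 1. B_b, B_a, and h_ss (for h**) are illustrative
# assumptions, not the paper's calibrated quantities.
beta, G, g = 0.18, 0.07, 0.01
lam_H, lam_L = 1 / 15, 1 / 2
c_H = 0.05
B_b, B_a = G, g
h_ss = 17.6
A_b = 1.0

ratio = (beta - B_a) / (beta - B_b)
lower = A_b * ratio * (1 - math.exp(-(beta - B_b) * h_ss))                 # from (43)
upper = A_b * ratio * (1 - c_H * (beta + lam_L - B_b) / (lam_L - lam_H))   # from (44)
print(lower < upper)   # nonempty region: both constraints can be met
```

Under these assumed growth rates the region is nonempty, consistent with the threshold condition on c^H stated above.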


TABLE I
PARAMETER VALUES

Cost of capital            r = 8%       Return on capital            z = 16%
High growth rate           G = 7%       CEO discount rate            β = 18%
Low growth rate            g = 1%       Expected τ* (high effort)    E[τ*|e^H] = 1/λ^H = 15 years
Depreciation rate          δ = 1%       Expected τ* (low effort)     E[τ*|e^L] = 1/λ^L = 2 years
Minimal capital level      ξ = 90%      CEO cost of effort           c^H = 5%
V. QUANTITATIVE IMPLICATIONS

In this section we examine when stock-based compensation generates a high effort/conceal equilibrium. For comparison, and to pave the way for the discussion of the optimal contract in Section V.C, we also consider the equilibrium induced by the cash flow–based (bonus) type of contract discussed in Section IV.E. The base parameter values are in Table I. Figure V shows the partition of the parameter space (z, G) into regions corresponding to the various equilibria.¹⁹ In the top right area the manager chooses high effort regardless of the compensation mode. Consequently, in this region, compensating the manager based only on a cash-flow (bonus) type of contract achieves the first best, as in this case he or she also reveals the bad news to investors, and maximizes firm value. This region consists of firms characterized by high returns on investment, z, and high growth, G, of investment opportunities. Such firms do not have to use stock-based compensation to induce high effort. The region below and to the left of the top right area is where bonus compensation no longer induces high effort, whereas stock-based compensation does, although in a conceal equilibrium. This is indeed the most interesting region, where we observe a trade-off between effort inducement and truth-telling inducement. Firms with reasonably high growth rates and returns on investment are in that region. Finally, the region below and to the left of there is where we no longer have a pure strategy equilibrium under stock-based compensation, whereas cash flow–based bonus compensation still induces a low effort equilibrium. This is a region where a stock-compensated manager would prefer to conceal if he or she chose high effort, but would no longer choose high effort if he or she

19. Online Technical Appendix B shows that stock-based compensation indeed induces a conceal equilibrium for most parameter values in the spaces (g, G) and (E[τ], G) as well. The implications are similar and omitted for brevity.

[Figure V here. Region labels in the (z, G) plane: "Stock comp.: high effort/conceal eq. / Cash-flow (bonus) comp.: high effort/reveal eq."; "Stock comp.: high effort/conceal eq. / Cash-flow (bonus) comp.: low effort/reveal eq."; "Stock comp.: no pure strategy eq. / Cash-flow (bonus) comp.: high effort/reveal eq." Axes: Return on Investment z; High Growth G.]

FIGURE V
Equilibrium Areas under Stock-Based Compensation and Cash Flow–Based Compensation

In the (z, G) space, the figure shows the areas in which the following equilibria obtain: (a) the high effort/reveal equilibrium under dividend-based compensation; (b) the low effort/reveal equilibrium under dividend-based compensation; and (c) the high effort/conceal equilibrium under stock-based compensation. For all combinations of parameters, dividend compensation generates a reveal equilibrium. z ranges between 12% and 25%, whereas G ranges between 3% and 14%. The remaining parameters are in Table I.

concealed. Part of that region corresponds to the conceal/low effort equilibrium (the worst possible scenario), whenever it exists. Its existence depends on λ^L: it does not exist for high levels of λ^L. The remainder of the region corresponds to equilibria in mixed strategies. Solving for these is complicated, as the dynamic updating of investors' beliefs becomes very tedious, and they are not likely to provide new intuitions; thus we ignore them. In fact, we find the region above that, where the real trade-off takes place, of most interest.

V.A. High Effort or Truthful Revelation?

The preceding section shows a large area of the parameter space in which a high effort/conceal equilibrium and a low effort/reveal equilibrium may coexist. Are shareholders better off with low effort and an optimal investment strategy, or with high effort and a suboptimal investment strategy? Corollary 3 shows that the choice depends on the difference between λ^L and λ^H. This section provides a quantitative illustration of the trade-off.
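The boundary of the region in Figure V where bonus compensation alone sustains high effort can be sketched from condition (38), which requires no value of h** (an editorial check; the two points below correspond to the Table I base case and to the upper-right corner of the figure's grid).

```python
# Does cash flow (bonus) compensation induce high effort at (z, G)?
# Evaluates condition (38) with H^Div from (39); other Table I values fixed.
beta, g, delta = 0.18, 0.01, 0.01
lam_H, lam_L = 1 / 15, 1 / 2
c_H = 0.05

def high_effort_under_bonus(z, G):
    H_div = (z - g - delta) / ((z - G - delta) * (beta - g))
    lhs = (lam_L + beta - G) / (lam_H + beta - G)
    rhs = (1 + lam_L * H_div) / (1 - c_H + lam_H * H_div)
    return lhs > rhs

print(high_effort_under_bonus(0.16, 0.07))  # base case: False (low effort)
print(high_effort_under_bonus(0.24, 0.13))  # high z and G: True (high effort)
```

This matches the text's observation that (38) holds only when both z and G are large.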


[Figure VI here. Top panel: "Dividend path under three equilibria"; bottom panel: "Price path under three equilibria." Legend: stock-based compensation (solid); dividend-based compensation (dotted); high effort — symmetric information (dashed). The dates τ* under λ^H and τ* under λ^L are marked on the time axis (0 to 50 years).]

FIGURE VI
Dividend and Price Paths in Three Equilibria

The figure plots hypothetical dividend (top panel) and price (bottom panel) paths in the cases of "stock-based compensation" (solid line); "dividend-based compensation" (dotted line); and the first-best benchmark case with symmetric information and optimal investment (dashed line). Parameter values are in Table I.

To illustrate the trade-off, Figure VI plots the hypothetical price and dividend paths under the high effort/conceal and low effort/reveal equilibria. For comparison, it also reports the first best, featuring high effort and optimal investment after τ*. As shown in Corollary 3, the low effort/reveal equilibrium induced, for instance, by a flat wage or cash flow–based (bonus) compensation may generate too little effort, and this loss outweighs the benefits of optimal investment behavior at τ*. The high effort/conceal equilibrium, induced by stock-based compensation, in contrast, gets closer to the first best, yet also leads to


suboptimal investment behavior, which generates the bubble-like pattern in dividend payouts and prices. To gauge the size of the trade-off between the two equilibria under various parameter choices, Table II reports the firm value at time t = 0, P_{ai,0} and P^before_{fi,0}, under the two equilibria (columns (2) and (4)), and the average decline in price when the true growth rate of investment is revealed, at T** in the high effort/conceal equilibrium (column (3)) and at τ* in the low effort/reveal equilibrium (column (5)). Online Technical Appendix A contains closed-form formulas to compute the average decline (see Corollary A1). The first column reports the parameter that we vary relative to the benchmark case in Table I. The last two columns report the value and the expected decline in the first-best case. Panel A of Table II shows that even when the high growth rate G is relatively low, G = 6%, the high effort/conceal equilibrium achieves a higher firm value (P_{ai,0} = 2.48) than the low effort/reveal equilibrium (P^before_{fi,0}(λ^L) = 2.10), even though the former equilibrium induces a substantial expected market crash at T**, E[P_{T**}/P^-_{T**} − 1] = −51.49%, against a milder decline of only E[P_{τ*}/P^-_{τ*} − 1] = −4.59% in the latter case. The last two columns show that the first best achieves an even higher firm value, P^before_{fi,0}(λ^H) = 2.58, although this value is not much higher than the one under asymmetric information. Note that even in the first-best case there is a market decline at revelation (−22.38%), although it is far smaller than in the asymmetric information case. The remainder of Table II (Panels B–F) shows that a similar pattern obtains for a wide range of parameter choices.²⁰ For instance, in the base case, low effort induces an expected time of investment growth E[τ*|e^L] = 2 years. However, Panel B shows that even if E[τ*|e^L] is as high as eight years, a similar result applies, as P^before_{fi,0}(λ^L) is always lower than P_{ai,0}. Panel C shows that a higher return on investment z leads to an increase in prices (across equilibria) and a mild decline in the size of the crash at T** in the asymmetric information case. Panel D shows that a higher cost of capital r reduces both the prices across the equilibria and the decline at revelation, although the impact on the asymmetric information case is smaller than that on the

20. In Panels B and F we set the cost c^H = 2% instead of the c^H = 5% assumed throughout, to ensure that all three equilibria exist under the parameter choices in column (1).

STOCK-BASED COMPENSATION AND CEO (DIS)INCENTIVES

1807

TABLE II
HIGH EFFORT OR TRUTHFUL REVELATION?

                    High effort/conceal eq.           Low effort/reveal eq.              High effort/reveal eq.
(1)        (2)       (3)                     (4)                  (5)                 (6)                  (7)
param      P_ai,0    E[P_T**/P_T**− − 1]     P_fi,0^before(λ^L)   E[P_τ*/P_τ*− − 1]   P_fi,0^before(λ^H)   E[P_τ*/P_τ*− − 1]

Panel A: Investment opportunities growth
G
 6.00%     2.48      −51.49%                 2.10                 −4.59%              2.58                 −22.38%
 8.00%     2.85      −66.24%                 2.14                 −6.54%              3.05                 −34.42%
10.00%     3.48      −78.99%                 2.19                 −8.57%              3.93                 −49.08%

Panel B: Expected τ^L under low effort
E[τ^L]
 4.00      2.65      −59.11%                 2.23                 −10.34%             2.78                 −28.12%
 6.00      2.65      −59.11%                 2.34                 −14.51%             2.78                 −28.12%
 8.00      2.65      −59.11%                 2.44                 −18.18%             2.78                 −28.12%

Panel C: Return on investment
z
16.00%     2.65      −59.11%                 2.12                 −5.55%              2.78                 −28.12%
18.00%     3.19      −57.52%                 2.44                 −6.21%              3.29                 −30.56%
20.00%     3.71      −56.26%                 2.76                 −6.71%              3.80                 −32.35%

Panel D: Shareholders' discount rate
r
 7.00%     3.38      −60.66%                 2.49                 −6.42%              3.53                 −33.96%
 8.00%     2.65      −59.11%                 2.12                 −5.55%              2.78                 −28.12%
 9.00%     2.15      −57.89%                 1.84                 −4.71%              2.26                 −22.88%

Panel E: Depreciation rate δ
δ
 0.00%     2.92      −58.69%                 2.28                 −5.90%              3.04                 −29.44%
 1.00%     2.65      −59.11%                 2.12                 −5.55%              2.78                 −28.12%
 2.00%     2.37      −59.58%                 1.96                 −5.15%              2.53                 −26.53%

Panel F: Minimum capital requirement ξ
ξ
60.00%     2.64      −66.70%                 2.11                 −5.55%              2.78                 −28.12%
70.00%     2.64      −64.35%                 2.11                 −5.55%              2.78                 −28.12%
80.00%     2.64      −61.85%                 2.11                 −5.55%              2.78                 −28.12%

Notes. Column (1) reports the value of the parameter that is changed from its base value in Table I. Columns (2) and (3) report the firm value at t = 0 and the average stock decline at T ∗∗ , respectively, under the high effort/conceal equilibrium induced by stock-based compensation. Columns (4) and (5) report the firm value at t = 0 and the average stock decline at τ ∗ , respectively, under the low effort/reveal equilibrium induced, for example, by flat wage compensation or cash flow–based (bonus) compensation. The last two columns report the same quantities under the high effort/reveal equilibrium induced by the optimal contract. Panels B and F use c H = 2% instead of c H = 5% to ensure that all three types of equilibria exist under the parameters in column (1).


symmetric information case. The last two panels show the results for various depreciation rates δ and minimum employable capital ξ. In particular, the smaller the minimum capital requirement ξ, the larger the size of the crash at T**, as the firm can pretend for longer and will need an even larger recapitalization at T**.21

The results in Table II highlight that if the shareholders' choice is between a contract that induces a low effort/reveal equilibrium and one that induces a high effort/conceal equilibrium, they should choose the latter, as firm value is much higher in this case and in fact rather close to the first-best high effort/reveal equilibrium. This finding may also explain why stock-based compensation is so widespread in the real world: if there is any difficulty in implementing an optimal contract that induces the first best, then simple stock-based compensation achieves a second best that is not too far from the first best, as can be seen by comparing the value of the assets in columns (2) and (6) of Table II. Erring on the side of inducing truth revelation but low effort, instead, has a much worse impact on the value of the firm than erring on the side of providing incentives to increase investment opportunities.

V.B. The Optimal Contract

What does the optimal contract look like? Figure VII compares the optimal contract established in Section IV.G to stock-based compensation, for a hypothetical realization of τ*, under the assumption that at τ* there is full revelation in both cases. This is the relevant scenario for understanding why the CEO is reluctant to reveal information under stock-based compensation. The figure contains six panels: the left-hand-side panels plot the optimal contract (dashed line) and the stock-based contract. Each of the three panels corresponds to a different level of the cost of effort c^H. In all cases, the payoffs have been scaled to ensure that the CEO receives the same expected utility at time 0.
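The size of this trade-off can be checked directly from the Panel A figures in Table II (the values below are hard-coded from the table's G = 6% row):

```python
# Fraction of first-best firm value attained by each equilibrium,
# using the G = 6% row of Table II, Panel A.
P_conceal = 2.48      # high effort/conceal, column (2)
P_reveal_low = 2.10   # low effort/reveal, column (4)
P_first_best = 2.58   # high effort/reveal (first best), column (6)

share_conceal = P_conceal / P_first_best
share_reveal_low = P_reveal_low / P_first_best

print(f"high effort/conceal attains {share_conceal:.1%} of first best")
print(f"low effort/reveal attains {share_reveal_low:.1%} of first best")
```

Erring toward high-powered incentives thus costs roughly 4% of firm value in this row, against roughly 19% for erring toward truthful revelation with low effort.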
Of course, the optimal contract is cheaper for the firm, as it minimizes the

21. We note that some parameters do not affect these comparative statics: for instance, the manager's discount rate β and the cost of effort c^H affect only whether a conceal equilibrium or a reveal equilibrium obtains. But because the CEO's strategy conditional on concealing is simply to push the time of cash shortfall T** as far into the future as possible, the latter depends only on the technological parameters and not on preferences. Thus, both the value of the firm and the size of the crash at T** are independent of these preference parameters.

[Figure VII here: six panels plotting compensation w_t against time (0–30 years). Panels A, C, E show stock-based versus optimal compensation for c^H = 0.02, 0.05, and 0.08; Panels B, D, F show combined versus optimal compensation for the same effort costs.]
FIGURE VII
Optimal Contract versus Stock-Based Compensation
The figure plots compensation paths under the optimal contract and simple stock-based compensation (left panels) and a combined compensation with both stock and cash-flow (bonus) components (right panels). The vertical dotted bar corresponds to a hypothetical occurrence of τ*. The figure reports three pairs of panels, each pair corresponding to a different effort cost c^H. The remaining parameters are in Table I. The stock-based compensation and combined compensation are normalized to provide the same intertemporal utility to the CEO as the optimal contract at time 0, conditional on full revelation.

discounted expected future cash payouts. Finally, we assume the outside option has A_O = 1 and B_O = 1%, which reflects the fact that in our baseline case in Table I the growth of the firm also shifts to g = 1% at τ*.


It is convenient to start from the middle panel (Panel C), which contains our base case c^H = 5% (see Table I). We see two noteworthy features of the optimal contract: first, the payoff to the manager is increasing over time, and it jumps up at τ* upon revelation. The latter discrete increase is a bonus that the manager must receive to have the incentive to reveal the bad news about investment opportunities. Because the manager must be compensated for telling when τ* occurs, the impatient manager has an incentive to work little in order to bring τ* (which is ex post observable) forward as much as possible. An increasing payoff provides the incentive to push τ* into the future.

These characteristics of the optimal contract, namely, an increasing pattern plus a bonus to reveal bad news, contrast strongly with the payoff implicit in stock-based compensation, which is depicted by the solid line. Indeed, the latter implies a fast-growing payoff, which provides the incentive to work hard and increase investment opportunities. At the same time, however, the payment to the CEO drops substantially at τ* if he or she reveals the bad news about growth opportunities. That is, a stock-based contract implicitly punishes the CEO for the good behavior of truth revelation. It follows that this implicit punishment provides an incentive to conceal the bad news, as discussed in Section III.A.

We see an additional interesting characteristic of the optimal contract by comparing Panel C with Panels A and E, which use different effort costs c^H: the higher the effort cost c^H, the stronger must be the increasing pattern in the optimal contract to ensure the manager is willing to exert high effort for longer, and thus the larger must be the bonus when he or she reveals the bad news. Indeed, when c^H is small (top panel), the overall compensation of the manager drops at τ*.
Still, even in this case, the drop is far smaller than the one implied by stock-based compensation.

V.C. The Stock-Plus-Bonus Contract

This discussion shows that optimal compensation is in general rising over time to provide the incentive for high effort and has a bonus component to provide the incentive to reveal bad news. Whereas stock-based compensation implies a drop at τ*, we recall from Section IV.E that a compensation w_t = η_d D_t generates a bonus type of compensation at time τ*. However, we


also recall that this compensation has a bonus that is too generous, and thus the manager has an incentive to work less in order to bring its payment forward. A possibility is to combine stock-based compensation with cash flow–based compensation to mimic the optimal contract, using variables that are observable in the firm. The right-hand-side panels of Figure VII report the contract obtained by combining the stock-based compensation w_t = η P_t with the cash flow–based compensation w_t = η_d D_t, where we normalize η_d = η/(r − g), a value that ensures that the expected utility under stock-based and cash flow–based compensation is the same. We emphasize that we use the dividend D_t to be consistent with the model, but D_t could also be interpreted as a cash payment over time with the same characteristics as dividends in our model, in particular, with a bonus component at revelation. We let ω be the weight on the stock-based compensation, so that the combined compensation is

(47)    w_t = ω η P_t + (1 − ω) η_d D_t.

The compensation is linked only to variables in the firm, and it no longer depends on time t, nor on τ* itself. We must choose ω to ensure that under compensation (47) the manager has an incentive both to exert high effort and to reveal at τ*. Proposition A1 in Online Technical Appendix A contains the formal conditions. We find that ω = 30% works for all three right-hand-side panels in Figure VII.22

Consider first the base case in Panel D. In this case, the combined compensation is similar to the optimal contract, in that it implies both a rising compensation over time and a bonus at τ*. When the cost of effort is higher, as with c^H = 8% in Panel F, the match between the optimal contract and the combined stock-plus-bonus contract is very accurate. Instead, the combined compensation is not close to the optimal contract when the cost is small, such as c^H = 2% in Panel B.

The stock-plus-bonus contract resembles real contracts that include golden parachutes and generous severance packages as a way of compensating the manager for revealing bad news at τ*. An important question pertains to the weight to give to stock in the combined compensation to ensure that the manager exerts effort and reveals bad news (recall that this discussion applies also to deferred compensation; see Section IV.B). Section B.2 in the

22. Again, we rescale η to ensure that in all cases the expected utility at time 0 from the optimal contract and from the combined compensation is the same.


Online Technical Appendix shows that in our numerical exercises, the weight on stock ω in (47) is always below 40% for reasonable parameter values. In addition, unless the return on capital z is excessively high, we also find that ω is strictly positive, showing that some of the CEO's compensation must be in stock to ensure high effort.

Our analysis abstracts from the costs that different incentive schemes impose on the firm itself. Endogenizing the compensation costs to the firm in our dynamic model, however, is quite hard, as dividend flows have to be adjusted depending on the equilibrium price, and the fixed point that sustains the Nash equilibrium is hard to obtain. Nevertheless, we can approximate the size of these costs in the various equilibria by taking their present value at the cost of capital of the firm, and comparing the value of the firm net of these costs across the various incentive schemes. Our analysis (see Section B.3 in the Online Technical Appendix), although approximate, shows that the combined compensation plan discussed earlier achieves the first best without imposing too high a burden on the company.

VI. DISCUSSION AND CONCLUSIONS

Our paper contributes to the debate on executive compensation.23 On one hand, advocates of stock-based compensation highlight the importance of aligning managers' objectives with shareholders' and argue that compensating managers with stock achieves this goal. Detractors argue that stock-based compensation instead gives managers the incentive to misreport the true state of the firm, and in fact even to engage at times in outright fraudulent behavior. This paper sheds new light on the debate by analyzing the ex ante incentive problem of inducing managers to exert costly effort to maximize the firm's investment opportunities, while simultaneously inducing them to reveal the true outlook for the firm ex post and follow an optimal investment rule.
We show that a combined compensation package that uses both a stock-based component and a cash flow–based bonus type of compensation reaches the first best, inducing the manager to

23. See Murphy (1999), Hall and Murphy (2003), Bebchuk and Fried (2004), Gabaix and Landier (2008), and Edmans and Gabaix (2009) for recent discussions.


exert costly effort and reveal any worsening of the investment opportunities, if it happens. Firm value is then maximized. Each component (stock and bonus) in the combined compensation package serves a different purpose, and thus both are necessary "ingredients": the stock-based component increases the manager's effort to expand the growth options of the firm, whereas compensating managers with bonuses when they reveal bad news about long-term growth significantly reduces their incentives to engage in value-destroying activities to support inflated expectations. Thus, our model supports the inclusion of a golden parachute or a generous severance package in the stock-based compensation package of the CEO.

It is crucial to realize, though, that the weight on stock in the combined compensation package is not identical across firms: for instance, high-growth firms should not make much use of stock in their compensation packages, whereas the opposite is true for low-growth firms. That is, there is no fixed rule that works for every type of firm. As a consequence, generalized regulatory decisions that ban stock-based compensation, or boards of directors' decisions on CEO compensation that are based on "conventional wisdom," are particularly dangerous, as they do not consider that each firm needs a different incentive scheme.

Interestingly, our calibrated model also shows that if for any reason implementing the first-best contract is not possible, then it is better for shareholders to compensate CEOs with too much stock rather than too little. In fact, for most parameter specifications firm value under a high effort/conceal equilibrium is much closer to the first best than under a low effort/reveal equilibrium. This finding may explain the widespread use of stock-based compensation in the real world, even in situations where ex post it may appear that it was not optimal for shareholders.
Indeed, our model helps shed light on the incentives and disincentives of CEOs when their compensation is too heavily tilted toward stock. The 1990s high-tech boom and collapse and the 2007–2008 financial crisis offer interesting examples of the mechanism discussed in our model. The 1990s high-tech boom was characterized by expectations of high growth rates and high uncertainty, coupled with high-powered, stock-based executive compensation. Firms with high perceived growth options were priced much higher than firms with similar operating performance but lower perceived growth options. We argue that because of their high-powered incentives,


executives had an incentive to present their firms as high-growth firms, even when the prospects of high future growth faded at the end of the 1990s. Our analysis suggests that high-powered incentives induce the pretense of high growth and lead eventually to a crash of the stock price.

Similarly, the sources of the banking crisis of 2007–2008 may also be partially understood through the mechanism discussed in the paper, as banks share some of the key characteristics assumed in the model. In particular, there is a serious lack of transparency about banks' investment behavior (e.g., complicated derivative structures) as well as about the available investment opportunities. Moreover, banks, and especially investment banks, employ high-powered, stock-based incentives, or contracts that are effectively stock-based.

Consider, for instance, the growth in the mortgage market. It is reasonable to argue that banks' CEOs observed a slowdown in the growth rate of the prime mortgage market. When investment opportunities decline, the first-best action is to disclose the news to investors, return some capital to shareholders, and suffer a capital loss on the stock market. However, if a CEO wants to conceal the decline in the growth of investment opportunities, then our model implies that in order to maintain the pretense that nothing happened, the bank's manager has to first invest in negative-NPV projects, possibly such as subprime mortgages, if the mortgage rate charged does not correspond to the riskiness of the loan.24 Moreover, to keep up the pretense for as long as possible, the manager also has to disinvest and pass on positive-NPV projects. According to the model, the outcome of this suboptimal investment program is a market crash of the stock price and the need for a large recapitalization of the firm.
As the debate about optimal CEO compensation evolves, our model shows that too much stock sensitivity is "bad," as it induces this perverse effect on managers' investment ex post. Nevertheless, too little stock sensitivity has a potentially even worse effect, providing the CEO with no incentive to search for good investment opportunities.

24. Laeven and Levine (2009) provide empirical evidence that bank risk taking is positively correlated with ownership concentration. Fahlenbrach and Stulz (2009), however, conclude that bankers' incentives "cannot be blamed for the credit crisis or for the performance of banks during the crisis" (p. 18), basing their conclusion on evidence of CEOs' personal losses during the crisis. On this last point, however, see our discussion at the end of Section IV.B.


APPENDIX

This Appendix contains only sketches of the proofs of the propositions. Details can be found in a separate Online Technical Appendix available on the authors' Web pages.

Proof of Proposition 1. The capital evolution equation is given by

(48)    dK_t/dt = I_t − δK_t.

From (2), the target level of capital, J_t, is given by J_t = e^{Gt} for t < τ* and J_t = e^{Gτ* + g(t−τ*)} for t ≥ τ*. Imposing K_t = J_t for every t and using (48), the optimal investment policy is given by (8). From (7), the dividend stream is (9). QED

Proof of Proposition 2. For t ≥ τ*, P_fi,t^after stems from integration of future dividends. For t < τ*, the expectation in P_fi,t^before can be computed by integration by parts. QED

Proof of Corollary 1. P_fi,t^before(λ^H) > P_fi,t^before(λ^L) iff A_fi^{λH} > A_fi^{λL}. Substituting, this relation holds iff z − r − δ > 0, which is always satisfied. QED
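The logic of the proof of Proposition 1 (choose I_t so that capital tracks the target J_t = e^{Gt}) can be illustrated by a crude Euler integration of (48); the discretization and the parameter values are ours, for illustration only:

```python
import math

G, delta = 0.08, 0.01   # growth of opportunities and depreciation (Table I-style values)
dt, T = 1e-4, 1.0       # Euler step and horizon (illustrative)

def target(t):
    # J_t = e^{Gt} before tau*
    return math.exp(G * t)

# Policy implied by imposing K_t = J_t in (48): I_t = dJ_t/dt + delta*J_t = (G + delta) e^{Gt}.
K, t = target(0.0), 0.0
while t < T:
    I = (G + delta) * math.exp(G * t)
    K += (I - delta * K) * dt   # Euler step of dK_t/dt = I_t - delta*K_t
    t += dt

# Capital stays within discretization error of the target path.
assert abs(K - target(T)) < 1e-3
```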

Proof of Corollary 2. The first part follows immediately from the fact that a manager/owner values the firm as the present value of future dividends discounted at β. The second part follows from the utility of the manager at τ* and at t < τ*. First, after τ* there is no benefit from exerting effort. Thus the manager/owner's utility is

U_Div,τ* = ∫_{τ*}^{∞} e^{−β(t−τ*)} D_t dt = [(z − g − δ)/(β − g)] e^{Gτ*}.

Before τ*, the utility of the manager for given effort e is

U_Div,t(e) = E[ ∫_t^{τ*} e^{−β(u−t)} D_u^G (1 − c(e)) du + e^{−β(τ*−t)} U_Div,τ* ]
           = e^{Gt} [(z − G − δ)/(β + λ(e) − G)] { 1 − c(e) + λ(e) (z − g − δ)/[(z − G − δ)(β − g)] }.

Given H_Div in (16), the condition U_Div,t(e^H) > U_Div,t(e^L) translates into (38). QED

Proof of Proposition 3. As in Corollary 2, from τ* onward the manager will not exert high effort, resulting in a utility level at τ* given by

U_Stock,τ* = ∫_{τ*}^{∞} e^{−β(s−τ*)} (η P_fi,s^after) ds = [η/(r − g)] [(z − g − δ)/(β − g)] e^{Gτ*}.

Thus, for t < τ* we have

U_Stock,t(e) = E[ ∫_t^{τ*} e^{−β(s−t)} (η P_fi,s^before)(1 − c(e)) ds + e^{−β(τ*−t)} U_Stock,τ* ]
             = [η e^{Gt}/(β + λ(e) − G)] { A_fi^λ (1 − c(e)) + λ(e) (z − g − δ)/[(r − g)(β − g)] }.

Let e^H be the optimal strategy in equilibrium. The price function is then P_fi,t^before with A_fi^{λH}. We then obtain the condition U_Stock,t(e^H) > U_Stock,t(e^L) iff (19) holds. The Nash equilibrium follows. Similarly, if e^L is the optimal strategy in equilibrium, then the price function is P_fi,t^before with A_fi^{λL}. Thus, U_Stock,t(e^H) < U_Stock,t(e^L) iff (19) does not hold. QED

Proof of Lemma 1. Conditional on the decision to conceal g, the manager must provide the dividend stream D_t^G, as any deviation makes him or her lose his or her job. Because he or she cannot affect the stock price, after τ* his or her utility depends only on the length of his or her tenure. Because we normalize the manager's outside options to zero, his or her optimal choice is to maximize T**. QED

Proof of Proposition 4. Point (1). The CEO must mimic D_t^G for as long as possible. From (7), this target determines the investments I_t (in point (2) of the proposition) and thus the evolution of capital dK_t/dt = I_t − δK_t for a given initial condition K_τ*. From the monotonicity property of differential equations in their initial value and the definition of T** as the time at which K_T** = ξ J_T**, T** must be increasing in K_τ*. The claim follows.

Point (3). At time τ* we have K_τ* = J_τ* = e^{Gτ*}, which implies dK_t/dt|_{τ*} = G e^{Gτ*} > dJ_t/dt|_{τ*} = g e^{Gτ*}. Thus, the trajectory of capital at τ* is above J_t. By continuity, there is an interval after τ* on which K_t > J_t. Solving explicitly for the capital evolution shows that as t increases, K_t − J_t → −∞. Because K_{τ*+dt} − J_{τ*+dt} > 0, there must be a t_1 at which K_{t_1} − J_{t_1} = 0. Define h* ≡ t_1 − τ*; substituting into K_{t_1} − J_{t_1} = 0, h* must satisfy

(49)    0 = [(z − g − δ)/(δ + g)] e^{−Gh*} (e^{gh*} − e^{−δh*}) + [(z − G − δ)/(δ + G)] (e^{−(δ+G)h*} − 1).


For t > t_1, K_t < J_t. Solving explicitly for the capital evolution shows that K_t − J_t diverges to −∞ as t → ∞. From the condition K_T** − ξ J_T** = 0, and defining h** ≡ T** − τ*, we obtain the equation defining h**:

(50)    0 = 1 + (e^{−(G−g)h*} − 1) e^{(z−G−δ)(h**−h*)} − ξ e^{−(G−g)h**}. QED

Proof of Proposition 5. Let t > h**. If a cash shortfall has not been observed by t, then a shift cannot have occurred before t − h**. Bayes' formula implies that the time T** = τ* + h**, conditional on not observing a cash shortfall by time t, has the conditional distribution

F_T**(t′ | T** > t) = Pr(τ* < t′ − h** | τ* > t − h**) = [e^{−λ(t−h**)} − e^{−λ(t′−h**)}] / e^{−λ(t−h**)} = 1 − e^{−λ(t′−t)}.

That is, T** has the exponential density f(T** | no cash shortfall by t) = λ e^{−λ(T**−t)}. The value of P_ai,t for t > h** in (22) then follows from the pricing formula (21).

Let t < h**; then the conditional distribution of T** is zero on the range [t, h**], as even a shift at 0 would be revealed only at h**. The density is then f(T**) = λ e^{−λ(T**−h**)} 1_{(T** > h**)}. Using this density to compute the expectation, we find

(51)    P_ai,t = (z − G − δ) e^{Gt} [1 − e^{−(r−G)(h**−t)}]/(r − G) + e^{rt} e^{(G−r)h**} A_ai^λ. QED
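The memorylessness step in the proof of Proposition 5 can be checked numerically: conditioning the shifted exponential T** = τ* + h** on no cash shortfall by t leaves an exponential law from t onward. The values of λ, h**, and the evaluation points below are arbitrary illustrative choices:

```python
import math

lam, h2 = 0.10, 12.0   # intensity lambda and lag h** (illustrative)

def F_conditional(t_prime, t):
    """Pr(T** <= t' | T** > t) with T** = tau* + h**, tau* ~ Exp(lam),
    computed from Bayes' formula as in the proof (valid for t > h**)."""
    num = math.exp(-lam * (t - h2)) - math.exp(-lam * (t_prime - h2))
    return num / math.exp(-lam * (t - h2))

# The h** terms cancel: the conditional distribution is 1 - e^{-lam (t' - t)}.
for t, tp in [(15.0, 16.0), (20.0, 33.0), (13.0, 40.0)]:
    assert abs(F_conditional(tp, t) - (1 - math.exp(-lam * (tp - t)))) < 1e-12
```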

Proof of Proposition 6. Let τ* > h**. There are two equilibria to consider: a reveal equilibrium and a conceal equilibrium. Expressions (24), (25), and (26) contain the expected utilities at τ* under conceal and reveal strategies in the two possible equilibria. A reveal equilibrium occurs iff U_Stock,τ*^reveal > U_Stock,τ*^{conceal,fi}, and a conceal equilibrium occurs iff U_Stock,τ*^{conceal,ai} > U_Stock,τ*^reveal. The conditions in the claim are obtained by simple substitution. Finally, if τ* < h**, then U_Stock,τ*^{conceal,ai} depends on the price in (51). Details are left to the Online Technical Appendix. QED

Proof of Proposition 7. Let t ≥ h**. In a conceal equilibrium with high effort, P_ai,t in (22) with A_ai^{λH} determines the wage w_t = η P_ai,t. Using expression (29) with U_τ* = U_Stock,τ*^{conceal,ai}, the expected utility under effort e is

U_Stock,t(e) = e^{Gt} η A_ai^{λH} [1 − c(e) + λ(e) H_Stock]/[β + λ(e) − G],

where λ(e) and c(e) are the intensity and the cost of effort under effort choice e. The condition in the proposition follows from the maximization condition U_Stock,t(e^H) > U_Stock,t(e^L). Finally, given e^H chosen by the manager, λ^H indeed applies in equilibrium, a conceal equilibrium occurs at τ*, and thus the price function is P_ai,t in (22), concluding the proof. A similar proof holds for t < h**. The expressions are in the Online Technical Appendix. QED

Proof of Propositions 8, 9, and 10. See the Online Technical Appendix.

Proof of Proposition 11. a. The two utilities under reveal and conceal strategies are

(52)    U_τ*^Reveal = [A_a/(β − B_a)] e^{(B_a+C_a)τ*};    U_τ*^Conceal = A_b e^{B_b τ*} [1 − e^{−(β−B_b)h**}]/(β − B_b).

It follows that U_τ*^Reveal ≥ U_τ*^Conceal for all τ* iff conditions (43) are satisfied.

b. For i = H, L and t < τ* we can compute

U_t^i = [(1 − c_i) A_b e^{B_b t}]/(β + λ_i − B_b) + [λ_i A_a e^{(C_a+B_a)t}] / [(β − B_a)(β + λ_i − (C_a + B_a))].

Tedious algebra shows that U_t^H ≥ U_t^L is satisfied for all t iff conditions (44) are satisfied.

c. For all t and τ* we must have U_t ≥ U_t^O. For t < τ*, the constraints obtained above imply that U_t^H is the relevant utility. Imposing B_b = C_a + B_a, U_t^H ≥ A_O e^{B_O t} iff

[e^{B_b t}/(β + λ_H − B_b)] { A_b(1 − c_H) + λ_H A_a/(β − B_a) } ≥ A_O e^{B_O t}.

This condition is satisfied for all t iff the conditions in (45) are satisfied. Similarly, for t ≥ τ* the constraints above imply that U_t = U_t^Reveal, given by (52). Thus, the outside option condition is

[A_a/(β − B_a)] e^{C_a τ*} e^{B_a t} ≥ A_O e^{B_O t},


which is satisfied for all t and τ* iff conditions (45) are satisfied. QED

HARVARD UNIVERSITY AND NATIONAL BUREAU OF ECONOMIC RESEARCH
UNIVERSITY OF CHICAGO, CENTER FOR ECONOMIC AND POLICY RESEARCH, AND NATIONAL BUREAU OF ECONOMIC RESEARCH
HEBREW UNIVERSITY AND CENTER FOR ECONOMIC AND POLICY RESEARCH

REFERENCES

Aghion, Philippe, and Jeremy Stein, "Growth vs. Margins: Destabilizing Consequences of Giving the Stock Market What It Wants," Journal of Finance, 63 (2008), 1025–1058.
Bebchuk, Lucian, and Jesse Fried, Pay without Performance: The Unfulfilled Promise of Executive Compensation (Cambridge, MA: Harvard University Press, 2004).
Bebchuk, Lucian, and Lars Stole, "Do Short-Term Objectives Lead to Under- or Overinvestment in Long-Term Projects?" Journal of Finance, 48 (1993), 719–729.
Beneish, Messod, "Incentives and Penalties Related to Earnings Overstatements That Violate GAAP," Accounting Review, 74 (1999), 425–457.
Bergstresser, Daniel, and Thomas Philippon, "CEO Incentives and Earnings Management," Journal of Financial Economics, 80 (2006), 511–529.
Blanchard, Olivier, "Debt, Deficits, and Finite Horizons," Journal of Political Economy, 93 (1985), 223–247.
Bolton, Patrick, Jose Scheinkman, and Wei Xiong, "Executive Compensation and Short-Termist Behavior in Speculative Markets," Review of Economic Studies, 73 (2006), 577–610.
Burns, Natasha, and Simi Kedia, "The Impact of Performance-Based Compensation on Misreporting," Journal of Financial Economics, 79 (2006), 35–67.
Clementi, Gian Luca, and Hugo Hopenhayn, "A Theory of Financing Constraints and Firm Dynamics," Quarterly Journal of Economics, 121 (2006), 229–265.
DeMarzo, Peter, and Michael Fishman, "Agency and Optimal Investment Dynamics," Review of Financial Studies, 20 (2007), 151–188.
Easterbrook, Frank H., "Two Agency-Cost Explanations of Dividends," American Economic Review, 74 (1984), 650–659.
Edmans, Alex, and Xavier Gabaix, "Is CEO Pay Really Inefficient? A Survey of New Optimal Contracting Theories," European Financial Management, 15 (2009), 486–496.
Edmans, Alex, Xavier Gabaix, and Augustin Landier, "A Multiplicative Model of Optimal CEO Incentives in Market Equilibrium," Review of Financial Studies, 22 (2009), 4881–4917.
Eisfeldt, Andrea, and Adriano Rampini, "Managerial Incentives, Capital Reallocation, and the Business Cycle," Journal of Financial Economics, 87 (2008), 177–199.
Fahlenbrach, Rüdiger, and Rene M. Stulz, "Bank CEO Incentives and the Credit Crisis," Charles A. Dice Center for Research in Financial Economics Working Paper No. 2009-13, 2009.
Gabaix, Xavier, and Augustin Landier, "Why Has CEO Pay Increased So Much?" Quarterly Journal of Economics, 123 (2008), 49–100.
Goldman, Eitan, and Steve Slezak, "An Equilibrium Model of Incentive Contracts in the Presence of Information Manipulation," Journal of Financial Economics, 80 (2006), 603–626.
Graham, John, Campbell Harvey, and Shiva Rajgopal, "The Economic Implications of Corporate Financial Reporting," Journal of Accounting and Economics, 40 (2005), 3–73.
Hall, Brian, and Kevin J. Murphy, "The Trouble with Stock Options," Journal of Economic Perspectives, 17 (2003), 49–70.


Healy, Paul, "The Effect of Bonus Schemes on Accounting Decisions," Journal of Accounting and Economics, 7 (1985), 85–107.
Inderst, Roman, and Holger Mueller, "CEO Compensation and Private Information: An Optimal Contracting Perspective," New York University Working Paper, 2006.
Jensen, Michael, "Agency Cost of Overvalued Equity," Financial Management, 34 (2005), 5–19.
Johnson, Shane, Harley E. Ryan, Jr., and Yisong S. Tian, "Managerial Incentives and Corporate Fraud: The Sources of Incentives Matter," Review of Finance, 13 (2009), 115–145.
Ke, Bin,

QUARTERLY JOURNAL OF ECONOMICS Vol. CXXV

November 2010

Issue 4

WHAT COMES TO MIND∗ NICOLA GENNAIOLI AND ANDREI SHLEIFER We present a model of intuitive inference, called “local thinking,” in which an agent combines data received from the external world with information retrieved from memory to evaluate a hypothesis. In this model, selected and limited recall of information follows a version of the representativeness heuristic. The model can account for some of the evidence on judgment biases, including conjunction and disjunction fallacies, but also for several anomalies related to demand for insurance.

I. INTRODUCTION Beginning in the early 1970s, Daniel Kahneman and Amos Tversky (hereafter KT 1972, 1974, 1983) published a series of remarkable experiments documenting significant deviations from the Bayesian theory of judgment under uncertainty. Although KT’s heuristics and biases program has survived substantial experimental scrutiny, models of heuristics have proved elusive.1 In this paper, we present a memory-based model of probabilistic inference that accounts for quite a bit of the experimental evidence. Heuristics describe how people evaluate hypotheses quickly, based on what first comes to mind. People may be entirely capable of more careful deliberation and analysis, and perhaps of better ∗ We are deeply grateful to Josh Schwartzstein for considerable input, and to Pedro Bordalo, Shane Frederick, Xavier Gabaix, Matthew Gentzkow, Daniel Hojman, Elizabeth Kensinger, Daniel Kahneman, Lawrence Katz, Scott Kominers, David Laibson, Sendhil Mullainathan, Giacomo Ponzetto, Drazen Prelec, Mathew Rabin, Antonio Rangel, Jesse Shapiro, Jeremy Stein, Richard Thaler, and three anonymous referees for extremely helpful comments. Gennaioli thanks the Spanish Ministerio de Ciencia y Tecnologia (ECO 2008-01666 and Ramon y Cajal grants), the Barcelona GSE Research Network, and the Generalitat de Catalunya for financial support. Shleifer thanks the Kauffman Foundation for research support. 1. Partial exceptions include Griffin and Tversky (1992), Tversky and Koehler (1994), Barberis, Shleifer, and Vishny (1998), Rabin and Schrag (1999), Mullainathan (2000), and Rabin (2002), to which we return in Section III.C. C 2010 by the President and Fellows of Harvard College and the Massachusetts Institute of

Technology. The Quarterly Journal of Economics, November 2010

decisions, but not when they do not think things through. We model such quick and intuitive inference, which we refer to as “local thinking,” based on the idea that only some decision-relevant data come to mind initially. We describe a problem in which a local thinker evaluates a hypothesis in light of some data, but with some residual uncertainty remaining. The combination of the hypothesis and the data primes some thoughts about the missing data. We refer to realizations of the missing data as scenarios. We assume that working memory is limited, so that some scenarios, but not others, come to the local thinker’s mind. He makes his judgment in light of what comes to mind, but not of what does not. Our approach is consistent with KT’s insistence that judgment under uncertainty is similar to perception. Just as an individual fills in details from memory when interpreting sensory data (for example, when looking at the duck–rabbit or when judging distance from the height of an object), the decision maker recalls missing scenarios when he evaluates a hypothesis. Kahneman and Frederick (2005) describe how psychologists think about this process: “The question of why thoughts become accessible—why particular ideas come to mind at particular times—has a long history in psychology and encompasses notions of stimulus salience, associative activation, selective attention, specific training, and priming” (p. 271). Our key assumption describes how scenarios become accessible from memory. We model such accessibility by specifying that scenarios come to mind in order of their representativeness, defined as their ability to predict the hypothesis being evaluated relative to other hypotheses. This assumption formalizes aspects of KT’s representativeness heuristic, modeling it as selection of stereotypes through limited and selective recall. 
The combination of both limited and selected recall drives the main results of the paper and helps account for biases found in psychological experiments. In the next section, we present an example illustrating the two main ideas of our approach. First, the data and the hypothesis being evaluated together prime the recall of scenarios used to represent this hypothesis. Second, the representative scenarios that are recalled need not be the most likely ones, and it is precisely in those instances when a hypothesis is represented by an unlikely scenario that judgment is severely biased. In Section III, we present the formal model and compare it to some earlier theoretical research on heuristics and biases.

In Section IV, we present the main theoretical results of the paper and establish four propositions. The first two deal with the magnitude of judgment errors. Proposition 1 shows how judgment errors depend on the likelihood of the recalled (representative) scenarios. Proposition 2 then shows how a local thinker reacts to data, and in particular overreacts to data that change his representation of the hypothesis he evaluates. The next two propositions deal with perhaps the most fascinating judgment biases, namely failures of extensionality. Proposition 3 describes the circumstances in which a local thinker exhibits the conjunction fallacy, the belief that a specific instance of an event is more likely than the event itself. Proposition 4 then shows how a local thinker exhibits the disjunction fallacy, the belief that the probability of a broadly described group of events (such as “other”) is lower than the total probability of events in that group when those events are explicitly mentioned. In Section V, we show how the propositions shed light on a range of experimental findings on heuristics and biases. In particular, we discuss the experiments on neglect of base rates and on insensitivity to predictability, as well as on the conjunction and disjunction fallacies. Among other things, the model accounts for the famous Linda (KT 1983) and car mechanic (Fischhoff, Slovic, and Lichtenstein 1978) experiments. In Section VI, we apply the model, and in particular its treatment of the disjunction fallacy, to individual demand for insurance. Cutler and Zeckhauser (2004) and Kunreuther and Pauly (2005) summarize several anomalies in that demand, including overinsurance of specific narrow risks, underinsurance of broad risks, and preference for low deductibles in insurance policies. Our model sheds light on these anomalies. Section VII concludes by discussing some broader conceptual issues.

II. AN EXAMPLE: INTUITIVE REASONING IN AN ELECTORAL CAMPAIGN

We illustrate our model in the context of a voter’s reaction to a blunder committed by a political candidate. Popkin (1991) argues that intuitive reasoning plays a key role in this context and helps explain the significance that ethnic voters in America attach to candidates’ knowledge of their customs. He further suggests that although in many instances voters’ intuitive assessments work fairly well, they occasionally allow even minor blunders such as the one described below to influence their votes.

In 1972, during the New York primaries, Senator George McGovern of South Dakota was courting the Jewish vote, trying to demonstrate his sympathy for Israel. As Richard Reeves wrote for New York magazine in August, “During one of McGovern’s first trips into the city he was walked through Queens by city councilman Matthew Troy and one of their first stops was a hot dog stand. ‘Kosher?’ said the guy behind the counter, and the prairie politician looked even blanker than he usually does in big cities. ‘Kosher!’ Troy coached him in a husky whisper. ‘Kosher and a glass of milk,’ said McGovern.” (Popkin, 1991, p. 2)

Evidently, McGovern was not aware that milk and meat cannot be combined in a kosher meal. We use this anecdote to introduce our basic formalism and to show how “local thinking” can illuminate the properties of voters’ intuitive assessments. We start with a slightly different example in which intuitive assessments work well and then return to hot dogs. Suppose that a voter only wants to assess the probability that a candidate is qualified. Before the voter hears the candidate say anything, he assesses this probability to be 1/2. Suppose that the candidate declares at a Jewish campaign event that Israel was the aggressor in the 1967 war, an obvious inaccuracy. How does the voter’s assessment change? For a Bayesian voter, the crucial question is the extent to which this statement—which surely signals the candidate’s lack of familiarity with Jewish concerns—is also informative about the candidate’s overall qualification. Suppose that the distribution of candidate types conditional on calling Israel the aggressor is described by Table I.A. Not only is “calling Israel the aggressor in the 1967 war” very informative about a candidate’s unfamiliarity with Jewish concerns (82.5% of the candidates who say this are unfamiliar), but unfamiliarity is in turn very informative about qualification, at least to a Jewish voter (relative to a prior of 1/2 before calling Israel the aggressor). The latter property is reflected in the qualification estimate of a Bayesian voter, which is equal to

(1)  Pr(qualified) = Pr(qualified, familiar) + Pr(qualified, unfamiliar) = 0.175,

where we suppress conditioning on “calling Israel aggressor.” The Bayesian reduces his assessment of qualification from 50% to 17.5% because the blunder is so informative. Suppose now that Table I.A, rather than being immediately available to the voter, is stored in his associative long-term

TABLE I.A
DISTRIBUTION OF CANDIDATE TYPES: BLUNDER INFORMATIVE ABOUT QUALIFICATION

Calls Israel aggressor in 1967 war     Familiarity with Jewish concerns
Qualification of candidate             Familiar        Unfamiliar
Qualified                              0.15            0.025
Unqualified                            0.025           0.8

memory and that—due to working memory limits—not all candidate types come to mind to aid the voter’s evaluation of the candidate’s qualification.2 We call such a decision maker a “local thinker” because, unlike the Bayesian, he does not use all the data in Table I.A but only the information he obtains by sampling specific examples of qualified and unqualified candidates from memory. Crucially, we assume in KT’s spirit that the candidates who first come to the voter’s mind are representative, or stereotypical, qualified and unqualified candidates. Specifically, the voter’s mind automatically fits the most representative familiarity level—or “scenario”—for each level of qualification of the candidate. We formally define the representative scenario as the familiarity level that best predicts, that is, is relatively more associated with, the respective qualification level. These representative scenarios for a qualified and an unqualified candidate are then given by

(2)  s(qualified) = argmax_{s ∈ {familiar, unfamiliar}} Pr(qualified | s),

(3)  s(unqualified) = argmax_{s ∈ {familiar, unfamiliar}} Pr(unqualified | s).
In Table I.A, this means that a stereotypical qualified candidate is familiar with Jewish concerns, whereas a stereotypical unqualified one is unfamiliar with such concerns.3 This process reduces the voter’s actively processed information to the diagonal (bold) entries in Table I.B.

2. Throughout the paper, we take the long-term associative memory database (in this example, Table I.A) as given. Section III discusses how, depending on the problem faced by the agent, such a database might endogenously change and what could be some of the consequences for judgments.

3. Indeed, Pr(qualified | familiar) = .15/(.15 + .025) = .86 > .03 = .025/(.025 + .8) = Pr(qualified | unfamiliar). The reverse is true for an unqualified candidate.

TABLE I.B
LOCAL THINKER’S REPRESENTATION: BLUNDER INFORMATIVE ABOUT QUALIFICATION

Calls Israel aggressor in 1967 war     Familiarity with Jewish concerns
Qualification of candidate             Familiar        Unfamiliar
Qualified                              0.15*           0.025
Unqualified                            0.025           0.8*

Note: The diagonal entries marked * (bold in the original) are the ones recalled by the local thinker.

TABLE I.C
LOCAL THINKER’S REPRESENTATION: BLUNDER UNINFORMATIVE ABOUT QUALIFICATION

Drinks milk with a hot dog     Familiarity with Jewish concerns
Qualification of candidate     Familiar        Unfamiliar
Qualified                      0.024           0.43
Unqualified                    0.026           0.52

Because a local thinker considers only the stereotypical qualified and unqualified candidates, his assessment (indicated by superscript L) is equal to

(4)  Pr^L(qualified) = Pr(qualified, familiar) / [Pr(qualified, familiar) + Pr(unqualified, unfamiliar)] ≈ .158.
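For concreteness, these two computations can be checked in a few lines. The following Python sketch is ours (illustrative only, not part of the original article); it encodes the Table I.A joint probabilities and reproduces the Bayesian estimate (1) and the local thinker's estimate (4).

```python
# Illustrative sketch (ours, not the paper's): estimates (1) and (4) from Table I.A.
# Joint probabilities Pr(qualification, familiarity), conditional on the blunder.
p = {
    ("qualified", "familiar"): 0.15,
    ("qualified", "unfamiliar"): 0.025,
    ("unqualified", "familiar"): 0.025,
    ("unqualified", "unfamiliar"): 0.8,
}

# (1) Bayesian: sum over both familiarity scenarios.
bayes = p[("qualified", "familiar")] + p[("qualified", "unfamiliar")]  # 0.175

# (4) Local thinker: only the two stereotypes come to mind,
# (qualified, familiar) and (unqualified, unfamiliar).
local = p[("qualified", "familiar")] / (
    p[("qualified", "familiar")] + p[("unqualified", "unfamiliar")]
)  # about 0.158
```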

Comparing (4) with (1), we see that a local thinker does almost as well as a Bayesian. The reason is that in Table I.A stereotypes capture a big chunk of the respective hypotheses’ probabilities. Although the local thinker does not recall that some unfamiliar candidates are nonetheless qualified, this is not a big problem for assessment because in reality, and not only in stereotypes, familiarity and qualification largely go together. The same idea suggests, however, that local thinkers sometimes make very biased assessments. Return to the candidate unaware that drinking milk with hot dogs is not kosher. Suppose that, after this blunder, the distribution of candidate types is as shown in Table I.C. As in the previous case, in Table I.C the candidate’s drinking milk with hot dogs is very informative about his unfamiliarity with Jewish concerns, but now such unfamiliarity is extremely

uninformative about the candidate’s qualifications. Indeed, 95% of the candidates do not know the rules of kashrut, including the vast majority of both the qualified and the unqualified ones. In this example a Bayesian assesses Pr(qualified) = .454; he realizes that drinking milk with a hot dog is nearly irrelevant to qualification. The local thinker, in contrast, still views the stereotypical qualified candidate as one familiar with his concerns and the stereotypical unqualified candidate as unfamiliar. Formally, the scenario “familiar” yields a higher probability of the candidate being qualified (.024/(.024 + .026) = .48) than the scenario “unfamiliar” (.43/(.43 + .52) = .45). Likewise, the scenario “unfamiliar” yields a higher probability of the candidate being unqualified (.55) than the scenario “familiar” (.52). The local thinker then estimates that

(5)  Pr^L(qualified) = Pr(qualified, familiar) / [Pr(qualified, familiar) + Pr(unqualified, unfamiliar)] ≈ .044,

which differs from the Bayesian’s assessment by a factor of nearly 10! In contrast to the previous case, the local thinker grossly overreacts to the blunder and misestimates probabilities. Now local thinking generates massive information loss and bias. Why this difference in the examples? After all, in both examples the stereotypical qualified candidate is familiar with the voter’s concerns, whereas the stereotypical unqualified candidate is unfamiliar because, in both cases, familiarity and qualification are positively associated in reality. The key difference lies in how much of the probability of each hypothesis is accounted for by the stereotype. In the initial, more standard, example, almost all qualified candidates are familiar and unqualified ones are unfamiliar, so stereotypical qualified and unqualified candidates are both extremely common. When stereotypes are not only representative but also likely, the local thinker’s bias is kept down. In the second example, in contrast, the bulk of both qualified and unqualified candidates are unfamiliar with the voter’s concerns, which implies that the stereotypical qualified candidate (familiar with these concerns) is very uncommon, whereas the stereotypical unqualified candidate is very common. By focusing on the stereotypical candidates, the local thinker drastically underestimates qualification because he forgets that many qualified candidates

are also unfamiliar with the rules of kashrut! When the stereotype for one hypothesis is much less likely than that for the other hypothesis, the local thinker’s bias is large. Put differently, in our examples, after seeing a blunder the local thinker always downgrades qualification by a large amount because the stereotypical qualified candidate is very unlikely to commit any blunder. This process leads to good judgments in situations where the blunder is informative not only about the dimension defining the stereotype (familiarity) but also about qualification (Table I.A), but it leads to large biases when the blunder is informative about the dimension defining the stereotype but not about the target assessment of qualification (Table I.C). We capture this dichotomy with the distinction between the representativeness and likelihood of scenarios. This distinction plays a key role in accounting for the biases generated by the use of heuristics. A further connection of our work to research in psychology is the idea of attribute substitution. According to Kahneman and Frederick (2005, p. 269), “When confronted with a difficult question, people may answer an easier one instead and are often unaware of the substitution.” Instead of answering a hard question, “is the candidate qualified?,” the voter answers an easier one, “is the candidate familiar with my concerns?” We show that such attribute substitutions might occur because, rather than thinking about all possibilities, people think in terms of stereotypical candidates, which associate qualification and familiarity. In many situations, such substitution works, as in our initial example, where familiarity is a good stand-in for qualification. But in some situations, the answer to a substitute question is not the same as the answer to the original one, as when many candidates unfamiliar with the rules of kashrut are nonetheless qualified. 
It is in those situations that intuitive reasoning leads to biased judgment, as our analysis seeks to show.
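The arithmetic of the hot dog example can be verified mechanically. Below is an illustrative Python sketch of our own (not part of the original article) that encodes Table I.C, selects each hypothesis's representative scenario as in (2) and (3), and compares the Bayesian and local-thinker estimates.

```python
# Illustrative sketch (ours, not the paper's): scenario selection by
# representativeness, applied to Table I.C, where the blunder (milk with a
# hot dog) is informative about familiarity but not about qualification.
p = {
    ("qualified", "familiar"): 0.024,
    ("qualified", "unfamiliar"): 0.43,
    ("unqualified", "familiar"): 0.026,
    ("unqualified", "unfamiliar"): 0.52,
}
scenarios = ["familiar", "unfamiliar"]

def representative_scenario(h):
    # s(h) = argmax_s Pr(h | s): the scenario that best predicts h.
    return max(
        scenarios,
        key=lambda s: p[(h, s)] / (p[("qualified", s)] + p[("unqualified", s)]),
    )

s_q = representative_scenario("qualified")    # "familiar"
s_u = representative_scenario("unqualified")  # "unfamiliar"

# Bayesian estimate of Pr(qualified): about .454.
bayes = p[("qualified", "familiar")] + p[("qualified", "unfamiliar")]

# Local thinker's estimate: only the two stereotypes come to mind; about .044.
local = p[("qualified", s_q)] / (p[("qualified", s_q)] + p[("unqualified", s_u)])
```

Even though the stereotypes are the same as in Table I.A, the local estimate now differs from the Bayesian one by a factor of roughly ten, because the stereotypical qualified candidate is very unlikely.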

III. THE MODEL

The world is described by a probability space (X, π), where X ≡ X1 × · · · × XK is a finite state space generated by the product of K ≥ 1 dimensions and the function π : X → [0, 1] maps each element x ∈ X into a probability π(x) ≥ 0 such that Σ_{x∈X} π(x) = 1. In the tables in Section II, the dimensions of X are the candidate’s qualification and his familiarity with voter concerns; that is, K = 2 (conditional on the candidate’s blunder, which is a dimension kept

implicit), the elements x ∈ X are candidate types, and the entries in the tables represent the probability measure π. An agent evaluates the probability of N > 1 hypotheses h1, . . . , hN in light of data d. Hypotheses and data are events of X. That is, hr (r = 1, . . . , N) and d are subsets of X. If the agent receives no data, d = X: nothing is ruled out. Hypotheses are exhaustive but may be nonexclusive. Exhaustiveness is not crucial but avoids trivial cases where a hypothesis is overestimated simply because the agent cannot conceive of any alternative to it. In (X, π), the probability of hr given d is determined by Bayes’ rule as

(6)  Pr(hr | d) = Pr(hr ∩ d)/Pr(d) = Σ_{x ∈ hr ∩ d} π(x) / Σ_{x ∈ d} π(x).

In our example, (1) follows from (6) because in Table I.A the probabilities are normalized by Pr(calls Israel aggressor). As we saw in Section II, a local thinker may fail to produce the correct assessment (6) because he considers only a subset of elements x, those belonging to what we henceforth call his “represented state space.”

III.A. The Represented State Space

The represented state space is shaped by the recall of elements in X prompted by the hypotheses hr, r = 1, . . . , N. Recall is governed by two assumptions. First, working memory limits the number of elements recalled by the agent to represent each hypothesis. Second, the agent recalls for each hypothesis the most “representative” elements. We formalize the first assumption as follows:

A1 (Local Thinking). Given d, let Mr denote the number of elements in hr ∩ d, r = 1, . . . , N. The agent represents each hr ∩ d using a number min(Mr, b) of elements x in hr ∩ d, where b ≥ 1 is the maximum number of elements the agent can recall per hypothesis.

The set hr ∩ d includes all the elements consistent with hypothesis hr and with the data d. When b ≥ Mr, the local thinker recalls all of these elements, and his representation of hr ∩ d is perfect.
The more interesting case occurs when at least some hypotheses are broad, consisting of Mr > b elements.4 In this case, the agent’s representations are imperfect.

4. A1 is one way to capture limited recall. Our substantive results would not change if we alternatively assumed that the agent discounts the probability of certain elements.

In particular, a fully local thinker, with b = 1, must collapse the entire set hr ∩ d into a single element. To do so, he automatically selects what we call a “scenario.” To give an intuitive but still formal definition of a scenario, consider the class of problems where hr and d specify exact values (rather than ranges) for some dimensions of X. In this case, hr ∩ d takes the form

(7)  hr ∩ d ≡ {x ∈ X | xi = x̂i}, for a given set of dimensions i ∈ [1, . . . , K] and values x̂i ∈ Xi,

where x̂i is the exact value taken by the ith dimension in the hypothesis or data. The remaining dimensions are unrestricted. This is consistent with the example in Section II, where each hypothesis specifies one qualification level (e.g., unqualified), but the remaining familiarity dimension is left free (once more leaving the blunder implicit). In this context, a scenario for a hypothesis is a specification of its free familiarity dimension (e.g., unfamiliar). More generally, when a hypothesis hr ∩ d belongs to class (7), its possible scenarios are defined as follows:

DEFINITION 1. Denote by Fr the set of dimensions in X left free by hr ∩ d. If Fr is nonempty, a scenario s for hr ∩ d is any event s ≡ {x ∈ X | xt = x̂t for all t ∈ Fr}, for given values x̂t ∈ Xt. If Fr is empty, the scenario for hr ∩ d is s ≡ X. Sr is the set of possible scenarios for hr ∩ d.

A scenario fills in the details missing from the hypothesis and data, identifying a single element in hr ∩ d, which we denote by s ∩ hr ∩ d ∈ X. How do scenarios come to mind? We assume that hypotheses belonging to class (7) are represented as follows:

A2 (Recall by Representativeness). Fix d and hr. Then the representativeness of scenario sr ∈ Sr for hr given d is defined as

(8)  Pr(hr | sr ∩ d) = Pr(hr ∩ sr ∩ d) / [Pr(hr ∩ sr ∩ d) + Pr(h̄r ∩ sr ∩ d)],

where h̄r is the complement X\hr in X of hypothesis hr. The agent represents hr with the b most “representative” scenarios srk ∈ Sr, k = 1, . . . , b, where index k is decreasing in representativeness and where we set srk = ∅ for k > Mr.

A2 introduces two key notions. First, A2 defines the representativeness of a scenario for a hypothesis hr as the degree to which that scenario is associated with hr relative to its complement h̄r. Second, A2 posits that the local thinker represents hr

by recalling only the b most “representative” scenarios for it. The most interesting case arises when b = 1, as the agent represents hr with the most “representative” scenario sr1. It is useful to call the intersection of the data, the hypothesis, and that scenario (i.e., sr1 ∩ hr ∩ d ∈ X) the “stereotype” that immediately comes to the local thinker’s mind. Expression (8) then captures the idea that an element of a hypothesis or class is stereotypical not only if it is common in that class, but also—and perhaps especially—if it is uncommon in other classes. In our model, the stereotype for one hypothesis is independent of the other hypotheses being explicitly evaluated by the agent: expression (8) only refers to the relationship between a hypothesis hr and its complement h̄r in X. From A2, the represented state space is immediately defined as follows:

DEFINITION 2. Given data d and hypotheses hr, r = 1, . . . , N, the agent’s representation of any hypothesis hr is defined as h̃r(d) ≡ ∪_{k=1,...,b} (srk ∩ hr ∩ d), and the agent’s represented state space X̃ is defined as X̃ ≡ ∪_{r=1,...,N} h̃r(d).

The represented state space is simply the union of all elements recalled by the agent for each of the assessed hypotheses. Definition 2 applies to hypotheses belonging to the class in (7), but it is easy to extend it to general hypotheses which, rather than attributing exact values, restrict the range of some dimensions of X. The Appendix shows how to do this and to apply our model to the evaluation of these hypotheses as well. The only result in what follows that relies on restricting the analysis to the class of hypotheses in (7) is Proposition 1. As we show in the Appendix, all other results can be easily extended to fully general classes of hypotheses.

III.B. Probabilistic Assessments by a Local Thinker

In the represented state space, the local thinker computes the probability of ht as

(9)  Pr^L(ht | d) = Pr(h̃t(d)) / Pr(X̃),

which is the probability of the representation of ht divided by that of the represented state space X̃. Evaluated at b = 1, (9) is the counterpart of expression (4) in Section II.
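To make the mechanics concrete, here is an illustrative Python sketch of our own (under the simplifying assumptions of exclusive hypotheses and a two-dimensional state space of class (7)) implementing A1, A2, Definition 2, and equation (9).

```python
# Illustrative sketch (ours, not the paper's): a local thinker's assessment
# per equation (9), assuming exclusive hypotheses on a two-dimensional state
# space (assessed dimension x free dimension).
def local_estimate(pi, hypotheses, scenarios, b=1):
    """pi maps (hypothesis value, scenario value) -> joint probability.
    Returns {h: Pr^L(h)} per Definition 2 and equation (9)."""
    def representativeness(s, h):
        # A2: Pr(h | s), the degree to which scenario s predicts h.
        total = sum(pi[(h2, s)] for h2 in hypotheses)
        return pi[(h, s)] / total if total > 0 else 0.0

    represented = {}
    for h in hypotheses:
        # Recall the b most representative scenarios for h (A1 + A2) ...
        ranked = sorted(scenarios, key=lambda s: representativeness(s, h), reverse=True)
        # ... and sum the recalled probability mass, Pr(h~(d)).
        represented[h] = sum(pi[(h, s)] for s in ranked[:b])

    norm = sum(represented.values())  # Pr(X~), the represented state space
    return {h: mass / norm for h, mass in represented.items()}
```

With the Table I.A joint probabilities, b = 1 yields approximately .158 for "qualified," and b = 2 recovers the Bayesian .175, consistent with the discussion of (9) above.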

Expression (9) highlights the role of local thinking. If b ≥ Mr for all r = 1, . . . , N, then X̃ = X ∩ d and h̃t(d) ≡ ht ∩ d, so (9) boils down to Pr(ht ∩ d)/Pr(d), which is the Bayesian’s estimate of Pr(ht | d). Biases can arise only when the agent’s representations are limited, that is, when b < Mr for some r. When the hypotheses are exclusive (ht ∩ hr = ∅ for all t ≠ r), (9) can be written as

(9')  Pr^L(ht | d) = [Σ_{k=1,...,b} Pr(stk | ht ∩ d)] Pr(ht ∩ d) / Σ_{r=1,...,N} [Σ_{k=1,...,b} Pr(srk | hr ∩ d)] Pr(hr ∩ d),
where Pr(s | hr ∩ d) is the likelihood of scenario s for hr, or the probability of s when hr is true. The bracketed terms in (9') measure the share of a hypothesis’ total probability captured by its representation. Equation (9') says that if the representations of all hypotheses are equally likely (all bracketed terms are equal), the estimate is perfect, even if memory limitations are severe. Otherwise, biases may arise.

III.C. Discussion of the Setup and the Assumptions

In our model, the assessed probability of a hypothesis depends on (i) how the hypothesis itself affects its own representation and (ii) which hypotheses are examined in conjunction with it. The former feature follows from assumption A2, which posits that representativeness shapes the ease with which information about a hypothesis is retrieved from memory. KT (1972, p. 431) define representativeness as “a subjective judgment of the extent to which the event in question is similar in essential properties to its parent population or reflects the salient features of the process by which it is generated.” Indeed, KT (2002, p. 23) have a discussion of representativeness related to our model’s definition: “Representativeness tends to covary with frequency: common instances and frequent events are generally more representative than unusual instances and rare events,” but they add that “an attribute is representative of a class if it is very diagnostic; that is, the relative frequency of this attribute is much higher in that class than in a relevant reference class.” In other words, sometimes what is representative is not likely. As we show below, the use of representative

but unlikely scenarios for a hypothesis is what drives several of the KT biases.5 In our model, representative scenarios, or stereotypes, quickly pop to the mind of a decision maker, consistent with the idea— supported in cognitive psychology and neurobiology—that background information is a key input in the interpretation of external (e.g., sensory) stimuli.6 What prevents the local thinker from integrating all other scenarios consistent with the hypothesis, as a Bayesian would do, is assumption A1 of incomplete recall. This crucially implies that the assessment of a hypothesis depends on the hypotheses examined in conjunction with it, as the latter affect recall and thus the denominator in (9). In this respect, our model is related to Tversky and Koehler’s (1994) support theory, which postulates that different descriptions of the same event may trigger different assessments. Tversky and Koehler characterize such nonextensional probability axiomatically, without deriving it from limited recall and representativeness. The central role of hypotheses in priming which information is recalled is neither shared by existing models of imperfect memory (e.g., Mullainathan [2002], Wilson [2002]) nor by models of analogical thinking (Jehiel 2005) or categorization (e.g., Mullainathan [2000], Mullainathan, Schwartzstein, and Shleifer [2008]). In the latter models, it is data provision that prompts the choice of a category, inside which all hypotheses are evaluated.7 This formulation implies that categorical thinking cannot explain the conjunction and disjunction fallacies because inside the chosen category the agent uses a standard probability measure, so that events with larger (equal) extension will be judged more (equally) likely. Although in many situations categorical and local

5. This notion is in the spirit of Griffin and Tversky’s (1992) intuition that agents assess a hypothesis more in light of the strength of the evidence in its favour, a concept akin to our “representativeness,” than in light of such evidence’s weight, a concept akin to our “likelihood.” 6. In the model, background knowledge is summarized by the objective probability distribution π (x). This clearly need not be the case. Consistent with memory research, some elements x in X may get precedence in recall not because they are more frequent but because the agent has experienced them more intensely or because they are easier to recall. Considering these possibilities is an interesting extension of our model. 7. To give a concrete example, in the context of Section II a categorical Jewish voter observing a candidate drinking milk with a hot dog immediately categorizes him as unfamiliar with the voter’s concerns, and within that category the voter estimates the relative likelihood of qualified and unqualified candidates. The voter would make a mistake in assessing qualification, but only a small one when virtually all candidates were unfamiliar.

thinking lead to similar assessments, in situations related to KT anomalies they diverge.

To focus on the impact of hypotheses on the recall of stereotypes, we have taken the probability space (X, π) on which representations are created as given. However, the dimensions of X and thus the space of potential stereotypes may depend on the nature of the problem faced and the data received by the agent.8 We leave the analysis of this additional, potentially interesting source of framing effects in our setup to future research.

Our model is related to research on particular heuristics, including Barberis, Shleifer, and Vishny (1998), Rabin and Schrag (1999), Rabin (2002), and Schwartzstein (2009). In these papers, the agent has an incorrect model in mind and interprets the data in light of that model. Here, in contrast, the agent has the correct model, but not all parts of it come to mind. Our approach also shares some similarities with models of sampling. Stewart, Chater, and Brown (2006) study how agents form preferences over choices by sampling their past experiences; Osborne and Rubinstein (1998) study equilibrium determination in games where players sample the performance of different actions. These papers do not focus on judgment under uncertainty. More generally, our key innovation is to consider a model in which agents sample not randomly but based on representativeness, leading them to systematically oversample certain specific memories and undersample others.

IV. BIASES IN PROBABILISTIC ASSESSMENTS

IV.A. Magnitude of Biases

We measure a local thinker’s bias in assessing a generic hypothesis h1 against an alternative hypothesis h2 by deriving from expression (9') the odds ratio

(10)  Pr^L(h1 | d) / Pr^L(h2 | d) = [Σ_{k=1,...,b} Pr(s1k | h1 ∩ d) / Σ_{k=1,...,b} Pr(s2k | h2 ∩ d)] · Pr(h1 | d)/Pr(h2 | d),
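This decomposition of the odds ratio can be checked numerically. The sketch below is ours (illustrative, not from the article); it uses the Table I.C joint probabilities with h1 = unqualified, h2 = qualified, and b = 1, verifying that the ratio of the stereotypes' probabilities equals the scenario-likelihood ratio times the Bayesian odds.

```python
# Illustrative check (ours, not the paper's): the b = 1 decomposition of the
# local thinker's odds ratio, using the Table I.C numbers with
# h1 = unqualified (scenario: unfamiliar) and h2 = qualified (scenario: familiar).
p_q_fam, p_q_unf = 0.024, 0.43   # Pr(qualified, familiar), Pr(qualified, unfamiliar)
p_u_fam, p_u_unf = 0.026, 0.52   # Pr(unqualified, familiar), Pr(unqualified, unfamiliar)

# Direct odds: ratio of the two stereotypes' joint probabilities.
direct = p_u_unf / p_q_fam

# Decomposition: scenario-likelihood ratio times the Bayesian odds.
likelihood_ratio = (p_u_unf / (p_u_fam + p_u_unf)) / (p_q_fam / (p_q_fam + p_q_unf))
bayes_odds = (p_u_fam + p_u_unf) / (p_q_fam + p_q_unf)
decomposed = likelihood_ratio * bayes_odds

assert abs(direct - decomposed) < 1e-9
```

Here the likelihood ratio in brackets far exceeds one, so the odds of "unqualified" are grossly overestimated, matching the discussion of Tables I.A and I.C.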

8. As an example, Table I.A could be generated by the following thought process. In a first stage, the campaign renders the “qualification” dimension (the table’s rows) salient to the voter. Then the candidate’s statement about Jewish issues renders the familiarity dimension (the table’s columns) salient, perhaps because the statement is so informative about the candidate’s familiarity with Jewish concerns.

TABLE II
DISTRIBUTION OF CANDIDATE TYPES

Data d          Familiar    Unfamiliar
Qualified       π1          π2
Unqualified     π3          π4

where Pr(h1 | d)/Pr(h2 | d) is a Bayesian’s estimate of the odds of h1 relative to h2. The bracketed term captures the likelihood of the representation of h1 relative to h2. The odds of h1 are overestimated if and only if the representation of h1 is more likely than that of h2 (the bracketed term is greater than one). In a sense, a more likely representation induces the agent to oversample instances of the corresponding hypothesis, so that biases arise when one hypothesis is represented with relatively unlikely scenarios. When b = 1, expression (10) becomes

(11)  Pr^L(h1 | d) / Pr^L(h2 | d) = Pr(s11 ∩ h1 ∩ d) / Pr(s21 ∩ h2 ∩ d) = [Pr(s11 | h1 ∩ d) / Pr(s21 | h2 ∩ d)] · Pr(h1 | d)/Pr(h2 | d),

which highlights how representativeness and likelihood of scenarios shape biases. Overestimation of h1 is strongest when the representative scenario s11 for h1 is also the most likely one for h1, whereas the representative scenario s21 for h2 is the least likely one for h2. In this case, Pr(s11 | h1 ∩ d) is maximal and Pr(s21 | h2 ∩ d) is minimal, maximizing the bracketed term in (11). Conversely, underestimation of h1 is strongest when the representative scenario for h1 is the least likely but that for h2 is the most likely. This analysis illuminates the electoral campaign example of Section II. Consider the general distribution of candidate types after the local thinker receives data d (see Table II). We assume that, irrespective of the data provided, π1/(π1 + π3) > π2/(π2 + π4): being qualified is more likely among familiar than unfamiliar types, so familiarity with Jewish concerns is at least slightly informative about qualification. As in the examples of Section II, then, the representative scenario for h1 = unqualified is always s11 = unfamiliar, whereas that for h2 = qualified is always s21 = familiar. The voter represents h1 with (unqualified, unfamiliar) and h2 with (qualified, familiar), estimating that Pr^L(unqualified) = π4/(π4 + π1). The assessed odds ratio is thus equal


to π4/π1, which can be rewritten as

(12)   Pr^L(unqualified)/Pr^L(qualified) = π4/π1 = [ (π4/(π3 + π4)) / (π1/(π1 + π2)) ] · (π3 + π4)/(π1 + π2),

which is the counterpart of (11). The bracketed term is the ratio of the likelihoods of scenarios for low and high qualifications (Pr(unfamiliar | unqualified)/Pr(familiar | qualified)). In Table I.A, where d = calling Israel the aggressor, judgments are good because π2 and π3 are small, which means that representative scenarios are extremely likely. In the extreme case when π2 = π3 = 0, all probability mass is concentrated on stereotypical candidates, local thinking entails no informational loss, and there is no bias. In this case, stereotypes are not only representative but also perfectly informative for both hypotheses. In contrast, in Table I.C (where d = drinks milk with a hot dog), judgments are bad because π1 and π3 are small, whereas π2 and π4 are large. If, at the extreme, π1 is arbitrarily small, the overestimation factor in (12) becomes infinite! Now h2 = qualified is hugely underestimated precisely because its representative "familiar" scenario is very unlikely relative to the "unfamiliar" scenario for h1 = unqualified. The point is that in thinking about stereotypical candidates, for whom qualification is positively associated with familiarity, the local thinker views evidence against "familiarity" as strong evidence against qualification, even if Table I.C tells us that this inference is unwarranted. To see more generally how representativeness and likelihood determine the direction and strength of biases in our model, consider the following proposition, which, along with other results, is proved in Online Appendix 2 and is restricted to the class of hypotheses described in (7):

PROPOSITION 1. Suppose that the agent evaluates two hypotheses, h1 and h2, where the set of feasible scenarios for them is the same, namely S1 = S2 = S. We then have

1. Representation.
Scenarios rank in opposite order of representativeness for the two hypotheses; formally, s_1^k = s_2^(M−k+1) for k = 1, . . . , M, where M is the number of scenarios in S.

2. Assessment bias.

i. If π(x) is such that Pr(s_1^k | h1 ∩ d) and Pr(s_1^k | h2 ∩ d) strictly decrease in k (at least for some k), the representativeness and likelihood of scenarios are positively related for h1, and negatively related for h2. The agent


thus overestimates the odds of h1 relative to h2 for every b < M. One can find a π(x) for which such overestimation is arbitrarily large. The opposite is true if Pr(s_1^k | h1 ∩ d) and Pr(s_1^k | h2 ∩ d) strictly increase in k.

ii. If π(x) is such that Pr(s_1^k | h1 ∩ d) decreases and Pr(s_1^k | h2 ∩ d) increases in k, the representativeness and likelihood of scenarios are positively related for both hypotheses. The agent over- or underestimates the odds of h1 relative to h2 at most by a factor of M/b.

Proposition 1 breaks down the roles of assumption A2 and of the probability distribution π(x) in generating biases.9 With respect to representations, A2 implies that when considering two exhaustive hypotheses, the most representative scenarios for h2 are the least representative ones for h1 and vice versa. This property (which does not automatically hold in the case of three or more hypotheses) formally highlights a key aspect of representativeness in A2, namely that stereotypes are selected to maximize the contrast between the representations of different hypotheses. Intuitively, the stereotype of a qualified candidate is very different from that of an unqualified one even when most qualified and unqualified candidates share a key characteristic (e.g., unfamiliarity). What does this property of representations imply for biases? Part 2.i says that this reliance on different stereotypes causes pervasive biases when the most likely scenario is the same under both hypotheses. In this case, the use of a highly likely scenario for one hypothesis precludes its use for the competing hypothesis, leading to overestimation of the former. The resulting bias can be huge, as in Table I.C, and even infinite in the extreme. In contrast, part 2.ii captures the case where the representativeness and likelihood of scenarios go hand in hand for both hypotheses.
Biases are now limited (but possibly still large) and the largest estimation bias occurs when the likelihood of one hypothesis is fully concentrated on one scenario, whereas the likelihood of the competing hypothesis is spread equally among its M scenarios. This implies that hypotheses whose distribution is spread out over a larger number of scenarios are more likely to be underestimated, the more so the more local is the agent’s thinking (i.e., the smaller is b). 9. The proof of Proposition 1 provides detailed conditions on classes of problems where S1 = S2 = S holds.
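The decomposition in (12) is easy to check numerically. The sketch below uses hypothetical type probabilities π1 through π4 (assumed values, chosen in the spirit of Table I.C, where π1 and π3 are small) and verifies that the local thinker's assessed odds equal the bracketed likelihood ratio times the Bayesian odds:

```python
# Hypothetical joint probabilities of the four candidate types (assumed values):
# pi1 = (qualified, familiar),   pi2 = (qualified, unfamiliar)
# pi3 = (unqualified, familiar), pi4 = (unqualified, unfamiliar)
pi1, pi2, pi3, pi4 = 0.04, 0.45, 0.01, 0.50

# Representativeness condition assumed in the text: qualification is more
# likely among familiar than unfamiliar types.
assert pi1 / (pi1 + pi3) > pi2 / (pi2 + pi4)

# Local thinker (b = 1): "unqualified" is represented by (unqualified, unfamiliar)
# and "qualified" by (qualified, familiar), so the assessed odds are pi4/pi1.
local_odds = pi4 / pi1

# Bayesian odds of unqualified versus qualified.
bayes_odds = (pi3 + pi4) / (pi1 + pi2)

# Bracketed term in (12): Pr(unfamiliar | unqualified) / Pr(familiar | qualified).
bracket = (pi4 / (pi3 + pi4)) / (pi1 / (pi1 + pi2))

# Decomposition (12): local odds = bracket * Bayesian odds.
assert abs(local_odds - bracket * bayes_odds) < 1e-9

print(local_odds, bayes_odds)  # roughly 12.5 versus roughly 1.04: a large overestimation
```

With these assumed numbers the local thinker overestimates the odds of "unqualified" by roughly a factor of twelve, exactly the bracketed term in (12).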


IV.B. Data Provision
Local thinkers' biases described in Proposition 1 do not rely in any fundamental way on data provision. However, looking more closely at the role of data in our model is useful for at least two reasons. First, as we show in Section V, the role of data helps illuminate some of the psychological experiments. Second, interesting real-world implications of our setup naturally concern agents' reactions to new information. To fix ideas, note that for a Bayesian, provision of data d is informative about h1 versus h2 if and only if it affects the odds ratio between them (i.e., if Pr(h1 ∩ d)/Pr(h2 ∩ d) ≠ Pr(h1)/Pr(h2)). To see how a local thinker reacts to data, denote by s_i^1 the representative scenario for hypothesis hi (i = 1, 2) if no data are provided, and by s_{i,d}^1 the representative scenario for hi (i = 1, 2) when d ⊂ X is provided. This notation is useful because the role of data in expression (11) depends on whether d affects the agent's representation of the hypotheses. We cannot say a priori whether data provision enhances or dampens bias, but inspection of how expression (11) changes with data provision reveals that its overall effect combines two basic effects:

PROPOSITION 2. Suppose that b = 1 and the agent is given data d ⊂ X. If d is such that s_i^1 ∩ d ≠ ∅ for all i, then stereotypes and assessments do not change. In this case, the agent underreacts to d when d is informative. If, in contrast, d is such that s_i^1 ∩ d = ∅ for some i, then the stereotype for the corresponding hypothesis must change. In this case, the agent may overreact to uninformative d.

In the first case, stereotypes do not change with d (i.e., s_i^1 ∩ hi = s_{i,d}^1 ∩ hi ∩ d for all i), and so data provision affects neither the representation of hypotheses nor—according to (11)—probabilistic assessments.
If the data are informative, this effect captures the local thinker’s underreaction because—unlike the Bayesian—the local thinker does not revise his assessment after observing d. In the second case, the representations of one or both hypotheses must change with d. This change can generate overreaction by inducing the agent to revise his assessment even when a Bayesian would not do so. This effect increases overestimation of h1 if the new representation of h1 triggered by d is relatively more likely than that of h2 (if the bracketed term in (11)


rises). We refer to this effect as data "destroying" the stereotype of the hypothesis whose representation becomes relatively less likely.

IV.C. Conjunction Fallacy
The conjunction fallacy refers to the failure of experimental subjects to follow the rule that the probability of a conjoined event C&D cannot exceed the probability of event C or event D by itself. For simplicity, we study the conjunction fallacy only when b = 1 and when the agent is provided no data, but the fundamental logic of the conjunction fallacy does not rely on these assumptions. We consider the class of problems in (7), but in Online Appendix 2 we prove that Proposition 3 also holds for general classes of hypotheses. We focus on the so-called "direct tests," namely those in which the agent is asked to assess the probability of a conjoined event h1 ∩ h2 and of one of its constituent events, such as h1, simultaneously. Denote by s_{1,2}^1 the scenario used to represent the conjunction h1 ∩ h2 and by s_1^1 the scenario used to represent the constituent event h1. In this case, the conjunction fallacy obtains in our model if and only if

(13)   Pr(s_{1,2}^1 ∩ h1 ∩ h2) ≥ Pr(s_1^1 ∩ h1),

that is, when the probability of the represented conjunction is higher than the probability of the represented constituent event h1. Expression (13) is a direct consequence of (9), as in this direct test the denominators are identical and cancel out. The conjunction fallacy then arises only under the following necessary condition:

PROPOSITION 3. When b = 1, in a direct test of hypotheses h1 and h1 ∩ h2, Pr^L(h1 ∩ h2) ≥ Pr^L(h1) only if scenario s_1^1 is not the most likely one for h1.

The conjunction fallacy arises only if the constituent event h1 prompts the use of an unlikely scenario and thus of an unlikely stereotype. To see why, rewrite (13) as

(14)   Pr(s_{1,2}^1 ∩ h2 | h1) ≥ Pr(s_1^1 | h1).

The conjunction rule is violated when scenario s_1^1 is less likely than s_{1,2}^1 ∩ h2 for hypothesis h1. Note, though, that s_{1,2}^1 ∩ h2 is itself a scenario for h1 because s_{1,2}^1 ∩ h2 ∩ h1 identifies an element of X. Condition (14) therefore only holds if the representative


scenario s_1^1 is not the most likely scenario for h1, which proves Proposition 3.10

IV.D. Disjunction Fallacy
According to the disjunction rule, the probability attached to an event A should be equal to the total probability of all events whose union is equal to A. As we discuss in Section V.C, however, experimental evidence shows that subjects often underestimate the probability of residual hypotheses such as "other" relative to their unpacked version. To see under what conditions local thinking can account for this fallacy, compare the assessment of hypothesis h1 with the assessment of hypothesis "h1,1 or h1,2," where h1,1 ∪ h1,2 = h1 (and obviously h1,1 ∩ h1,2 = ∅), by an agent with b = 1. It is easy to extend the result to the case where b > 1. Formally, we compare Pr^L(h1) when h1 is tested against ¬h1 with Pr^L(h1,1) + Pr^L(h1,2) obtained when the hypothesis "h1,1 or h1,2" is tested against its complement ¬h1. The agent then attributes a higher probability to the unpacked version of the hypothesis, thus violating the disjunction rule, provided that Pr^L(h1,1) + Pr^L(h1,2) > Pr^L(h1).

Define s_1^1, s_{1,1}^1, s_{1,2}^1, and s_0^1 to be the representative scenarios for hypotheses h1, h1,1, h1,2, and ¬h1, respectively. Equation (9) then implies that h1 is underestimated when

(15)   [Pr(s_{1,1}^1 ∩ h1,1) + Pr(s_{1,2}^1 ∩ h1,2)] / [Pr(s_{1,1}^1 ∩ h1,1) + Pr(s_{1,2}^1 ∩ h1,2) + Pr(s_0^1 ∩ ¬h1)] > Pr(s_1^1 ∩ h1) / [Pr(s_1^1 ∩ h1) + Pr(s_0^1 ∩ ¬h1)].

Equation (15) immediately boils down to

(15′)   Pr(s_{1,1}^1 ∩ h1,1) + Pr(s_{1,2}^1 ∩ h1,2) > Pr(s_1^1 ∩ h1),

meaning that the probability of the representation s_1^1 ∩ h1 of h1 is smaller than the sum of the probabilities of the representations

10.
Proposition 3 implies that, if a hypothesis h1 is not represented with the most likely scenario, one can induce the conjunction fallacy by testing h1 against the conjoined hypothesis h_1^* = s_1^* ∩ h1, where s_1^* is the most likely scenario for h1 and h_1^* ⊂ h1 is the element obtained by fitting such most likely scenario into hypothesis h1 itself. By construction, in this case Pr^L(h_1^*) ≥ Pr^L(h1), so that the conjunction rule is violated.


s_{1,1}^1 ∩ h1,1 and s_{1,2}^1 ∩ h1,2 of h1,1 and h1,2, respectively. Online Appendix 2 proves that this occurs if the following condition holds:

PROPOSITION 4. Suppose that b = 1. In one test, hypothesis h1 is tested against a set of alternatives. In another test, the hypothesis "h1,1 or h1,2" is tested against the same set of alternatives as h1. Then, if s_1^1 is a feasible scenario for h1,1, h1,2, or both, it follows that Pr^L(h1,1) + Pr^L(h1,2) > Pr^L(h1).

Local thinking leads to underestimation of implicit disjunctions. Intuitively, unpacking a hypothesis h1 into its constituent events reminds the local thinker of elements of h1 that he would otherwise fail to integrate into his representation. The sufficient condition for this to occur (that s_1^1 must be a feasible scenario in the explicit disjunction) is very weak. For example, it is always fulfilled when the representation of the implicit disjunction s_1^1 ∩ h1 is contained in a subresidual category of the explicit disjunction, such as "other."

V. LOCAL THINKING AND HEURISTICS AND BIASES, WITH SPECIAL REFERENCE TO LINDA We now show how our model can rationalize some of the biases in probabilistic assessments. We cannot rationalize all of the experimental evidence, but rather we show that our model provides a unified account of several findings. At the end of the section, we discuss the experimental evidence that our model cannot directly explain. We perform our analysis in a flexible setup based on KT’s (1983) famous Linda experiment. Subjects are given a description of a young woman, called Linda, who is a stereotypical leftist and in particular was a college activist. They are then asked to check off in order of likelihood the various possibilities of what Linda is today. Subjects estimate that Linda is more likely to be “a bank teller and a feminist” than merely “a bank teller,” exhibiting the conjunction fallacy. We take advantage of this setup to show how a local thinker displays a variety of biases, including base rate neglect and the conjunction fallacy.11 In Section V.D, we examine the disjunction fallacy experiments. 11. In perhaps the most famous base-rate neglect experiment, KT (1974) gave subjects a personality description of a stereotypical engineer, and told them that he came from a group of 100 engineers and lawyers, and the share of engineers in the


TABLE III
PROBABILITY DISTRIBUTION OF ALL POSSIBLE TYPES

A. College activists (A)

        F               NF
BT      (2/3)(τ/4)      (1/3)(τ/4)
SW      (9/10)(2σ/3)    (1/10)(2σ/3)

B. College nonactivists (NA)

        F               NF
BT      (1/5)(3τ/4)     (4/5)(3τ/4)
SW      (1/2)(σ/3)      (1/2)(σ/3)
Suppose that individuals can have one of two possible backgrounds, college activist (A) or nonactivist (NA), be in one of two occupations, bank teller (BT) or social worker (SW), and hold one of two current beliefs, feminist (F) or nonfeminist (NF). The probability distribution of all possible types is described in Table III, Panels A and B. Table III, Panel A, reports the frequency of activist (A) types; Table III, Panel B, the frequency of nonactivist (NA) types. (This full distribution of types is useful to study the effects of providing data d = A.) τ and σ are the base probabilities of a bank teller and a social worker in the population, respectively: Pr(BT) = τ, Pr(SW) = σ. Table III builds in two main features. First, the majority of college activists are feminists, whereas the majority of nonactivists are nonfeminists, irrespective of their occupations (Pr(X, F, A) ≥ Pr(X, NF, A) and Pr(X, F, NA) ≤ Pr(X, NF, NA) for X = BT, SW). Second, social workers are relatively more feminist than bank tellers, irrespective of their college background (e.g., among activists, nine out of ten social workers are feminists, whereas only two out of three bank tellers are; among nonactivists, half of social workers are feminists, whereas only one out of five bank tellers is). Suppose that a local thinker with b = 1 is told that Linda is a former activist, d = A, and asked to assess probabilities that Linda

group. In assessing the odds that this person was an engineer or a lawyer, subjects mainly focused on the personality description, barely taking the base rates of the engineers in the group into account. The parallel between this experiment and the original Linda experiment is sufficiently clear to allow us to analyze base-rate neglect and the conjunction fallacy in the same setting.


is a bank teller (BT), a social worker (SW), or a feminist bank teller (BT, F). What comes to his mind? Because social workers are relatively more feminist than bank tellers, the agent represents a bank teller with a “nonfeminist” scenario and a social worker with a “feminist” scenario. Indeed, Pr(BT | A, NF) = (τ/12)/[(τ/12) + (2σ/30)] > Pr(BT | A, F) = (2τ/12)/[(2τ/12) + (9σ/15)], and Pr(SW | A, NF) < Pr(SW | A, F). Thus, after the data that Linda was an activist are provided, “bank teller” is represented by (BT, A, NF), and “social worker” by (SW, A, F). The hypothesis of “bank teller and feminist” is correctly represented by (BT, A, F) because it leaves no gaps to be filled. Using equation (11), we can then compute the local thinker’s odds ratios for various hypotheses, which provide a parsimonious way to study judgment biases. V.A. Neglect of Base Rates Consider the odds ratio between the local thinker’s assessment of “bank teller” and “social worker.” In the represented state space, this is equal to

(16)   Pr^L(BT | A)/Pr^L(SW | A) = Pr(BT, A, NF)/Pr(SW, A, F) = [(1/3)/(9/10)] · (3/8)(τ/σ).

As in (11), the rightmost term in (16) is the Bayesian odds ratio, whereas the bracketed term is the ratio of the two representations’ likelihoods. The bracketed term is smaller than one, implying not only that the local thinker underestimates the odds of Linda being a bank teller, but also that he neglects some of the information contained in the population odds of a bank teller, τ /σ . The local thinker underweights the base rate by a factor of (1/3)/(9/10) = 10/27 relative to a Bayesian. Neglect of base rates arises here because the local thinker represents the bank teller as a nonfeminist, a low-probability scenario given the data d = A. With this representation, he forgets that many formerly activist bank tellers are also feminists, which is base-rate neglect. The use of an unlikely scenario for “bank teller” renders biases more severe, but it is not necessary for baserate neglect, which is rather a natural consequence of the local thinker’s use of limited, stereotypical information and can also arise when both hypotheses are represented by the most likely scenario.
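The 10/27 underweighting factor can be checked directly from the Table III entries; in the minimal sketch below, τ = σ = 0.5 is an assumed illustrative choice (so that the eight type probabilities sum to one), and the factor itself does not depend on it:

```python
# Joint probabilities of Linda's possible types, from Table III.
# tau and sigma are illustrative (assumed here so that tau + sigma = 1);
# the final 10/27 factor does not depend on them.
tau, sigma = 0.5, 0.5
p = {
    ('BT', 'A',  'F'): (2/3) * (tau/4),      ('BT', 'A',  'NF'): (1/3) * (tau/4),
    ('SW', 'A',  'F'): (9/10) * (2*sigma/3), ('SW', 'A',  'NF'): (1/10) * (2*sigma/3),
    ('BT', 'NA', 'F'): (1/5) * (3*tau/4),    ('BT', 'NA', 'NF'): (4/5) * (3*tau/4),
    ('SW', 'NA', 'F'): (1/2) * (sigma/3),    ('SW', 'NA', 'NF'): (1/2) * (sigma/3),
}
assert abs(sum(p.values()) - 1) < 1e-12

# Given d = A, the local thinker represents "bank teller" by (BT, A, NF)
# and "social worker" by (SW, A, F); his assessed odds are the ratio of
# the two representations' probabilities, as in (16).
local_odds = p[('BT', 'A', 'NF')] / p[('SW', 'A', 'F')]

# Bayesian odds of BT versus SW given A.
bayes_odds = ((p[('BT', 'A', 'F')] + p[('BT', 'A', 'NF')])
              / (p[('SW', 'A', 'F')] + p[('SW', 'A', 'NF')]))

print(local_odds / bayes_odds)  # roughly 0.37, i.e., the 10/27 underweighting factor
```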


V.B. Conjunction Fallacy
Consider now the local thinker's odds ratio between "bank teller" and "bank teller and feminist." Using parameter values in Table III, Panel A, this is equal to

(17)   Pr^L(BT | A)/Pr^L(BT, F | A) = Pr(BT, A, NF)/Pr(BT, A, F) = [(1/3)/1] · (3/2) = 1/2 < 1.
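A quick numerical check of this odds ratio from the Table III, Panel A, entries (τ is set to an arbitrary assumed value, since it cancels from the ratio):

```python
# Joint probabilities for former-activist bank tellers, from Table III, Panel A.
tau = 0.5  # illustrative (assumed); tau cancels from the odds below
p_bt_a_nf = (1/3) * (tau/4)  # bank teller, activist, nonfeminist
p_bt_a_f  = (2/3) * (tau/4)  # bank teller, activist, feminist

# The local thinker represents "bank teller" by (BT, A, NF) and
# "bank teller and feminist" by (BT, A, F), so his assessed odds are:
odds = p_bt_a_nf / p_bt_a_f

print(odds)  # 0.5: the conjunction is judged twice as likely as its constituent event
```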

The conjunction rule is violated because the local thinker represents the constituent event “bank teller” with a scenario, “nonfeminist,” which is unlikely given that Linda is a former activist. Why does the agent fail to realize that among former activists many bank tellers are feminists? Our answer is that the term “bank teller” brings to mind a representation that excludes feminist bank tellers, because “feminist” is a characteristic disproportionately associated with social workers, which does not match the image of a stereotypical bank teller. One alternative explanation of the conjunction fallacy discussed in KT (1983) holds that the subjects substitute the target assessment of Pr(d | h) for that of Pr(h | d).12 In our Linda example, this error can indeed yield the conjunction fallacy because Pr(A | BT) = 1/4 < Pr(A | F, BT) = 10/19. Intuitively, being feminist (on top of being a bank teller) can increase the chance of being Linda. KT (1983) addressed this possibility in some experiments. In one of them, subjects were told that the tennis player Bjorn Borg had reached the Wimbledon final and then asked to assess whether it was more likely that in the final Borg would lose the first set or whether he would lose the first set but win the match. Most subjects violated the conjunction rule by stating that the second outcome was more likely than the first. As we show in Online Appendix 3 using a model calibrated with actual data, our approach can explain this evidence, but a mechanical assessment of Pr(d | h) cannot. The reason, as KT point out, is that Pr(Borg has reached the final | Borg’s score in the final) is always equal to one, regardless of the final score. Most important, the conjunction fallacy explanation based on the substitution of Pr(d | h) for Pr(h | d) relies on the provision 12. In a personal communication, Xavier Gabaix proposed a “local prime” model complementary to our local thinking model. 
Such a model exploits the above intuition about the conjunction fallacy. Specifically, in the local prime model,

an agent assessing h1, . . . , hn evaluates Pr^L(hi | d) = Pr(d | hi)/[Pr(d | h1) + · · · + Pr(d | hn)].

WHAT COMES TO MIND

1423

of data d. This story cannot thus explain the conjunction rule violations that occur in the absence of data provision. To see how our model can account for those, consider another experiment from KT (1983). Subjects are asked to compare the likelihoods of “A massive flood somewhere in North America in which more than 1,000 people drown” with that of “An earthquake in California causing a flood in which more than 1,000 people drown.” Most subjects find the latter event, which is a special case of the former, to be nonetheless more likely. We discuss this example formally in Online Appendix 3, but the intuition is straightforward. When earthquakes are not mentioned, massive floods are represented by an unlikely scenario of disastrous storms, as storms are a stereotypical cause of floods. In contrast, when earthquakes in California are explicitly mentioned, the local thinker realizes that these can cause much more disastrous floods, changes his representation, and attaches a higher probability to the outcome because earthquakes in California are quite common. This example vividly illustrates the key point that it is the hypothesis itself, rather than the data, that frames both the representation and the assessment. The general idea behind these types of conjunction fallacy is that either the data (Linda is a former activist) or the question itself (floods in North America) brings to mind a representative but unlikely scenario. This general principle can help explain other conjunction rule violations. For example, Kahneman and Frederick (2005) report that subjects estimate the annual number of murders in the state of Michigan to be smaller than that in the city of Detroit, which is in Michigan. Our model suggests that this might be explained by the fact that the stereotypical location in Michigan is rural and nonviolent, so subjects forget that the more violent city of Detroit is in the state of Michigan as well. V.C. 
The Role of Data and Insensitivity to Predictability Although base-rate neglect and the conjunction fallacy do not rely on data provision, the previous results illustrate the effects of data in our model. Suppose that a local thinker assesses the probabilities of bank teller, social worker, and feminist bank teller before being given any data. From Table III, Panel B, “social worker” is still represented by (SW, A, F) and “bank teller and feminist” by (BT, A, F). Crucially, however, “bank teller” is now represented by (BT, NA, NF). This is the only representation that changes after


d = A is provided. Before data are provided, then, we have

(18)   Pr^L(BT)/Pr^L(SW) = Pr(BT, NA, NF)/Pr(SW, A, F) = [(2/3)(6τ/8)]/[(9/10)(2σ/3)] = 5τ/6σ,

(19)   Pr^L(BT)/Pr^L(BT, F) = Pr(BT, NA, NF)/Pr(BT, A, F) = [(3/5)/(10/19)] · (60/19) = 18/5 > 1.

Biases now are either small or outright absent. Expression (18) gives an almost correct unconditional probability assessment for the population odds ratio of τ /σ . In expression (19), not only does the conjunction rule hold, but also the odds of “bank teller” are overestimated. So what happens when data are provided? As in Proposition 2, this is a case where data provision “destroys” the stereotype of only one of the hypotheses, “bank teller.” Before Linda’s college career is described, a bank teller is “nonactivist, nonfeminist.” This stereotype is very likely. However, after d = A is provided, the representation of “bank teller” becomes an unlikely one, because even for bank tellers it is extremely unlikely to have become “nonfeminist” after having been “activist.” The probability of Linda being a bank teller is thus underestimated, generating both severe base-rate neglect and the conjunction fallacy. This analysis illustrates the role of data not only in the Linda setup but also in the electoral campaign example. In both cases, the agent is given data (d = A or d = drink milk with hot dog) that is very informative about an attribute defining stereotypes (political orientation or familiarity). By changing the likelihood of the stereotype, such data induce drastic updating, even when the data themselves are scarcely informative about the target assessment (occupation or qualification). This overreaction to scarcely informative data provides a rationalization for the “insensitivity to predictability” displayed by experimental subjects. We formally show this point in Online Appendix 3 based on a famous KT experiment on practice talks. In sum, a local thinker’s use of stereotypes provides a unified explanation for several KT biases. To account for other biases, we need to move beyond the logic of representativeness as defined here. 
For instance, our model cannot directly reproduce the Casscells, Schoenberger, and Graboys (1978) evidence on physicians' interpretations of clinical tests or the blue versus green cab experiment (KT 1982). KT themselves (1982, p. 154) explain why these biases cannot be directly attributed to representativeness.


We do not exclude the possibility that these biases are a product of local thinking, but progress in understanding different recall processes is needed to establish the connection. V.D. Disjunction and Car Mechanics Revisited Fischhoff, Slovic, and Lichtenstein (1978) document the violation of the disjunction rule experimentally. They asked car mechanics, as well as lay people, to estimate the probabilities of different causes of a car’s failure to start. They document that on average the probability assigned to the residual hypothesis—“The cause of failure is something other than the battery, fuel system, or the engine”—went up from .22 to .44 when that hypothesis was broken up into more specific causes (e.g., the starting system, the ignition system). Respondents, including experienced car mechanics, discounted hypotheses that were not explicitly mentioned. The underestimation of implicit disjunctions has been documented in many other experiments and is the key assumption behind Tversky and Koehler’s (1994) support theory. Proposition 4 allows us to consider the following model of the car mechanic experiment. There is only one dimension, the cause of a car’s failure to start (i.e., K = 1), so that X ≡ {battery, fuel, ignition}, where fuel stands for “fuel system” and ignition stands for “ignition system.” Assume without loss of generality that Pr(battery) > Pr(fuel) > Pr(ignition) > 0. This case meets the conditions of Proposition 4 because now no dimension is left free, so all hypotheses share the same scenario s = X. The agent is initially asked to assess the likelihood that the car’s failure to start is not due to battery troubles. That is, he is asked to assess the hypotheses h1 = {fuel, ignition}, h2 = {battery}. Because K = 1, there are no scenarios to fit. 
Yet, because the implicit disjunction h1 = {fuel, ignition} does not pin down an exact value for the cause of the car's failure to start, by criterion (8′) in the Appendix the agent represents it by selecting its most likely element, which is fuel. When hypotheses share no scenarios, the local thinker picks the most likely element within each hypothesis. He then attaches the probability

(20)   Pr^L(h1) = Pr(fuel)/[Pr(fuel) + Pr(battery)]

to the cause of the car’s failure to start being other than battery when this hypothesis is formulated as an implicit disjunction.


Now suppose that the implicit disjunction h1 is broken up into its constituent elements, h1,1 = fuel and h1,2 = ignition (e.g., the individual is asked to assess the likelihoods that the car's failure to start is due to ignition troubles or to fuel system troubles separately). Clearly, the local thinker represents h1,1 by fuel and h1,2 by ignition. As before, he represents the other hypothesis h2 by battery. The local thinker now attaches greater probability to the car's failure to start being other than the battery because

(21)   Pr^L(ignition) + Pr^L(fuel) = [Pr(ignition) + Pr(fuel)]/[Pr(ignition) + Pr(fuel) + Pr(battery)] > Pr(fuel)/[Pr(fuel) + Pr(battery)] = Pr^L(h1).
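This comparison can be checked with hypothetical cause probabilities; only the ordering Pr(battery) > Pr(fuel) > Pr(ignition) comes from the text, and the numbers themselves are assumptions:

```python
# Hypothetical cause probabilities (assumed values; only their ordering is from the text).
pr = {'battery': 0.5, 'fuel': 0.3, 'ignition': 0.2}
assert pr['battery'] > pr['fuel'] > pr['ignition'] > 0

# Implicit disjunction "not the battery": represented by its most likely
# element, fuel, as in (20).
packed = pr['fuel'] / (pr['fuel'] + pr['battery'])

# Explicit disjunction "fuel or ignition": both causes come to mind, as in (21).
unpacked = ((pr['fuel'] + pr['ignition'])
            / (pr['fuel'] + pr['ignition'] + pr['battery']))

print(round(packed, 3), unpacked)  # 0.375 0.5: unpacking raises the assessed probability
```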

In other words, we can account for the observed disjunction fallacy. The logic is the same as that of Proposition 4: the representation of the explicit disjunction adds to the representation of the implicit disjunction (x = fuel) an additional element (x = ignition), which boosts the assessed probability of the explicit disjunction. VI. AN APPLICATION TO DEMAND FOR INSURANCE Buying insurance is supposed to be one of the most compelling manifestations of economic rationality, in which risk-averse individuals hedge their risks. Yet both experimental and field evidence, summarized by Cutler and Zeckhauser (2004) and Kunreuther and Pauly (2005), reveal some striking anomalies in individual demand for insurance. Most famously, individuals vastly overpay for insurance against narrow low-probability risks, such as those of airplanes crashing or appliances breaking. They do so especially after the risk is brought to their attention, but not when risks remain unmentioned. In a similar vein, people prefer insurance policies with low deductibles, even when the incremental cost of insuring small losses is very high (Johnson et al. 1993; Sydnor 2009). Meanwhile, Johnson et al. (1993) present experimental evidence that individuals are willing to pay more for insurance policies that specify in detail the events being insured against than they do for policies insuring “all causes.” Our model, particularly the analysis of the disjunction fallacy, may shed light on this evidence. Suppose that an agent with a concave utility function u(.) faces a random wealth stream due to


probabilistic realizations of various accidents. For simplicity, we assume that at most one accident occurs. There are three contingencies, s = 0, 1, 2, each occurring with ex ante probability πs. Contingency s = 0 is the status quo, or no-loss, contingency; in this state, the agent's wealth is at its baseline level w0. Contingencies 1 and 2 correspond to the realizations of distinct accidents, which entail wealth levels ws < w0 for s = 1, 2. A contingency s = 1, 2 then represents the income loss caused by a car accident, a specific reason for hospitalization, or a plane crash caused by a terrorist attack. We assume that π0 > max(π1, π2), so that the status quo is the most likely event.

We first show that a local thinker in this framework exhibits behavior consistent with Johnson et al.'s (1993) experiments. The authors find, for example, that in plane crash insurance, subjects are willing to pay more in total for insurance against a crash caused by “any act of terrorism” plus insurance against a crash caused by “any non-terrorism-related mechanical failure” than for insurance against a crash for “any reason” (p. 39). Likewise, subjects are willing to pay more in total for insurance policies paying for hospitalization costs in the events of “any disease” and “any accident” than for a policy that pays those costs in the event of hospitalization for “any reason” (p. 40).

As a starting point, note that the maximum amount P that a rational thinker is willing to pay to insure his status quo income against “any risks” is given by

(22)    u[w0 − P] = E[u(w)].

The rational thinker would pay the same amount P for insurance against any risk as for insurance against either s = 1 or s = 2 occurring, because he keeps all the outcomes in mind.

Suppose now that a local thinker of order one (b = 1) considers the maximum price he is willing to pay to insure against any risk (i.e., against the event s ≠ 0). For a local thinker, only one (representative) risk comes to mind. Suppose without loss of generality that π1 > π2. Then, just as in the car mechanic example, only the more likely event s = 1 comes to the local thinker's mind. As a consequence, he is willing to pay up to P^L for coverage against any risk, defined by

(23)    u[w0 − P^L] = E[u(w) | s = 0, 1].


The local thinker's maximum willingness to pay directly derives from his certainty-equivalent wealth conditional on the state belonging to the event “s = 0 or 1.” If, in contrast, the local thinker is explicitly asked to state his maximum willingness to pay for insuring against either s = 1 or s = 2 occurring, then both events come to mind and his maximum price is identical to the rational thinker's price P. Putting these observations together, it is easy to show that the local thinker is willing to pay more for the unpacked coverage whenever

(24)    u(w2) ≤ E[u(w) | s = 0, 1].

That is, condition (24), and thus the Johnson et al. (1993) experimental finding, is satisfied when, as in the experiments, the two events entail identical losses, so that w1 = w2 (a plane crash due to one of two possible causes). In this case, insurance against s = 2 is valuable, and therefore the local thinker is willing to pay less for coverage against “any accident” than when all the accidents are listed, because in the former case he does not recall s = 2. This partial representation of accidents leads the agent to underestimate his demand for insurance relative to the case in which all accidents are spelled out.

The same logic illuminates overinsurance against specific risks, such as a broken appliance or small property damage, as documented by Cutler and Zeckhauser (2004) and Sydnor (2009). A local thinker would in fact pay more for insurance against a specific risk than a rational thinker. Consider again insurance against the wealth loss in state s = 1. A rational thinker's reservation price P1 for insuring against s = 1 is given by

(25)    (π0 + π1) u[w0 − P1] + π2 u[w2 − P1] = E[u(w)].
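The three reservation prices can be checked numerically. The sketch below is purely illustrative: it assumes log utility and made-up parameters (w0 = 100, w1 = w2 = 60, π = (0.90, 0.07, 0.03)), none of which come from the paper, solves (22), (23), and (25), and confirms the two orderings discussed in this section: the local thinker underpays for broad coverage (P^L < P) and overpays for narrow coverage (P^L > P1).

```python
# Illustrative check of equations (22), (23), and (25) under log utility.
# All parameter values are hypothetical, not taken from the paper.
from math import exp, log

pi = (0.90, 0.07, 0.03)          # probabilities of s = 0, 1, 2
w = (100.0, 60.0, 60.0)          # wealth in each contingency (w1 = w2)

Eu = sum(p * log(ws) for p, ws in zip(pi, w))            # E[u(w)]

# (22): rational price for "any risk" coverage.
P = w[0] - exp(Eu)

# (23): local thinker's price; only s = 1 comes to mind (pi1 > pi2).
Eu_local = (pi[0] * log(w[0]) + pi[1] * log(w[1])) / (pi[0] + pi[1])
P_L = w[0] - exp(Eu_local)

# (25): rational price P1 for insuring s = 1 only, solved by bisection.
def bisect(f, lo, hi, n=200):
    # assumes f(lo) > 0 > f(hi); f is monotone on [lo, hi]
    for _ in range(n):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return (lo + hi) / 2

g = lambda p1: (pi[0] + pi[1]) * log(w[0] - p1) + pi[2] * log(w[2] - p1) - Eu
P_1 = bisect(g, 0.0, 10.0)
```

With these numbers P ≈ 4.98, P^L ≈ 3.62, and P1 ≈ 3.44, so P > P^L > P1, matching both anomalies at once.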

Consider now a local thinker. When prompted to insure against s = 1, the local thinker perfectly represents this state; at the same time, because π0 > π2, he represents the residual event with the no-accident state s = 0. A useful (but not important) consequence in this example is that the local thinker's reservation price turns out to be given by the same condition (23) as his price for insurance against any risk. It follows immediately that P^L > P1: the local thinker is willing to pay more for insurance against a specific risk than the rational thinker. Intuitively, with narrow accidents, the no-accident event becomes the residual category. The disjunction fallacy implies that the local thinker underestimates the total probability of the residual category, which covers states in which such narrow insurance is not valuable. As a consequence, the local thinker pays more for narrow insurance than a rational agent would.

This logic also illustrates the observation of Cutler and Zeckhauser (2004) and Kunreuther and Pauly (2005) that individuals do not insure low-probability risks, such as terrorism or earthquakes, under ordinary circumstances, but buy excessive amounts of such insurance immediately after an accident (or some other reminder) brings the risks to their attention. In our model, low-probability or otherwise nonsalient events are the least likely to be insured against because they are not representative, and hence do not come to mind. Unless explicitly prompted, a local thinker considers either the status quo or high-probability accidents that come to mind. Once an unlikely event occurs, however, or is explicitly brought to the local thinker's attention, it becomes part of the representation of risky outcomes and is overinsured.

Local thinking can thus provide a unified explanation of two anomalous aspects of demand for insurance: overinsurance against narrow and well-defined risks, and underinsurance against broad or vaguely defined risks. The model might also help explain other insurance anomalies, such as the demand for life insurance, rather than for annuities, by the elderly parents of well-off children (Cutler and Zeckhauser 2004). We leave a discussion of these issues to future work.

VII. CONCLUSIONS

We have presented a simple model of intuitive judgment in which the agent receives some data and combines them with information retrieved from memory to evaluate a hypothesis. The central assumption of the model is that, in the first instance, information retrieval from memory is both limited and selective. Some, but not all, of the missing scenarios come to mind.
Moreover, what primes the selective retrieval of scenarios from memory is the hypothesis itself, with scenarios most predictive of that hypothesis—the representative scenarios—being retrieved first. In many situations, such intuitive judgment works well and does not lead to large biases in probability assessments. But in situations where the representativeness and likelihood of scenarios


diverge, intuitive judgment becomes faulty. We showed that this simple model accounts for a significant number of experimental results, most of which are related to the representativeness heuristic. In particular, the model can explain the conjunction and disjunction fallacies exhibited by experimental subjects. The model also sheds light on some puzzling evidence concerning demand for insurance.

To explain the evidence, we took a narrow view of how recall of various scenarios takes place. In reality, many other factors affect recall. Both the availability and anchoring heuristics described by Kahneman and Tversky (1974) bear on how scenarios come to mind, but through mechanisms other than those we elaborated.

At a more general level, our model relates to the distinction, emphasized by Kahneman (2003), between System 1 (quick and intuitive) and System 2 (reasoned and deliberate) thinking. Local thinking can be thought of as a formal model of System 1. However, from our perspective, intuition and reasoning are not so radically different. Rather, they differ in what is retrieved from memory to make an evaluation. In the case of intuition, the retrieval is not only quick but also partial and selective. In the case of reasoning of the sort studied by economists, retrieval is complete. Indeed, in economic models, we typically think of people receiving limited information from the outside world, but then combining it with everything they know to make evaluations and decisions. The point of our model is that, at least in making quick decisions, people do not bring everything they know to bear on their thinking. Only some information is automatically recalled from passive memory, and—crucially to understanding the world—the things that are recalled might not even be the most useful. Heuristics, then, are not limited decisions. They are decisions like all others, but based on limited and selected inputs from memory.
System 1 and System 2 are examples of the same mode of thought; they differ in what comes to mind.

APPENDIX: LOCAL THINKING WITH GENERAL HYPOTHESES AND DATA

Hypotheses and data may constrain some dimensions of the state space X without restricting them to particular values, as we assumed in (7). Generally,

(7′)    h ∩ d ≡ {x ∈ X | x_i ∈ H_i for all i ∈ I},


where I ⊆ {1, . . . , K} is the set of dimensions constrained by h ∩ d, and H_i ⊂ X_i are the sets they specify for each i ∈ I. Dimensions i ∉ I are left free. The class of hypotheses in (7) is a special case of that in (7′) in which the sets H_i are singletons.

To generalize the definition of the representation of a hypothesis, we assume that agents follow a three-stage procedure. First, each hypothesis h ∩ d is decomposed into all of its constituent “elementary hypotheses,” defined as those that fix one exact value for each dimension in I. Second, for each elementary hypothesis, agents consider all possible scenarios, according to Definition 1. Finally, agents order the set of elementary hypotheses, together with the respective feasible scenarios, according to their conditional probabilities.13 An agent with b = 1 would simply solve

(8′)    max_{x_I, s} Pr[x_I | s ∩ d],

where x_I ≡ {x ∈ X : x_i = x̂_i}, with x̂_i ∈ H_i for all i ∈ I. Thus, conditional on fixing x_I, scenario s is the exact equivalent of the scenario in Definition 1. A solution to problem (8′) always exists because the problem is finite.

This procedure generates a representation s_r^1 ∩ x_{I,r}^1 ∩ d for hypothesis h_r, which is the general counterpart of the representation s_r^1 ∩ h_r ∩ d used in the class of problems in (7). Accordingly, (8′) yields a ranking of all possible representations s_r^k ∩ x_{I,r}^k of h_r, which in turn ranks all elements in h_r ∩ d in terms of their order of recall. Formula (9) can now be directly applied to calculate the local thinker's probabilistic assessment. In the case of exhaustive hypotheses in the general class (7′), that assessment can be written as

(9′)    Pr^L(h_t | d) = [Σ_{k=1}^{b} Pr(s_t^k ∩ x_{I,t}^k | h_t ∩ d) Pr(h_t ∩ d)] / [Σ_{r=1}^{N} Σ_{k=1}^{b} Pr(s_r^k ∩ x_{I,r}^k | h_r ∩ d) Pr(h_r ∩ d)].

Expression (9′) is an immediate generalization of (9). Except for Proposition 1, which is proved only for problems in (7),

13. This assumption captures the idea that dimensions explicitly mentioned in the hypothesis are selected to maximize the probability of the latter. We could assume that filling gaps in hypotheses taking form (7′) is equivalent to selecting scenarios, in the sense that the agent maximizes (8) subject to scenarios s ∈ h ∩ d. Our main results would still hold in this case, but all scenarios s ∈ h_r ∩ d would be equally representative, as expression (8) would always be equal to 1. Assumption (8′) captures the intuitive idea that the agent also orders the representativeness of elements belonging to ranges explicitly mentioned in the hypothesis itself.


all the results in the paper generalize to hypotheses of type (7′). The only caveat is that in this case the element s_r^k ∩ h_r ∩ d should be read as the intersection of the set of specific values chosen by the agent for representing h_r with the data and the chosen scenario, that is, as s_r^k ∩ x_{I,r}^k ∩ d, which is the kth-ranked term according to objective (8′).

UNIVERSITAT POMPEU FABRA, CREI, AND CEPR

HARVARD UNIVERSITY
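The model's recall mechanism, in the simple singleton case of (7), can be sketched as a short program. Everything below is a hypothetical toy, not the paper's own example: a two-dimensional state space with invented probabilities, hypotheses that fix the first dimension, and scenarios that fill in the second. The b most representative scenarios (those maximizing Pr(h | s)) come to mind, and assessments are normalized as in formula (9).

```python
# Toy local thinker: hypotheses fix one dimension of a 2-D state space,
# scenarios fill in the other. Probabilities and labels are invented.

p = {  # joint prior over (hypothesis dimension, scenario dimension)
    ("accident", "car"): 0.20,
    ("accident", "plane"): 0.01,
    ("no_accident", "car"): 0.05,
    ("no_accident", "plane"): 0.74,
}
hypotheses = ("accident", "no_accident")
scenarios = ("car", "plane")

# fully Bayesian assessment: marginal probability of each hypothesis
rational = {h: sum(p[(h, s)] for s in scenarios) for h in hypotheses}

def local_assessment(b):
    """Pr^L(h) when only the b most representative scenarios are recalled."""
    weights = {}
    for h in hypotheses:
        # representativeness of scenario s for hypothesis h: Pr(h | s)
        def rep(s):
            return p[(h, s)] / sum(p[(h2, s)] for h2 in hypotheses)
        recalled = sorted(scenarios, key=rep, reverse=True)[:b]
        weights[h] = sum(p[(h, s)] for s in recalled)
    total = sum(weights.values())
    return {h: wt / total for h, wt in weights.items()}
```

With b = 1, the thinker evaluating “accident” recalls only the car scenario (highly representative of an accident), while for “no accident” only the plane scenario comes to mind; the unrecalled probability mass drops out of the normalization, distorting the assessment. With b = 2, recall is complete and the assessment coincides with the Bayesian one.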

REFERENCES

Barberis, Nicholas, Andrei Shleifer, and Robert Vishny, “A Model of Investor Sentiment,” Journal of Financial Economics, 49 (1998), 307–343.
Cascells, Ward, Arno Schoenberger, and Thomas Graboys, “Interpretations of Physicians of Clinical Laboratory Results,” New England Journal of Medicine, 299 (1978), 999–1001.
Cutler, David, and Richard Zeckhauser, “Extending the Theory to Meet the Practice of Insurance,” in Brookings–Wharton Papers on Financial Services, Robert Litan and Richard Herring, eds. (Washington, DC: Brookings Institution, 2004).
Fischhoff, Baruch, Paul Slovic, and Sarah Lichtenstein, “Fault Trees: Sensitivity of Assessed Failure Probabilities to Problem Representation,” Journal of Experimental Psychology: Human Perception and Performance, 4 (1978), 330–344.
Griffin, Dale, and Amos Tversky, “The Weighing of Evidence and the Determinants of Confidence,” Cognitive Psychology, 24 (1992), 411–435.
Jehiel, Philippe, “Analogy-Based Expectation Equilibrium,” Journal of Economic Theory, 123 (2005), 81–104.
Johnson, Eric, John Hershey, Jacqueline Meszaros, and Howard Kunreuther, “Framing, Probability Distortions, and Insurance Decisions,” Journal of Risk and Uncertainty, 7 (1993), 35–51.
Kahneman, Daniel, “Maps of Bounded Rationality: Psychology for Behavioral Economics,” American Economic Review, 93 (2003), 1449–1476.
Kahneman, Daniel, and Shane Frederick, “A Model of Heuristic Judgment,” in The Cambridge Handbook of Thinking and Reasoning, Keith J. Holyoake and Robert G. Morrison, eds. (Cambridge, UK: Cambridge University Press, 2005).
Kahneman, Daniel, and Amos Tversky, “Subjective Probability: A Judgment of Representativeness,” Cognitive Psychology, 3 (1972), 430–454.
——, “Judgment under Uncertainty: Heuristics and Biases,” Science, 185 (1974), 1124–1131.
——, “Evidential Impact of Base-Rates,” in Judgment under Uncertainty: Heuristics and Biases, Daniel Kahneman, Paul Slovic, and Amos Tversky, eds. (Cambridge, UK: Cambridge University Press, 1982).
——, “Extensional vs. Intuitive Reasoning: The Conjunction Fallacy in Probability Judgment,” Psychological Review, 91 (1983), 293–315.
——, “Extensional versus Intuitive Reasoning,” in Heuristics and Biases: The Psychology of Intuitive Judgment, Thomas Gilovich, Dale W. Griffin, and Daniel Kahneman, eds. (Cambridge, UK: Cambridge University Press, 2002).
Kunreuther, Howard, and Mark Pauly, “Insurance Decision-Making and Market Behavior,” Foundations and Trends in Microeconomics, 1 (2005), 63–127.
Mullainathan, Sendhil, “Thinking through Categories,” Mimeo, Massachusetts Institute of Technology, 2000.
——, “A Memory-Based Model of Bounded Rationality,” Quarterly Journal of Economics, 117 (2002), 735–774.
Mullainathan, Sendhil, Joshua Schwartzstein, and Andrei Shleifer, “Coarse Thinking and Persuasion,” Quarterly Journal of Economics, 123 (2008), 577–620.


Osborne, Martin, and Ariel Rubinstein, “Games with Procedurally Rational Players,” American Economic Review, 88 (1998), 834–847.
Popkin, Samuel, The Reasoning Voter (Chicago: University of Chicago Press, 1991).
Rabin, Matthew, “Inference by Believers in the Law of Small Numbers,” Quarterly Journal of Economics, 117 (2002), 775–816.
Rabin, Matthew, and Joel L. Schrag, “First Impressions Matter: A Model of Confirmatory Bias,” Quarterly Journal of Economics, 114 (1999), 37–82.
Schwartzstein, Joshua, “Selective Attention and Learning,” Unpublished Manuscript, Harvard University, 2009.
Stewart, Neil, Nick Chater, and Gordon Brown, “Decision by Sampling,” Cognitive Psychology, 53 (2006), 1–26.
Sydnor, Justin, “(Over)insuring Modest Risks” (available at http://wsomfaculty.case.edu/sydnor/deductibles.pdf, 2009).
Tversky, Amos, and Derek Koehler, “Support Theory: A Nonextensional Representation of Subjective Probability,” Psychological Review, 101 (1994), 547–567.
Wilson, Andrea, “Bounded Memory and Biases in Information Processing,” Unpublished Manuscript, Princeton University, 2002.

GAMING PERFORMANCE FEES BY PORTFOLIO MANAGERS∗

DEAN P. FOSTER AND H. PEYTON YOUNG

We show that it is very difficult to devise performance-based compensation contracts that reward portfolio managers who generate excess returns while screening out managers who cannot generate such returns. Theoretical bounds are derived on the amount of fee manipulation that is possible under various performance contracts. We show that recent proposals to reform compensation practices, such as postponing bonuses and instituting clawback provisions, will not eliminate opportunities to game the system unless accompanied by transparency in managers' positions and strategies. Indeed, there exists no compensation mechanism that separates skilled from unskilled managers solely on the basis of their returns histories.

I. BACKGROUND

Incentives for financial managers are coming under increased scrutiny because of their tendency to encourage excessive risk taking. In particular, the asymmetric treatment of gains and losses gives managers an incentive to increase leverage and take on other forms of risk without necessarily increasing expected returns for investors. Various changes in the incentive structure have been proposed to deal with this problem, including postponing bonus payments, clawing back bonus payments if later performance is poor, and requiring managers to hold an equity stake in the funds that they manage. These apply both to managers of financial institutions, such as banks, and to managers of private investment pools, such as hedge funds.1

The purpose of this paper is to show that, although these and related reforms may moderate the incentives to game the system, gaming cannot be eliminated. The problem is especially acute when there is no transparency, so that investors cannot see the trading strategies that are producing the returns for which

∗ This paper is a contribution to the Santa Fe Institute's research program on incentives and the functioning of economic institutions. We are indebted to Gabriel Kreindler, Pete Kyle, Andrew Lo, Andrew Patton, Tarun Ramadorai, Krishna Ramaswamy, Neil Shephard, Robert Stine, and two anonymous referees for helpful suggestions. An earlier version was entitled “The Hedge Fund Game: Incentives, Excess Returns, and Performance Mimics,” Working Paper 07-041, Wharton Financial Institutions Center, University of Pennsylvania, 2007.
1. For a general discussion of managerial incentives in the financial sector, see Bebchuk and Fried (2004) and Bebchuk and Spamann (2009). The literature on incentives and risk taking by portfolio managers will be discussed in greater detail in Section II.
© 2010 by the President and Fellows of Harvard College and the Massachusetts Institute of

Technology. The Quarterly Journal of Economics, November 2010


managers are being rewarded. In this setting, where managerial compensation is based solely on historical performance, we establish two main results.

First, if a performance-based compensation contract does not levy out-of-pocket penalties for underperformance, then managers with no superior investment skill can capture a sizable amount of the fees that are intended to reward superior managers by mimicking the latter's performance. The potential magnitude of fee capture has a concise analytical expression. Second, if a compensation contract imposes penalties that are sufficiently harsh to deter risk-neutral mimics, then it will also deter managers of arbitrarily high skill levels. In other words, there exist no performance-based compensation schemes that screen out risk-neutral mimics while rewarding managers who generate excess returns. This contrasts with statistical measures of performance, some of which can discriminate in the long run between “expert” and “nonexpert” managers.2

Our results are proved using a combination of game theory, probability theory, and elementary principles of mechanism design. One of the novel theoretical elements is the concept of performance mimicry. This is analogous to a common biological strategy known as “mimicry,” in which one species sends a signal, such as a simulated mating call, in order to lure potential mates, which are then devoured. An example is the firefly Photuris versicolor, whose predaceous females imitate the mating signals of females from other species in order to attract passing males, some of which respond and are promptly eaten (Lloyd 1974). Of course, the imitation may be imperfect and the targets are not fooled all of the time, but they are fooled often enough for the strategy to confer a benefit on the mimic.3

In this paper we apply a variant of this idea to modeling the competition for customers in financial markets. We show that

2. There is a substantial literature on statistical tests that discriminate between true experts and those who merely pretend to be experts. In finance, Goetzmann et al. (2007) propose a class of measures of investment performance that we discuss in greater detail in Section VI. Somewhat more distantly related is the literature on how to distinguish between experts who can predict the probability of future events and imposters who manipulate their predictions in order to look good (Lehrer 2001; Sandroni 2003; Sandroni, Smorodinsky, and Vohra 2003; Olszewski and Sandroni 2008). Another paper that is thematically related is Spiegler (2006), who shows how “quacks” can survive in a market because of the difficulty that customers have in distinguishing them from the real thing.
3. Biologists have documented a wide range of mimicking repertoires, including males mimicking females and harmless species mimicking harmful ones in order to deter predators (Alcock 2005).


portfolio managers with no private information or special investment skills can generate returns over an extended period of time that look just like the returns that would be generated by highly skilled managers; moreover, they can do so without any knowledge of how the skilled managers actually produce such returns.4 Of course, a mimic cannot reproduce a skilled manager's record forever; instead he or she reproduces it with a certain probability and pays for it by taking on a small probability of a large loss. In practice, however, this probability is sufficiently small that the mimic can, in expectation, get away with the imitation for many years without being discovered. Our framework allows us to derive precise analytical expressions for (i) the probability with which an unskilled manager can mimic a skilled one over any specified length of time and (ii) the minimum amount the mimic can expect to earn in fees as a function of the compensation structure.

The paper is organized as follows. In the next section we review the prior theoretical and empirical literature on performance manipulation. In Section III we introduce the model, which allows us to evaluate a very wide range of compensation contracts and different ways of manipulating them. Section IV shows how much fee capture is possible under any compensation arrangement that does not assess personal financial penalties on the manager. In Section V we explore the implications of this result through a series of concrete examples. Section VI discusses manipulation-proof performance measures and why they do not solve the problem of designing manipulation-proof compensation schemes. In Section VII we derive an impossibility theorem, which shows that there is essentially no compensation scheme that is able to reward skilled managers and screen out unskilled managers based solely on their “track records.” Section VIII shows how to extend these results to allow for the inflow and outflow of money based on prior performance. Section IX concludes.

4. It should be emphasized that mimicry is not the same as cloning or replication (Kat and Palaro 2005; Hasanhodzic and Lo 2007). These strategies seek to reproduce the statistical properties of a given fund or class of funds, whereas mimicry seeks to fool investors into thinking that returns are being generated by one type of distribution when in fact they are being generated by a different (and less desirable) distribution. Mimicry is also distinct from strategy stealing, a game-theoretic concept in which one player copies the entire strategy of another (Gale 1974). In our setting the performance mimic cannot steal the skilled manager's investment strategy, because if he or she knew the strategy then he or she too would be skilled.


II. RELATED LITERATURE

The fact that standard compensation contracts give managers an incentive to manipulate returns is not a new observation; indeed, there is a substantial prior literature on this issue. In particular, the two-part fee structure that is common in the hedge fund industry has two perverse features: the fees are convex in the level of performance, and gains and losses are treated asymmetrically. These features create incentives to take on increased risk, a point that has been discussed in both the empirical and the theoretical finance literature (Starks 1987; Carpenter 2000; Lo 2001; Hodder and Jackwerth 2007). The approach taken here builds on this work by considering a much more general class of compensation contracts and by deriving theoretical bounds on how much manipulation is possible.

Of the prior work on this topic, that of Lo (2001) is the closest to ours, because he focuses explicitly on the question of how much money a strategic actor can make by deliberately manipulating the returns distribution using options trading strategies. Lo examines a hypothetical situation in which a manager takes short positions in S&P 500 put options that mature in one to three months and shows that such an approach would have generated very sizable excess returns relative to the market in the 1990s. (Of course, this strategy could have lost a large amount of money if the market had gone down sufficiently.) The present paper builds on Lo's approach by examining how far this type of manipulation can be taken and how much fee capture is theoretically possible. We do this by explicitly defining the strategy space that is available to potential entrants, and how they can use it to mimic high-performance managers.

A related strand of the literature is concerned with the potential manipulation of standard performance measures, such as the Sharpe ratio, the appraisal ratio, and Jensen's alpha. It is well known that these and other measures can be “gamed” by manipulating the returns distribution without generating excess returns in expectation (Lhabitant 2000; Ferson and Siegel 2001). It is also known, however, that one can design performance measures that are immune to many forms of manipulation. These take the form of constant relative risk aversion utility functions averaged over the returns history (Goetzmann et al. 2007). We shall discuss these connections further in Section VI. Our main conclusion, however, is that a similar possibility theorem does not hold for compensation mechanisms. At first this may seem surprising:


for example, why would it not suffice to pay fund managers according to a linear increasing function of one of the manipulation-proof measures mentioned above? The difficulty is that a compensation mechanism not only must reward managers according to their actual ability, but also must screen out managers who have no ability. In other words, the mechanism must create incentives for skilled managers to participate and for unskilled managers not to participate. This turns out to be considerably more demanding, because managers with different skill levels have different opportunity costs and therefore different incentive-compatibility constraints.

III. THE MODEL

Performance-based compensation contracts rely on two types of inputs: the returns generated by the fund manager and the returns generated by a benchmark portfolio that serves as a comparator. Consider first a benchmark portfolio that generates a sequence of returns in each of T periods. Throughout we shall assume that returns are reported at discrete intervals, say at the end of each month or each quarter (though the value of the asset may evolve in continuous time). Let r_ft be the risk-free rate in period t and let X_t be the total return of the benchmark portfolio in period t, where X_t is a nonnegative random variable whose distribution may depend on the prior realizations x_1, x_2, . . . , x_{t−1}. A fund that has initial value s0 > 0 and is passively invested in the benchmark will therefore have value s0 Π_{1≤t≤T} X_t by the end of the Tth period. If the benchmark asset is risk-free, then X_t = 1 + r_ft. Alternatively, X_t may represent the return on a broad market index such as the S&P 500, in which case it is stochastic, though we do not assume stationarity.

Let the random variables Y_t ≥ 0 denote the period-by-period returns generated by a particular managed portfolio, 1 ≤ t ≤ T. A compensation contract is typically based on a comparison between the returns Y_t and the returns X_t generated by a suitably chosen benchmark. It will be mathematically convenient to express the returns of the managed portfolio as a multiple of the returns generated by the benchmark asset. Specifically, let us assume that X_t > 0 in each period t, and consider the random variable M_t ≥ 0 such that

(1)    Y_t = M_t X_t.


A compensation contract over T periods is a vector-valued function φ: R_+^{2T} → R^{T+1} that specifies the payment to the manager in each period t = 0, 1, 2, . . . , T as a function of the amount of money invested and the realized sequences x = (x_1, x_2, . . . , x_T) and m = (m_1, m_2, . . . , m_T). We shall assume that the payment in period t depends only on the realizations x_1, . . . , x_t and m_1, . . . , m_t. We shall also assume that the payment is made at the end of the period and cannot exceed the funds available at that point in time. (Payments due at the start of a period can always be taken out at the end of the preceding period, so this involves no real loss of generality. The payment in period zero, if any, corresponds to an upfront management fee.)

This formulation is very general, and includes standard incentive schemes as well as commonly proposed reforms, such as “postponement” and “clawback” arrangements, in which bonuses earned in prior periods can be offset by penalties in later periods. These and a host of other variations are embedded in the assumption that the payment in period t, φ_t(m, x), can depend on the entire sequence of returns through period t.

Let us consider a concrete example. Suppose that the contract calls for a 2% management fee that is taken out at the end of each year plus a 20% performance bonus on the return generated during the year in excess of the risk-free rate. Let the initial size of the fund be s0. Given a pair of realizations (m, x), let s_t(m, x) be the size of the fund at the start of year t after any upfront fees have been deducted. Then the management fee at the end of the first year will be 0.02 m_1 x_1 s_1 and the bonus will be 0.2 (m_1 x_1 − 1 − r_f1)_+ s_1. Hence

(2)    φ_1 = [0.02 m_1 x_1 + 0.2 (m_1 x_1 − 1 − r_f1)_+] s_1.

Letting s_2 = m_1 x_1 s_1 − φ_1 and continuing recursively, we find that in each year t,

(3)    φ_t = [0.02 m_t x_t + 0.2 (m_t x_t − 1 − r_ft)_+] s_t.
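The recursion in (2) and (3) is straightforward to implement. The sketch below is illustrative only; the three-year return sequences are made up.

```python
# Yearly "two and twenty" fees, following equations (2)-(3): a 2% management
# fee on end-of-year assets plus 20% of the return in excess of the risk-free
# rate, with the fund updated recursively as s_{t+1} = m_t x_t s_t - phi_t.

def two_and_twenty(m, x, rf, s0=1.0):
    """Return the list of yearly fees phi_t and the final fund value."""
    s, fees = s0, []
    for mt, xt, rft in zip(m, x, rf):
        phi = (0.02 * mt * xt + 0.2 * max(mt * xt - 1 - rft, 0.0)) * s
        fees.append(phi)
        s = mt * xt * s - phi           # fund carried into the next year
    return fees, s

# Illustrative run: risk-free benchmark growing 4% a year (x_t = 1.04),
# a manager delivering a steady 10% excess return (m_t = 1.10), three years.
fees, final = two_and_twenty([1.10] * 3, [1.04] * 3, [0.04] * 3)
```

Because both fee components are proportional to the fund size, the fund grows by the constant factor m_t x_t minus the per-unit fee each year.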

Alternatively, suppose that the contract specifies a 2% management fee at the end of each year plus a one-time 20% performance bonus that is paid only at the end of T years. In this case the size of the fund at the start of year t is s_t(m, x) = s0 (0.98)^{t−1} Π_{1≤s≤t−1} m_s x_s. The management fee in the tth year


equals

(4)    φ_t(m, x) = 0.02 s_t(m, x) m_t x_t = [0.02 (0.98)^{t−1} Π_{1≤s≤t} m_s x_s] s0.

The final performance bonus equals 20% of the cumulative excess return relative to the risk-free rate, which comes to 0.2 [Π_{1≤t≤T} m_t x_t − Π_{1≤t≤T} (1 + r_ft)]_+ s0.

IV. PERFORMANCE MIMICRY

We say that a manager has superior skill if, in expectation, he or she delivers excess returns relative to a benchmark portfolio (such as a broad-based market index), through private information, superior predictive powers, or access to payoffs outside the benchmark payoff space. A manager has no skill if he or she cannot deliver excess returns relative to the benchmark portfolio. Investors should not be willing to pay managers with no skill, because the investors can obtain the same expected returns by investing passively in the benchmark. We claim, however, that under any performance-based compensation contract, either the unskilled managers can capture some of the fees intended for the skilled managers, or else the contract is sufficiently unattractive that both the skilled and the unskilled managers will not wish to participate.

We begin by examining the case where the contract calls only for nonnegative payments, that is, φ_t(m, x) ≥ 0 for all t, m, x. (In Section VII we shall consider the situation where φ_t(m, x) < 0 for some realizations m and x.) Note that nonnegative payments are perfectly consistent with clawback provisions, which reduce prior bonuses but do not normally lead to net assessments against the manager's personal assets. Given realized sequences m and x, define the manager's cut in period t to be the fraction of the available funds at the end of the period that the manager takes in fees, namely,

(5)    c_t(m, x) = φ_t(m, x) / [m_t x_t s_t(m, x)].

By assumption the fees are nonnegative and cannot exceed the funds available; hence 0 ≤ ct (m, x) ≤ 1 for all m, x. (If mt xt st (m, x) = x) = 1 and assume that the fund closes down.) The 0 we let ct (m,

1442

QUARTERLY JOURNAL OF ECONOMICS

2T cut function is the vector-valued function c : R+ → [0, 1]T +1 such x), c1 (m, x), . . . , cT (m, x)) for each pair (m, x). that c(m, x) = (c0 (m, In our earlier example with a 2% end-of-period management fee and a 20% annual bonus, the cut function is 1 + rft (6) c0 (m, x) = 0 and ct (m, x) = 0.02 + 0.2 1 − mt xt + for 1 ≤ t ≤ T .
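To make (6) concrete, here is a minimal sketch in Python (our own illustration; the function name is ours, not the paper's) evaluating the cut for the two-and-twenty contract:

```python
def cut(m_t: float, x_t: float, r_f: float) -> float:
    """Period-t cut under a 2% management fee plus a 20% annual
    bonus on returns in excess of the risk-free rate, per eq. (6):
    c_t = 0.02 + 0.2 * [1 - (1 + r_f)/(m_t * x_t)]^+ ."""
    excess_fraction = max(0.0, 1.0 - (1.0 + r_f) / (m_t * x_t))
    return 0.02 + 0.2 * excess_fraction

# A manager delivering a 10% excess return over a 4% risk-free benchmark:
# m_t = 1.10, x_t = 1.04, so the gross return is 1.144.
c = cut(1.10, 1.04, 0.04)
print(round(c, 4))  # ≈ 0.0382, i.e., the 3.82% cut computed in the text below
```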

PROPOSITION 1. Let φ be a nonnegative compensation contract over T periods that is benchmarked against a portfolio generating returns $\vec X = (X_1, X_2, \ldots, X_T) > 0$, and let c be the associated cut function. Given any target sequence of excess returns $\vec m \ge \vec 1$, there exists a mimicking strategy $\vec M^0(\vec m)$ that delivers zero expected excess returns in every period ($E[M_t^0] = 1$), such that for every realization $\vec X = \vec x$ of the benchmark asset, the mimic's expected fees in period t (conditional on $\vec x$) are at least

(7)   $c_t(\vec m, \vec x)\big[(1 - c_0(\vec m, \vec x)) \cdots (1 - c_{t-1}(\vec m, \vec x))\big]\,[x_1 \cdots x_t]\, s_0.$

x)) · · · (1 − Note that, in this expression, the factor [(1 − c0 (m, ct−1 (m, x))] is the fraction left over after the manager has taken out his or her cut in previous periods. Hence the proposition says that in expectation, the mimic’s cut in period t is at least as large as the cut of a skilled manager who generates the excess returns sequence m with certainty. The difference is that the mimic’s cut is assessed on a fund that is compounding at the rate of the benchcut is based mark asset ( 1≤s≤t xs ), whereas the skilled manager’s on a portfolio compounding at the higher rate 1≤s≤t ms xs . It follows that the skilled manager will earn more than the mimic in expectation. The key point, however, is not that skilled managers earn more than mimics, but that mimics can earn a great deal compared to the alternative, which is not to enter the market at all. To understand the implications of this result, let us work through a simple example. Suppose that the benchmark asset consists of risk-free government bonds growing at a fixed rate of 4% per year. Consider a skilled manager who can deliver 10% over and above this every year, and is paid according to the standard two and twenty contract: a bonus equal to 20% of the excess return plus a management fee of 2%.5 In this case the excess annual 5. Of course it is unlikely that anyone would generate the same return year after year, but this assumption keeps the computations simple.


return is $(1.10)(1.04) - 1.04 = 0.104$, so the performance bonus is $0.20(0.104) = 0.0208$ per dollar in the fund at the start of the period. This comes to about $0.0208/[(1.10)(1.04)] = 0.0182$ per dollar at the end of the period. By assumption the management fee is 0.02 per dollar at the end of the period. Therefore the cut, which is the total fee per dollar at the end of the period, is 0.0382 or 3.82%.

Proposition 1 says that a manager with no skill has a mimicking strategy that in expectation earns at least 3.82% per year of a fund that is compounding at 4% per year before fees and 0.027% after fees ($1.04(1 - 0.0382) = 1.00027$). As t becomes large, the probability that the fund goes bankrupt before year t tends to one. However, the mimic's expected earnings in any given year t are actually increasing with t, because in expectation the fund is compounding at a higher rate (4%) than the manager is taking out in fees (3.82%).

The key to proving Proposition 1 is the following result.

LEMMA. Consider any target sequence of excess returns $\vec m = (m_1, \ldots, m_T) \ge (1, 1, \ldots, 1)$. A mimic has a strategy $\vec M^0(\vec m)$ that, for every realized sequence of returns $\vec x$ of the benchmark portfolio, generates the returns sequence $(m_1 x_1, \ldots, m_T x_T)$ with probability at least $1/\prod_{1\le t\le T} m_t$.

We shall sketch the idea of the proof here; in the Appendix we show how to execute the strategy using puts and calls on standard market indices with Black–Scholes pricing.

Proof Sketch. Choose a target excess-returns sequence $m_1, m_2, \ldots, m_T \ge 1$. At the start of period 1 the mimic has capital equal to $s_0$. Assume that he or she invests it entirely in the benchmark asset. He or she then uses the capital as collateral to take a position in the options market. The options position amounts to placing a fair bet that bankrupts the fund with low probability ($1 - 1/m_1$) and inflates it by the factor $m_1$ with high probability ($1/m_1$) by the end of the period.
If the high-probability outcome occurs, the mimic has end-of-period capital equal to $m_1 x_1 s_0$, whereas if the low-probability outcome occurs, the fund goes bankrupt. The mimic repeats this construction in each successive period t using the corresponding value $m_t$ as the target. By the end of the Tth period the strategy will have generated the returns sequence $(m_1 x_1, \ldots, m_T x_T)$ with probability $1/\prod_{1\le t\le T} m_t$, and this holds for every realization $\vec x$ of the benchmark portfolio. This concludes the outline of the proof of the Lemma.
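The proof sketch can also be checked by simulation. The following sketch (our illustration, not the authors' code) models the Lemma's fair bet as a simple lottery: each period the fund is multiplied by $m_t x_t$ with probability $1/m_t$ and goes bankrupt otherwise.

```python
import random

def run_mimic(m_seq, x_seq, s0=1.0, rng=random.random):
    """One sample path of the mimicking strategy: each period places a
    fair bet that inflates the fund by m_t (prob 1/m_t) or bankrupts it
    (prob 1 - 1/m_t), on top of the benchmark return x_t."""
    fund = s0
    for m, x in zip(m_seq, x_seq):
        if rng() < 1.0 / m:
            fund *= m * x      # bet pays off: excess return m_t achieved
        else:
            return 0.0         # bet lost: fund goes bankrupt
    return fund

random.seed(0)
T = 5
m_seq = [1.10] * T             # target 10% excess return each year
x_seq = [1.04] * T             # benchmark compounds at 4%
paths = [run_mimic(m_seq, x_seq) for _ in range(200_000)]

survival = sum(p > 0 for p in paths) / len(paths)
mean_growth = sum(paths) / len(paths)
print(round(survival, 3))      # ≈ 1/1.1**5 ≈ 0.621: survival probability
print(round(mean_growth, 2))   # ≈ 1.04**5 ≈ 1.22: no expected value added
```

The empirical mean matches the benchmark's own compounding, confirming that the strategy delivers zero expected excess returns.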


Proposition 1 is now proved as follows. Choose a particular sequence of excess returns $\vec m \ge \vec 1$. Under the mimicking strategy defined in the Lemma, for every realization $\vec x$ and every period t, the mimic generates excess returns $m_1, m_2, \ldots, m_t \ge 1$ with probability at least $1/(m_1 m_2 \cdots m_t)$. With this same probability he or she earns

$c_t(\vec m, \vec x)\big[(1 - c_0(\vec m, \vec x)) \cdots (1 - c_{t-1}(\vec m, \vec x))\big]\,[x_1 \cdots x_t]\,[m_1 \cdots m_t]\, s_0.$

Thus, because his or her earnings are always nonnegative, expected earnings in period t must be at least $c_t(\vec m, \vec x)\big[(1 - c_0(\vec m, \vec x)) \cdots (1 - c_{t-1}(\vec m, \vec x))\big]\,[x_1 \cdots x_t]\, s_0$. This concludes the proof of Proposition 1.6

V. DISCUSSION

Mimicking strategies are straightforward to implement using standard derivatives, and they generate returns that look good for extended periods while providing no value-added to investors. (Recall that the investors can earn the same expected returns, with possibly much lower variance, by investing passively in the benchmark asset.) Similar strategies can be used to mimic distributions of returns as well as particular sequences of returns.7 In fact, however, there is no need to mimic a distribution of returns. Managers are paid on the basis of realized returns, not distributions. Hence all a mimic needs to do is target some particular sequence of excess returns that might have arisen from a distribution (and that generates high fees). Proposition 1 shows that he or she will earn at least as high a cut in expectation as a skilled manager would have earned had he or she generated the same sequence.

Of course, the fund's investors would not necessarily approve if they could see what the mimic was doing. The point of the analysis, however, is to show what can happen when investors

6. This construction is somewhat reminiscent of the doubling-up strategy, in which one keeps doubling one's stake until a win occurs (Harrison and Kreps 1979).
Our setup differs in several crucial respects, however: the manager enters into only a finite number of gambles, and he or she cannot borrow to finance them. More generally, the mimicking strategy is not a method for beating the odds in the options markets; it is a method for manipulating the distribution of returns in order to earn large fees from investors.

7. Indeed, let $M_t$ be a nonnegative random variable with expectation $E[M_t] = \bar m_t > 1$. Suppose that a mimic wishes to produce the distribution $M_t X_t$ in period t, where $X_t$ is the return from the benchmark. The random variable $M_t' = (1/\bar m_t) M_t$ represents a fair bet. The mimic can therefore implement $M_t X_t$ with probability at least $1/\bar m_t$ by first placing the fair bet $M_t'$ and then inflating the fund by the factor $\bar m_t$ using the strategy described in the Lemma.


cannot observe the managers' underlying strategies—a situation that is quite common in the hedge fund industry. Performance contracts that are based purely on reported returns, and that place no restrictions on managers' strategies, are highly vulnerable to manipulation.

Expression (7) in Proposition 1 shows how much fee capture is possible, and why it is very difficult to eliminate this problem by restructuring the compensation contract. One common proposal, for example, is to delay paying a performance bonus for a substantial period of time. To be concrete, let us suppose that a manager can only be paid a performance bonus after five years, at which point he or she will earn 20% of the total return from the fund in excess of the risk-free rate compounded over five years. For example, with a risk-free rate of 4%, he or she will earn a performance bonus equal to $0.20[s_5 - (1.04)^5 s_0]^+$, where $s_5$ is the value of the fund at the end of year 5.

Consider a hypothetical manager who earns multiplicative excess returns equal to 1.10 each year. Under the above contract, his or her bonus in year 5 would be $0.20[(1.10)^5(1.04)^5 s_0 - (1.04)^5 s_0] \approx 0.149\, s_0$, that is, about 15% of the amount initially invested. Let us compare this to the expected earnings of someone who generates apparent 10% excess returns using the mimicking strategy. The mimic's strategy runs for five years with probability $(1.10)^{-5} = 0.621$; hence his or her expected bonus is about $(0.621)(0.149)\, s_0 = 0.0925\, s_0$. Thus, with a five-year postponement, the mimic earns an expected bonus equal to more than 9% of the amount initially invested.

Now consider a longer postponement, say ten years. The probability that the mimic's strategy will run this long is $(1.10)^{-10} \approx 0.386$. However, the bonus will be calculated on a larger base. Namely, if the mimic's fund does keep running for ten years, the bonus will be $0.20[(1.10)^{10}(1.04)^{10} - (1.04)^{10}]\, s_0 \approx 0.472\, s_0$.
Therefore the expected bonus will be approximately $(0.386)(0.472\, s_0) = 0.182\, s_0$, or about 18% of the amount initially invested. Indeed, it is straightforward to show that under this particular bonus scheme, the expected payment to the mimic increases the longer the postponement is.8

8. Per dollar of initial investment, the bonus in the final period T is $0.20[(1.10)^T(1.04)^T - (1.04)^T]$ and the probability of earning it is $(1.10)^{-T}$. Hence the expected bonus is $0.20[(1.10)^T(1.04)^T - (1.04)^T]/(1.10)^T = 0.20[1 - (1.1)^{-T}](1.04)^T$, which is increasing in T.
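The closed form in footnote 8 is easy to tabulate. The sketch below (ours; the function and parameter names are hypothetical) reproduces the 9% and 18% figures and shows the expected bonus rising with the postponement horizon T.

```python
def expected_bonus(T, excess=0.10, rf=0.04, bonus_rate=0.20):
    """Mimic's expected postponed bonus per dollar invested (footnote 8):
    the bonus earned if the fund survives T years, weighted by the
    survival probability (1 + excess)^-T."""
    g = 1.0 + excess
    survive = g ** -T
    bonus_if_alive = bonus_rate * (g ** T * (1 + rf) ** T - (1 + rf) ** T)
    return survive * bonus_if_alive

for T in (5, 10, 20):
    print(T, round(expected_bonus(T), 3))
# T=5  -> ≈ 0.092  (the 9% figure in the text)
# T=10 -> ≈ 0.182  (the 18% figure in the text)
# T=20 -> larger still: postponement alone does not deter the mimic
```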


It is, of course, true that the longer the postponement, the greater the risk that the fund will go bankrupt before the mimic can collect his bonus. Thus postponement may act as a deterrent for mimics who are sufficiently risk-averse. However, this does not offer much comfort, for several reasons. First, as we have just seen, the postponement must be quite long to have much of an impact. Second, not all mimics need be risk-neutral; it suffices that some of them are. Third, there is a simple way for a risk-averse mimic to diversify away his or her risk: run several funds in parallel (under different names) using independent mimicking strategies. Suppose, for example, that a mimic runs n independent funds of the type described above, each yielding 10% annual excess returns with probability $1/1.1 \approx 0.909$. The probability that at least one of the funds survives for T years or more is $1 - (1 - 1/1.1^T)^n$. This can be made as close to one as we like by choosing n to be sufficiently large.9

VI. PERFORMANCE MEASURES VERSUS PERFORMANCE PAYMENTS

The preceding analysis leaves open the possibility that performance contracts with negative payments might solve the problem. Before turning to this case, however, it will be useful to consider the relationship between statistical measures of performance and performance-based compensation contracts. Some standard measures of performance, such as Jensen's alpha or the Sharpe ratio, are easily gamed by manipulating the returns distribution. Other measures avoid some forms of manipulation, but (as we shall see) they do not solve the problem of how to pay for performance.

Consider, for example, the following class of measures proposed by Goetzmann et al. (2007). Let $u(x) = (1 - \rho)^{-1} x^{1-\rho}$ be a constant relative risk aversion utility function with $\rho > 1$. If a fund delivers the sequence of returns $M_t(1 + r_{ft})$, $1 \le t \le T$, one can define the performance measure

(8)   $G(\vec m) = (1 - \rho)^{-1} \ln\Big[(1/T) \sum_{1\le t\le T} m_t^{1-\rho}\Big], \quad \rho > 1.$

9. A related point is that, in any large population of funds run by mimics, the probability is high that at least one of them will look extremely good, perhaps better than many funds run by skilled managers (though not necessarily better than the best of the funds run by skilled managers). Correcting for multiplicity poses quite a challenge in testing for excess returns in financial markets; for a further discussion of this issue, see Foster, Stine, and Young (2008).


A variant of this approach that is used by the rating firm Morningstar (2006) is

(9)   $G^*(\vec m) = \Big[(1/T) \sum_{1\le t\le T} 1/m_t^2\Big]^{-1/2} - 1.$
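Both measures are simple functions of the realized excess-return sequence. A sketch (our code, with ρ left as a parameter and function names ours):

```python
import math

def goetzmann_G(m_seq, rho=2.0):
    """Goetzmann et al. (2007) measure, eq. (8):
    G = (1 - rho)^{-1} * ln[(1/T) * sum m_t^{1-rho}], rho > 1."""
    T = len(m_seq)
    return math.log(sum(m ** (1.0 - rho) for m in m_seq) / T) / (1.0 - rho)

def morningstar_G(m_seq):
    """Morningstar (2006) variant, eq. (9):
    G* = [(1/T) * sum 1/m_t^2]^{-1/2} - 1."""
    T = len(m_seq)
    return (sum(1.0 / m ** 2 for m in m_seq) / T) ** -0.5 - 1.0
```

For a constant sequence $m_t = m$ these reduce to $\ln m$ and $m - 1$, respectively, so both assign score zero to a fund that merely matches the benchmark and a positive score to any steady excess return.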

These and related measures rank managers according to their ability to generate excess returns in expectation. But translating these (and other) statistical measures into monetary payments for performance leads to trouble. First, payments must be made on realized returns; one cannot wait forever to see whether the returns are positive in expectation. Second, if the payments are always nonnegative, then the mimic can capture some of them, as Proposition 1 shows. On the other hand, if the payments are allowed to be negative, then they are constrained by the managers' ability to pay them.10 In the next section we will show that this leads to an impossibility theorem: if the penalties are sufficient to screen out the mimics, then they also screen out skilled managers of arbitrarily high ability.

VII. PENALTIES

Consider a general compensation mechanism φ that sometimes imposes penalties; that is, $\phi_t(\vec m, \vec x) < 0$ for some values of $\vec m$, $\vec x$, and t. To simplify the exposition, we assume throughout this section that the benchmark asset is risk-free; that is, $x_t = 1 + r_{ft}$ for all t. Suppose that a fund starts with an initial amount $s_0$, which we can assume without loss of generality is $s_0 = 1$.

To illustrate the issues that arise when penalties are imposed, let us begin by considering the one-period case. Let $(1 + r_{f1}) m \ge 0$ be the fund's total return in period 1, and let $\phi(m)$ be the manager's fee as a function of m. The worst-case scenario (for the investors) is that $m = 0$. Assume that in this case the manager suffers a penalty $\phi(0) < 0$. There are two cases to consider: (i) the penalty arises because the manager holds an equity stake of size $|\phi(0)|$ in the fund, which he or she loses when the fund goes bankrupt; or (ii) the penalty is held in escrow in a safe asset earning the risk-free rate, and is paid out to the investors if the fund goes bankrupt.

The first case—the equity stake—would be an effective deterrent provided the mimic was sufficiently risk-averse and was

10. Note that if payments are linear and increasing in the performance measure (8), then arbitrarily large penalties will be imposed when $m_t$ is close to zero.


prevented from diversifying his or her risk across different funds. But an equity stake will not deter a risk-neutral mimic, because the expected return from the mimic's strategy is precisely the risk-free rate, so his or her stake actually earns a positive amount in expectation, namely $(1 + r_{f1})|\phi(0)|$, and in addition he or she earns positive fees from managing the portion of the fund that he or she does not own.

Now consider the second case, in which future penalties are held in an escrow account earning the risk-free rate of return. For our purposes it suffices to consider the penalty when the fund goes bankrupt. To cover this event, the amount placed in escrow must be at least $b = -\phi(0)/(1 + r_{f1}) > 0$. Fix some $m^* \ge 1$ and consider a risk-neutral mimic who generates the return $m^*(1 + r_{f1})$ with probability $1/m^*$ and goes bankrupt with probability $1 - 1/m^*$. To deter such a mimic, the fees earned during the period must be nonpositive in expectation; that is,

(10)   $\phi(m^*)/m^* + \phi(0)(1 - 1/m^*) \le 0.$

Because a mimic can target any such $m^*$, (10) must hold for all $m^* \ge 1$.

Now consider a skilled manager who can generate the return $m^*$ with certainty. This manager must also put the amount b in escrow, because ex ante all managers are treated alike and the investors cannot distinguish between them. However, this involves an opportunity cost for the skilled manager, because by investing b in his or her own private fund he or she could have generated the return $m^*(1 + r_{f1})b$. The resulting opportunity cost for the skilled manager is $m^*(1 + r_{f1})b - (1 + r_{f1})b = -(m^* - 1)\phi(0)$. Assuming that utility is linear in money (i.e., the manager is risk-neutral), he or she will not participate if the opportunity cost exceeds the fee, that is, if

(10′)   $\phi(m^*) + (m^* - 1)\phi(0) \le 0.$
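The implication from (10) to (10′) can be verified mechanically for any candidate fee schedule, since the left side of (10′) is just $m^*$ times the left side of (10). The sketch below (our illustration; the piecewise-linear contract φ is hypothetical) checks both deterrence conditions on a grid of targets.

```python
def mimic_npv(phi, m_star):
    """Left side of (10): expected fee of a risk-neutral mimic
    targeting the excess return m_star."""
    return phi(m_star) / m_star + phi(0.0) * (1.0 - 1.0 / m_star)

def skilled_npv(phi, m_star):
    """Left side of (10'): fee net of the skilled manager's
    opportunity cost of escrowing the penalty."""
    return phi(m_star) + (m_star - 1.0) * phi(0.0)

# Hypothetical contract: 20% of the excess return as bonus,
# with a penalty of -0.5 per dollar if the fund is wiped out.
phi = lambda m: 0.2 * (m - 1.0) if m > 0 else -0.5

grid = [1.0 + 0.1 * k for k in range(1, 50)]
if all(mimic_npv(phi, m) <= 0 for m in grid):           # mimics deterred...
    assert all(skilled_npv(phi, m) <= 0 for m in grid)  # ...so skilled deterred too
```

The final assertion holds for every contract, not just this one, because $\text{skilled\_npv} = m^* \cdot \text{mimic\_npv}$ identically.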

Dividing (10′) by $m^*$, we see that it follows immediately from (10), which holds for all $m^* \ge 1$. We have therefore shown that, if a one-period contract deters all risk-neutral mimics, it also deters any risk-neutral manager who generates excess returns. The following generalizes this result to the case of multiple periods and randomly generated return sequences.

PROPOSITION 2. There is no compensation mechanism that separates skilled from unskilled managers solely on the basis of


their returns histories. In particular, any compensation mechanism that deters unskilled risk-neutral mimics also deters all skilled risk-neutral managers who consistently generate returns in excess of the risk-free rate.

Proof. Let $x_t = 1 + r_{ft}$ be the risk-free rate of return in period t. To simplify the notation we shall drop the $x_t$s and let $\phi_t(\vec m)$ denote the payment (possibly negative) in period t when the manager delivers the excess return sequence $\vec m$. The previous argument shows why holding an equity stake in the fund itself does not act as a deterrent for a risk-neutral mimic. We shall therefore restrict ourselves to the situation where future penalties must be held in escrow.

Consider an arbitrary excess returns sequence $\vec m \ge \vec 1$. Let the mimic's strategy $\vec M^0(\vec m)$ be constructed so that it stays in business through period t with probability exactly $1/(m_1 \cdots m_t)$. Consider some period $t \le T$. The probability that the fund survives to the start of period t without going bankrupt is $1/(m_1 \cdots m_{t-1})$. At the end of period t, the mimic earns $\phi_t(\vec m)$ with probability $1/m_t$ and $\phi_t(m_1, \ldots, m_{t-1}, 0, \ldots, 0)$ with probability $(m_t - 1)/m_t$. Hence the net present value of the period-t payments is

(11)   $\dfrac{\phi_t(\vec m)}{(m_1 \cdots m_t)(1 + r_{f1}) \cdots (1 + r_{ft})} + \dfrac{(m_t - 1)\,\phi_t(m_1, \ldots, m_{t-1}, 0, \ldots, 0)}{(m_1 \cdots m_t)(1 + r_{f1}) \cdots (1 + r_{ft})}.$

To deter a risk-neutral mimic, the net present value $V^0(\vec m)$ of all payments must be nonpositive:

(12)   $V^0(\vec m) = \sum_{1\le t\le T} \Big[\dfrac{\phi_t(\vec m)}{(m_1 \cdots m_t)(1 + r_{f1}) \cdots (1 + r_{ft})} + \dfrac{(m_t - 1)\,\phi_t(m_1, \ldots, m_{t-1}, 0, \ldots, 0)}{(m_1 \cdots m_t)(1 + r_{f1}) \cdots (1 + r_{ft})}\Big] \le 0.$

(Although some of these payments may have to be held in escrow, this does not affect their net present value to the mimic, because they earn the risk-free rate until they are paid out.)

Now consider a skilled manager who can deliver the sequence $\vec m \ge \vec 1$ with certainty. (We shall consider distributions over such sequences in a moment.) Let $B(\vec m)$ be the set of periods t in which a penalty must be paid if the fund goes bankrupt during that period


and not before:

(13)   $B(\vec m) = \{t : \phi_t(m_1, \ldots, m_{t-1}, 0, \ldots, 0) < 0\}.$

For each $t \in B(\vec m)$, let

(14)   $b_t(\vec m) = -\phi_t(m_1, \ldots, m_{t-1}, 0, \ldots, 0)/(1 + r_{ft}) > 0.$

This is the amount that must be escrowed during the tth period to ensure that the investors can be paid if the fund goes bankrupt by the end of the period. The skilled manager evaluates the present value of all future fees, penalties, and escrow payments using his or her personal discount factor, which for period-t payments is $\delta_t = \delta_t(\vec m) = 1\big/\prod_{1\le s\le t} m_s(1 + r_{fs})$.

Consider any period $t \in B(\vec m)$. To earn the fee $\phi_t(\vec m)$ at the end of period t, the manager must put $b_t(\vec m)$ in escrow at the start of the period (if not before).11 Conditional on delivering the sequence $\vec m$, he or she will get this back with interest at the end of period t, that is, will get back the amount $(1 + r_{ft})\, b_t(\vec m)$. For the skilled manager, the present value of this period-t scenario is

(15)   $\delta_t \phi_t(\vec m) + \delta_t(1 + r_{ft})\, b_t(\vec m) - \delta_{t-1} b_t(\vec m)$
$\qquad = \delta_t \phi_t(\vec m) + \delta_t\big[(m_t - 1)\,\phi_t(m_1, \ldots, m_{t-1}, 0, \ldots, 0)\big]$
$\qquad = \dfrac{\phi_t(\vec m)}{(m_1 \cdots m_t)(1 + r_{f1}) \cdots (1 + r_{ft})} + \dfrac{(m_t - 1)\,\phi_t(m_1, \ldots, m_{t-1}, 0, \ldots, 0)}{(m_1 \cdots m_t)(1 + r_{f1}) \cdots (1 + r_{ft})}.$

Now consider a period $t \notin B(\vec m)$. This is a period in which the manager earns a nonnegative fee even if the fund goes bankrupt; hence nothing must be held in escrow. The net present value of the fees in any such period is $\phi_t(\vec m)\big/\big[(m_1 \cdots m_t)(1 + r_{f1}) \cdots (1 + r_{ft})\big]$. Thus, summed over all periods, the net

11. If penalties must be escrowed more than one period in advance, the opportunity cost to the skilled manager will be even greater and the contract even more unattractive; hence our conclusions still hold.


present value of the fees for the skilled manager comes to

(16)   $V(\vec m) = \sum_{t \in B(\vec m)} \Big[\dfrac{\phi_t(\vec m)}{(m_1 \cdots m_t)(1 + r_{f1}) \cdots (1 + r_{ft})} + \dfrac{(m_t - 1)\,\phi_t(m_1, \ldots, m_{t-1}, 0, \ldots, 0)}{(m_1 \cdots m_t)(1 + r_{f1}) \cdots (1 + r_{ft})}\Big] + \sum_{t \notin B(\vec m)} \dfrac{\phi_t(\vec m)}{(m_1 \cdots m_t)(1 + r_{f1}) \cdots (1 + r_{ft})}.$

Because $m_t \ge 1$ and $\phi_t(m_1, \ldots, m_{t-1}, 0, \ldots, 0) \ge 0$ for all $t \notin B(\vec m)$, we know that

(17)   $\sum_{t \notin B(\vec m)} \dfrac{(m_t - 1)\,\phi_t(m_1, \ldots, m_{t-1}, 0, \ldots, 0)}{(m_1 \cdots m_t)(1 + r_{f1}) \cdots (1 + r_{ft})} \ge 0.$

From (16) and (17) it follows that

(18)   $V(\vec m) \le \sum_{1\le t\le T} \Big[\dfrac{\phi_t(\vec m)}{(m_1 \cdots m_t)(1 + r_{f1}) \cdots (1 + r_{ft})} + \dfrac{(m_t - 1)\,\phi_t(m_1, \ldots, m_{t-1}, 0, \ldots, 0)}{(m_1 \cdots m_t)(1 + r_{f1}) \cdots (1 + r_{ft})}\Big].$

But the right-hand side of this expression must be nonpositive to deter the risk-neutral mimics (see expression (12)). It follows that any contract that is unattractive for the risk-neutral mimics is also unattractive for any risk-neutral skilled manager, no matter what excess returns sequence $\vec m \ge \vec 1$ he or she generates. Because this statement holds for every excess returns sequence, it also holds for any distribution over excess return sequences. This concludes the proof of Proposition 2.

VIII. ATTRACTING NEW MONEY

The preceding analysis shows that any compensation mechanism that rewards highly skilled portfolio managers can be gamed by mimics without delivering any value-added to investors. To achieve this, however, the mimic takes a calculated risk in each period that his fund will suffer a total loss. A manager who is concerned about building a long-term reputation may not want to take such risks; indeed he or she may make more money in the long run if his or her returns are lower and he or she stays in business longer, because this strategy will attract a steady inflow of new


money. However, although there is empirical evidence that past performance does affect the inflow of new money to some extent, the precise relationship between performance and flow is a matter of debate.12 Fortunately we can incorporate flow–performance relationships into our framework without committing ourselves to a specific model of how they work, and the previous results remain essentially unchanged.

To see why, consider a benchmark asset generating a returns series $\vec X$ and a manager who delivers excess returns $\vec M$ relative to $\vec X$. Let $Z_t = Z_t(m_1, \ldots, m_{t-1}; x_1, \ldots, x_{t-1})$ be a random variable that describes how much net new money flows into the fund at the start of period t as a function of the returns in prior periods. In keeping with our general setup, we assume that $Z_t$ is a multiplicative random variable; that is, its realization $z_t$ represents the proportion by which the fund grows (or shrinks) at the start of period t compared to the amount that was in the fund at the end of period $t - 1$. Thus, if a fund starts at size 1, its total value at the start of period t is

(19)   $Z_t \prod_{1\le s\le t-1} M_s X_s Z_s.$
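Equation (19) compounds returns and new-money flows multiplicatively. A short sketch (ours, with hypothetical sample values):

```python
def fund_value(m_seq, x_seq, z_seq):
    """Fund value at the start of period t = len(m_seq) + 1, per eq. (19):
    Z_t * prod_{s < t} M_s X_s Z_s, for a fund starting at size 1.
    z_seq has one more entry than m_seq (the flow Z_t at the start of t)."""
    value = z_seq[-1]  # Z_t, the new-money multiplier at the start of period t
    for m, x, z in zip(m_seq, x_seq, z_seq[:-1]):
        value *= m * x * z
    return value

# Two periods of 10% excess returns over a 4% benchmark, with 5% net
# inflows at the start of each period (all values hypothetical):
v = fund_value([1.10, 1.10], [1.04, 1.04], [1.05, 1.05, 1.05])
```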

Given any excess returns sequence $\vec m \ge \vec 1$ over T years, a mimic can reproduce it with probability $1/\prod_{1\le t\le T} m_t$ for all realizations of the benchmark returns. Because by hypothesis the flow of new money depends only on $\vec m$ and $\vec x$, it follows that the probability is at least $1/\prod_{1\le t\le T} m_t$ that the mimic will attract the same amount of new money into the fund as the skilled manager.

What patterns of returns attract the largest inflow of new money is an open question that we shall not attempt to address here. However, there is some evidence to suggest that investors are attracted to returns that are steady even though they are not spectacular. Consider, for example, a fund that grows at 1% per month year in and year out. (The recent Ponzi scheme of Bernard Madoff grew to some $50 billion by offering returns of about this magnitude.) This can be generated by a mimic who delivers a monthly return of 0.66% on top of a risk-free rate of 0.33%. The probability that such a fund will go under in any given year is $1 - (1.0066)^{-12} = 0.076$, or about 7.6%. In expectation, such a fund

12. See, for example, Gruber (1996); Massa, Goetzmann, and Rouwenhorst (1999); Chevalier and Ellison (1997); Sirri and Tufano (1998); Berk and Green (2004).


will stay in business and continue to attract new money for about thirteen years.

One could of course argue that portfolio managers might not want to take the risk involved in such schemes if they cared sufficiently about their reputations. Some managers might want to stay in business much longer than thirteen years; others might be averse to the damage that bankruptcy would do to their personal reputations or self-esteem. We do not deny that these considerations may serve as a deterrent for many people. But our argument only requires the existence of some people for whom the prospect of high expected earnings outweighs such concerns. The preceding results show that it is impossible to keep these types of managers out of the market without keeping everyone out.

IX. CONCLUSIONS

In this paper we have shown how mimicry can be used by portfolio managers to game performance fees. The framework allows us to estimate how much a mimic can earn under different incentive structures; it also shows that commonly advocated reforms of the incentive structure cannot be relied upon to screen out unskilled risk-neutral managers who do not deliver excess returns to investors.

The analysis is somewhat unconventional from a game-theoretic standpoint, because we did not identify the set of players, their utility functions, or their strategy spaces. The reason is that we do not know how to specify any of these components with precision. To write down the players' utility functions, for example, we would need to know their discount factors and degrees of risk aversion, and we would also need to know how their track records generate inflows of new money. Although it might be possible to characterize the equilibria of a fully specified game among investors and managers of different skill levels, this poses quite a challenge that would take us well beyond the framework of the present paper.
The advantage of the mimicry argument is that we can draw inferences about the relationship between different players’ earnings without knowing the details of their payoff functions or how their track records attract new money. The argument is that, if someone is producing returns that earn large fees in expectation, then someone else (with no skill) can mimic the first type and also earn large fees in expectation without knowing anything about how the first type is actually doing it. In this paper we have shown how to apply this idea to financial markets.


We conjecture that it may prove useful in other situations where there are many players, the game is complex, and the equilibria are difficult to pin down precisely.

APPENDIX

Here we shall show explicitly how to implement the mimicking strategy that was described informally in the text, using puts and calls. We shall consider two situations: (i) the benchmark asset is risk-free, such as U.S. Treasury bills; (ii) the benchmark asset is a market index, such as the S&P 500. We shall call the first case the risk-free model and the second case the market index model. As is customary in the finance literature, we shall assume that the price of the market index evolves in continuous time τ according to a stochastic process of the form

(20)   $dP_\tau = \mu P_\tau\, d\tau + \sigma P_\tau\, dW_\tau;$

that is, $P_\tau$ is a geometric Brownian motion with mean μ and variance $\sigma^2$. The reporting of results is done at discrete time periods, such as the end of a month or a quarter. Let $t = 1, 2, 3, \ldots$ denote these periods, and let $r_{ft}$ denote the risk-free rate during period t. Similarly, let $\tilde r_{ft}$ denote the continuous-time risk-free rate during period t, which we shall assume is constant during the period and satisfies $\tilde r_{ft} \le \mu$. Without loss of generality, we may assume that each period is of length one, in which case $e^{\tilde r_{ft}} = 1 + r_{ft}$. Mimicking strategies will be implemented using puts and calls on the market index, whose prices are determined by the Black–Scholes formula (see, for example, Hull [2009]).

LEMMA. Consider any target sequence of excess returns $\vec m = (m_1, \ldots, m_T) \ge (1, 1, \ldots, 1)$ relative to a benchmark asset, which can be either risk-free or a market index. A mimic has a strategy $\vec M^0(\vec m)$ that, for every realized sequence of returns $\vec x$ of the benchmark asset, generates the returns sequence $(m_1 x_1, \ldots, m_T x_T)$ with probability at least $1/\prod_{1\le t\le T} m_t$.

Proof. The options with which one implements the strategy depend on whether the benchmark asset is risk-free or the market index. We shall treat the risk-free case first. Fix a target sequence of excess returns $\vec m = (m_1, \ldots, m_T) \ge (1, 1, \ldots, 1)$. We need to show that the mimic has a strategy that


in each period t delivers the return $m_t(1 + r_{ft})$ with probability at least $1/m_t$. At the start of period t, the mimic invests everything in the risk-free asset (e.g., U.S. Treasury bills). He then writes (or shorts) a certain quantity q of cash-or-nothing puts that expire before the end of the period. Each such option pays one dollar if the market index is below the strike price at the time of expiration. Let Δ be the length of time to expiration and let s be the strike price divided by the index's current price; without loss of generality, we may assume that the current price is 1. Let Φ denote the cumulative normal distribution function. Then (see Hull [2009, Section 24.7]) the option's present value is $e^{-\tilde r_{ft}\Delta} v$, where

(21)   $v = \Phi\big[\big(\ln s - (\tilde r_{ft} - \sigma^2/2)\Delta\big)\big/\big(\sigma\sqrt{\Delta}\big)\big],$

and the probability that the put will be exercised is

(22)   $p = \Phi\big[\big(\ln s - (\mu - \sigma^2/2)\Delta\big)\big/\big(\sigma\sqrt{\Delta}\big)\big].$
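The construction can be checked numerically. Below is a sketch (ours) of formulas (21)–(22) under the stated lognormal model; Φ is implemented via the error function, and the sample parameter values are hypothetical.

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def cash_or_nothing_put(s, delta, r, mu, sigma):
    """Risk-neutral exercise probability v (eq. 21) and true exercise
    probability p (eq. 22) for a cash-or-nothing put with relative
    strike s and time to expiration delta."""
    v = norm_cdf((math.log(s) - (r - sigma ** 2 / 2) * delta) / (sigma * math.sqrt(delta)))
    p = norm_cdf((math.log(s) - (mu - sigma ** 2 / 2) * delta) / (sigma * math.sqrt(delta)))
    return v, p

# With mu >= r, the true exercise probability never exceeds v,
# which is the p <= v claim used in the proof:
v, p = cash_or_nothing_put(s=0.95, delta=0.25, r=0.04, mu=0.08, sigma=0.2)
assert p <= v
```

Calibrating $v = 1 - 1/m_t$ then amounts to choosing the pair (s, Δ) accordingly; since v is increasing in s, a root-finding step over s suffices.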

Assume that the value of the fund at the start of the period is w dollars. By selling q options, the mimic collects an additional $e^{-\tilde r_{ft}\Delta} v q$ dollars. By investing everything (including the proceeds from the options) in the risk-free asset, he or she can cover up to q options when they are exercised, provided that $e^{\tilde r_{ft}\Delta} w + vq \ge q$. Thus the maximum number of covered options the mimic can write is $q = w e^{\tilde r_{ft}\Delta}/(1 - v)$. The mimic chooses the time to expiration Δ and the strike price s so that v satisfies $v = 1 - 1/m_t$. With probability p the options are exercised and the fund is entirely cleaned out (i.e., paid to the option-holders). With probability $1 - p$ the options expire without being exercised, in which case the fund has grown by the factor $m_t e^{\tilde r_{ft}\Delta}$ over the time interval Δ. The mimic enters into this gamble only once per period, and the funds are invested in the risk-free asset during the remaining time. Hence the total return during the period is $m_t(1 + r_{ft})$ with probability $1 - p$ and zero with probability p. We claim that $p \le v$; indeed this follows immediately from (21) and (22) and the assumption that $\tilde r_{ft} \le \mu$. Therefore, if the mimic had $w_{t-1} > 0$ dollars in the fund at the start of period t, then by the end of the period he would have $m_t(1 + r_{ft})\, w_{t-1}$ dollars with probability at least $1/m_t = 1 - v$ and zero dollars with probability at most $1 - 1/m_t$. Therefore after T periods he would


QUARTERLY JOURNAL OF ECONOMICS

have generated the target sequence of excess returns (m_1, . . . , m_T) with probability at least 1/∏_{1≤t≤T} m_t, as asserted in the Lemma.

Next we consider the case where the benchmark asset is the market index. The basic idea is the same as before, except that in this case the mimic invests everything in the market index (rather than in Treasury bills), and he or she shorts asset-or-nothing options rather than cash-or-nothing options. (An asset-or-nothing option pays the holder one share if the market index closes above the strike price in the case of a call, or below it in the case of a put; otherwise the payout is zero.) As before, the mimic shorts the maximum number of options that he or she can cover, where the strike price and time to expiration are chosen so that the probability that they are exercised is at most 1 − 1/m_t. With probability at least 1/m_t, this strategy increases the number of shares of the market index held in the fund by a factor m_t. Hence, with probability at least 1/m_t, it delivers a total return equal to m_t(P_t/P_{t−1}) = m_t x_t for every realization of the market index.

It remains to be shown that the strike price s and time to expiration Δ can be chosen so that the preceding conditions are satisfied. There are two cases to consider: μ − r̃_ft ≥ σ² and μ − r̃_ft < σ². In the first case the mimic shorts an asset-or-nothing put, whose present value is

(23)    v = Φ( [ln s − (r̃_ft + σ²/2)Δ] / (σ√Δ) )

and whose probability of being exercised is

(24)    p = Φ( [ln s − (μ − σ²/2)Δ] / (σ√Δ) ).

(See Hull [2009, Section 24.7].) From our assumption that μ − r̃_ft ≥ σ², it follows that p ≤ v, which is the desired conclusion. If, on the other hand, μ − r̃_ft < σ², then the mimic shorts asset-or-nothing calls instead of asset-or-nothing puts. In this case the analog of formulas (23) and (24) ensures that p ≤ v (Hull [2009, Section 24.7]). Given any target sequence (m_1, m_2, . . . , m_T) ≥ (1, 1, . . . , 1), this strategy produces returns (m_1 x_1, . . . , m_T x_T) with probability at least 1/∏_{1≤t≤T} m_t for every realization x of the benchmark asset. This concludes the proof of the Lemma.

We remark that the probability bound 1/∏_{1≤t≤T} m_t is conservative. Indeed, the proof shows that the probability that the options are exercised may be strictly less than is required for the

GAMING PERFORMANCE FEES BY PORTFOLIO MANAGERS


conclusion to hold. Furthermore, in practice, the pricing formulas are not completely accurate for out-of-the-money options, which tend to be overvalued (the so-called "volatility smile"). This implies that the seller can realize an even larger premium for a given level of risk than is implied by the Black–Scholes formula.

Of course, there will be some transaction costs in executing these strategies, and these will work in the opposite direction. Although it is beyond the scope of this paper to try to estimate such costs, the fact that the mimicking strategy requires only one trade per period in a standard market instrument suggests that these costs will be very low. In any event, it is easy to modify the argument to take such costs into account. Suppose that the cost of taking an options position is some fraction ε of the option's payoff. In order to inflate the fund's return in a given period t by the factor m_t ≥ 1 after transaction costs, one would have to inflate the return by the factor m′_t = m_t/(1 − ε) before transaction costs. To illustrate: the transaction cost for an out-of-the-money option on the S&P 500 is typically less than 2% of the option price. Assuming the exercise probability is around 10%, the payout, if it is exercised, will be about ten times as large as the price, so ε will be about 0.2% of the option's payoff. Thus, in this case, the mimicking strategy would achieve a given target m_t net of costs with probability 0.998/m_t instead of with probability 1/m_t.

WHARTON SCHOOL, UNIVERSITY OF PENNSYLVANIA

UNIVERSITY OF OXFORD AND THE BROOKINGS INSTITUTION

REFERENCES

Alcock, John, Animal Behavior: An Evolutionary Approach, 8th ed. (Sunderland, MA: Sinauer Associates, 2005).
Bebchuk, Lucian A., and Jesse Fried, Pay without Performance (Cambridge, MA: Harvard University Press, 2004).
Bebchuk, Lucian A., and Holger Spamann, "Regulating Bankers' Pay," Harvard Law and Economics Discussion Paper No. 641, 2009.
Berk, Jonathan B., and Richard C. Green, "Mutual Fund Flows and Performance in Rational Markets," Journal of Political Economy, 112 (2004), 1269–1295.
Carpenter, Jennifer N., "Does Option Compensation Increase Managerial Risk Appetite?" Journal of Finance, 55 (2000), 2311–2331.
Chevalier, Judith, and Glenn Ellison, "Risk Taking by Mutual Funds as a Response to Incentives," Journal of Political Economy, 105 (1997), 1167–1200.
Ferson, Wayne E., and Andrew F. Siegel, "The Efficient Use of Conditioning Information in Portfolios," Journal of Finance, 56 (2001), 967–982.
Foster, Dean P., Robert Stine, and H. Peyton Young, "A Martingale Test for Alpha," Wharton Financial Institutions Center Working Paper 08-041, 2008.
Gale, David, "A Curious Nim-Type Game," American Mathematical Monthly, 81 (1974), 876–879.
Goetzmann, William, Jonathan Ingersoll, Matthew Spiegel, and Ivo Welch, "Portfolio Performance Manipulation and Manipulation-Proof Performance Measures," Review of Financial Studies, 20 (2007), 1503–1546.
Gruber, Martin J., "Another Puzzle: The Growth in Actively Managed Mutual Funds," Journal of Finance, 51 (1996), 783–810.
Harrison, J. Michael, and David M. Kreps, "Martingales and Arbitrage in Multiperiod Securities Markets," Journal of Economic Theory, 20 (1979), 381–408.
Hasanhodzic, Jasmina, and Andrew W. Lo, "Can Hedge-Fund Returns Be Replicated? The Linear Case," Journal of Investment Management, 5 (2007), 5–45.
Hodder, James E., and Jens C. Jackwerth, "Incentive Contracts and Hedge Fund Management," Journal of Financial and Quantitative Analysis, 42 (2007), 811–826.
Hull, John C., Options, Futures and Other Derivatives, 7th ed. (New York: Prentice Hall, 2009).
Kat, Harry M., and Helder P. Palaro, "Who Needs Hedge Funds: A Copula-Based Approach to Hedge Fund Return Replication," Cass Business School, City University London, 2005.
Lehrer, Ehud, "Any Inspection Rule Is Manipulable," Econometrica, 69 (2001), 1333–1347.
Lhabitant, Francois-Serge, "Derivatives in Portfolio Management: Why Beating the Market Is Easy," Derivatives Quarterly, 6 (2000), 39–45.
Lloyd, James E., "Aggressive Mimicry in Photuris Fireflies: Signal Repertoires by Femmes Fatales," Science, 187 (1974), 452–453.
Lo, Andrew W., "Risk Management for Hedge Funds: Introduction and Overview," Financial Analysts Journal, 2001, 16–33.
Massa, Massimo, William N. Goetzmann, and K. Geert Rouwenhorst, "Behavioral Factors in Mutual Fund Inflows," Yale ICF Working Paper 00-14, 1999.
Morningstar (http://corporate.morningstar.com, 2006).
Olszewski, Wojciech, and Alvaro Sandroni, "Manipulability and Future-Independent Tests," Econometrica, 76 (2008), 1437–1480.
Sandroni, Alvaro, "The Reproducible Properties of Correct Forecasts," International Journal of Game Theory, 32 (2003), 151–159.
Sandroni, Alvaro, Rann Smorodinsky, and Rakesh Vohra, "Calibration with Many Checking Rules," Mathematics of Operations Research, 28 (2003), 141–153.
Sirri, Erik R., and Peter Tufano, "Costly Search and Mutual Fund Inflows," Journal of Finance, 53 (1998), 1589–1622.
Spiegler, Ran, "The Market for Quacks," Review of Economic Studies, 73 (2006), 1113–1131.
Starks, Laura T., "Performance Incentive Fees: An Agency Theoretic Approach," Journal of Financial and Quantitative Analysis, 22 (1987), 17–32.

DOES TERRORISM WORK?∗

ERIC D. GOULD AND ESTEBAN F. KLOR

This paper examines whether terrorism is an effective tool for achieving political goals. By exploiting geographic variation in terror attacks in Israel from 1988 to 2006, we show that local terror attacks cause Israelis to be more willing to grant territorial concessions to the Palestinians. These effects are stronger for demographic groups that are traditionally right-wing in their political views. However, terror attacks beyond a certain threshold cause Israelis to adopt a less accommodating position. In addition, terror induces Israelis to vote increasingly for right-wing parties, as the right-wing parties move to the left in response to terror. Hence, terrorism appears to be an effective strategy in terms of shifting the entire political landscape to the left, although we do not assess whether it is more effective than non-violent means.

I. INTRODUCTION

Terrorism is one of the most important, and yet complex, issues facing a large number of countries throughout the world. In recent years, several papers have analyzed the underlying causes and consequences of terrorism, as well as the strategies used by terror organizations in the pursuit of their goals.1 However, very little attention has been given to the question of whether terrorism works or not with respect to coercing the targeted country to grant political and/or territorial concessions. The lack of research on this subject is surprising, given that the answer to this question is critical to understanding why terror exists at all, and why it appears to be increasing over time in many parts of the world. This paper is the first to analyze whether terrorism is an effective strategy using a large sample of micro data and paying

∗ We thank Robert Barro, Larry Katz, Omer Moav, Daniele Paserman, and the anonymous referees for helpful comments and suggestions. We also benefited from the comments of audiences at Boston University, Tel Aviv University, the University of Chicago, the NBER 2009 Summer Institute (Economics of National Security Group), the conference on "The Political Economy of Terrorism and Insurgency" at UC San Diego, and the 2010 meetings of the American Economic Association. Noam Michelson provided expert research assistance. Esteban Klor thanks the NBER and Boston University for their warm hospitality while he was working on this project. The authors thank the Maurice Falk Institute for Economic Research and the Israeli Science Foundation for financial support.
1. For the causes of terrorism, see Krueger and Maleckova (2003), Li (2005), Abadie (2006), Berrebi (2007), Krueger and Laitin (2008), and Piazza (2008). For the consequences of terrorism, see the recent surveys by Enders and Sandler (2006) and Krueger (2007), as well as Becker and Rubinstein (2008) and Gould and Stecklov (2009), among many others. For the strategies of terrorist groups, see Kydd and Walter (2002, 2006), Berman and Laitin (2005, 2008), Bloom (2005), Bueno de Mesquita (2005b), Berrebi and Klor (2006), Benmelech and Berrebi (2007), Bueno de Mesquita and Dickson (2007), Rohner and Frey (2007), Baliga and Sjöström (2009), and Benmelech, Berrebi, and Klor (2009).

© 2010 by the President and Fellows of Harvard College and the Massachusetts Institute of Technology. The Quarterly Journal of Economics, November 2010


particular attention to establishing causality.2 To do this, we exploit variation in a large number of terror attacks over time and across locations in Israel from 1988 to 2006, and examine whether local terror attacks cause Israeli citizens to become more willing to grant territorial concessions to the Palestinians. In addition, we examine whether terror attacks cause Israelis to change their preferences over political parties, attitudes toward establishing a Palestinian state, and whether or not they define themselves as being "right-wing."3 Our results indicate that terror attacks have pushed Israelis leftward in their political opinions by making them more likely to support granting concessions to the Palestinians. As a result, this paper presents the first comprehensive analysis showing that terrorism can be an effective strategy. However, our findings also indicate that terrorism beyond a certain threshold can backfire on the political goals of terrorist factions, by reducing the targeted population's willingness to make political and/or territorial concessions.

As stated above, the existing evidence on the effectiveness of terrorism is sparse. In the political science literature, there are currently two opposing views on this issue. The first one claims that terrorism is rising around the world simply because it works. Most notably, Pape (2003, 2005) claims that terrorists achieved "significant policy changes" in six of the eleven terrorist campaigns that he analyzed. In addition, Pape (2003, 2005) argues that terrorism is particularly effective against democracies because the electorate typically is highly sensitive to civilian casualties from terrorist acts, which induces its leaders to grant concessions to terrorist factions. Authoritarian regimes, in contrast, are responsive only to the preferences of the ruling elite, and therefore are less likely to accede to terrorist demands in response to civilian casualties.4

2. As Abrahms (2007) points out, the effectiveness of terrorism can be measured in terms of its "combat effectiveness" and its "strategic effectiveness." The former refers to the amount of physical damage and the number of human casualties resulting from terror activity, whereas the latter refers to whether terror is able to achieve political goals. The focus of our research is to assess the "strategic effectiveness" of terror.
3. The main policy difference between the "left-wing" and "right-wing" in the Israeli context is related to the conflict with the Palestinians, and their attitudes toward the Arab world, rather than to social and economic issues.
4. To provide empirical support for this theory, Pape (2003, 2005) shows that democracies are disproportionately more likely to be the victim of an international suicide terror attack. Karol and Miguel (2007) provide empirical support for voters' sensitivity to casualties by showing that American casualties in Iraq caused George Bush to receive significantly fewer votes in several key states in the 2004 elections. Weinberg and Eubank (1994) and Eubank and Weinberg (2001) also show that democracies are more likely to host terror groups and be the target of terror attacks. They claim that terrorism is particularly effective against democracies due to constitutional constraints that limit policies of retaliation against terrorism in these types of regimes.

The opposing view argues not only that there is very little evidence that terrorism is effective (Abrahms 2006), but that in fact terror is not an effective tool.5 Abrahms (2006) examined 28 terrorist groups, and argues that terrorists achieved their political goals only 7% of the time (in contrast to the more than 50% success rate reported in Pape [2003] with a different sample). Moreover, he argues that terrorism against democracies is ineffective because democracies are more effective in counterterrorism. In support of this claim, Abrahms (2007) presents evidence that democracies are less likely to be the target of terror activities than autocratic regimes, and that democracies are less likely to make territorial or ideological concessions. Using the Worldwide Incidents Tracking System database of international and domestic terror incidents from 2004 to midway through 2005, Abrahms (2007) shows that terror incidents decline with the level of a country's "freedom index," and that the freedom index is uncorrelated with the level of casualties from terror. In particular, Abrahms (2007) shows that, among the ten countries with the highest numbers of terror casualties, only two are "free countries" (India and the Philippines), whereas the rest are "not free" (Iraq, Afghanistan, Russia, and Pakistan) or "partially free" (Nigeria, Nepal, Colombia, and Uganda).6 This evidence leads Abrahms (2007) to conclude that terrorism is not an effective strategy against democratic countries. Thus, a summary of the literature reveals that very few studies have even attempted to analyze the strategic effectiveness of terrorism, and there is little agreement among those that have. Hence, whether terror is effective is not only important in terms of understanding why terrorism exists, it is still very much an open question in terms of the evidence.

5. Abrahms (2007) criticizes the analysis in Pape (2003) for being based on very few countries. Out of the eleven terrorist campaigns that Pape (2003) analyzed, six were based in Israel, whereas four of the remaining five were in Turkey or Sri Lanka.
6. The evidence in Abrahms (2007) notwithstanding, the findings of Abadie (2006), Blomberg and Hess (2008), and Krueger and Laitin (2008) suggest that political reforms toward more democratic forms of government are associated with an increase in terrorist activity. Jackson Wade and Reiter (2007) dispute this claim.

Furthermore, as described above, the existing evidence is based on analyzing a small sample of countries and making assessments about the success of
terror campaigns against them (Pape 2003, 2005; Abrahms 2006, 2007). However, comparisons across countries are problematic for a number of reasons. First, it is difficult to control for all the factors that may be correlated with the level of terrorism, political stability, level of freedom, etc. All of these factors are most likely to be endogenously determined, and jointly influenced by geography, colonial history, ethnic composition, and religious affiliation. Second, terrorist groups may be emerging endogenously in certain countries according to the success rate of other strategies, and according to the expected success rate of terrorist strategies (Iyengar and Monten 2008). In addition, one cannot ignore the fact that most of the countries (listed above) that suffer high levels of terror are near each other geographically and share similar characteristics in terms of long-standing border conflicts intermixed with ethnic and religious tensions. Controlling for these factors is difficult to do in a cross section of countries, making it problematic to infer causality from the existing evidence. Finally, it is often difficult to assess whether terror is effective when the goals of the terrorists are not even clear. For example, it is not easy to define the political goals of the September 11 attacks (Byman 2003). Therefore, it is nearly impossible to apply a standard definition of “success” in comparing terrorist groups across different countries. In this paper, we overcome the empirical obstacles mentioned above by focusing on the Israeli–Palestinian conflict and using individual-level data on the political attitudes of Israelis toward making concessions to the Palestinians. Our focus on one conflict allows us to abstract from the empirical difficulties associated with controlling for all the factors that could be influencing the presence and effectiveness of terror across countries. 
In addition, restricting our analysis to one conflict enables us to avoid the difficult task of trying to create objective and consistent measures for whether terror seems to be effective across different conflicts, which are often not comparable to one another.7

7. Terror factions are intricate and multifaceted organizations. There is a growing consensus that terror organizations strategically choose their targets and their operatives (Berman and Laitin 2005, 2008; Bueno de Mesquita 2005b; Benmelech and Berrebi 2007; Benmelech, Berrebi, and Klor 2009). The main goals behind terror campaigns, however, are not always clear or well defined, and seem to differ not only across conflicts, but even over time for any given terror organization. Alternative goals notwithstanding, the main objective of the majority of terror campaigns is to impose costs on the targeted population to pressure a government into granting political and/or territorial concessions. A large number of articles can be cited in support of this claim. For formal theoretical models, see Lapan and Sandler (1993), Bueno de Mesquita (2005a), and Berrebi and Klor (2006).


Using repeated cross sections of Jewish Israelis (not including those in the West Bank or Gaza Strip) from 1988 to 2006, we control for subdistrict fixed effects and aggregate year effects, and test whether variation in the level of terror across subdistricts over time can explain variation across subdistricts over time in political attitudes. We pay particular attention to distinguishing between political attitudes and party preferences, which is important because the platforms of parties could be endogenously changing in response to terror. Our results show that terrorism significantly affects the preferences and attitudes of Jewish Israelis. In particular, local terror attacks induce the local population to exhibit a higher willingness to grant territorial concessions. However, the effects of terrorism are nonlinear—terror makes Israelis more accommodating up to a certain point, but beyond this threshold, more terror attacks harden the stance of Israelis toward making concessions. That said, the level of terror fatalities rarely reaches the critical threshold in any given locality. Out of 102 subdistrict–year combinations in our data set, there are only seven cases where the marginal effect was negative (Jerusalem in 1996 and 2003 and Afula, Hadera, Sharon, Tel Aviv, and Zefat in 2003), and only one case (Jerusalem in 2003) where the estimated total effect is negative. As a result, the total effect of terror on the preferences of the Israeli population is almost always toward moderation. Hence, terror attacks appear to be strategically effective in coercing Israelis to support territorial concessions. At the same time, our analysis shows that terror increases the likelihood that voters support right-wing parties (similar to that of Berrebi and Klor [2008]). This result does not contradict our finding that terror causes moderation. 
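As a hypothetical sketch of this design, the following simulates survey data and fits a linear probability model with subdistrict and year fixed effects, letting terror enter with a quadratic term so that the implied effect can turn negative past a threshold. Every variable name and magnitude here is our own invention for illustration; this is not the paper's data or estimation code.

```python
import numpy as np

# simulated data: 18 subdistricts, 6 election years, as in the paper's setting
rng = np.random.default_rng(0)
n = 2000
subdistrict = rng.integers(0, 18, n)
year = rng.integers(0, 6, n)
terror = rng.exponential(1.0, n)  # simulated local terror fatalities per capita

# simulated binary outcome: support rises with terror, then falls (inverted U)
latent = 0.40 + 0.10 * terror - 0.02 * terror**2 + rng.normal(0, 0.3, n)
concede = (latent > 0.5).astype(float)

# design matrix: intercept, terror, terror^2, fixed-effect dummies (one dropped
# per set to avoid collinearity with the intercept)
X = np.column_stack(
    [np.ones(n), terror, terror**2]
    + [(subdistrict == j).astype(float) for j in range(1, 18)]
    + [(year == t).astype(float) for t in range(1, 6)]
)
beta, *_ = np.linalg.lstsq(X, concede, rcond=None)

# turning point of the quadratic: the terror level beyond which the estimated
# marginal effect on willingness to concede becomes negative
threshold = -beta[1] / (2 * beta[2])
```

The nonlinearity is what lets the model capture the paper's finding that moderation reverses once local fatalities pass a critical level; in their data that threshold is rarely reached.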
The evidence suggests that terrorism brought about a leftward shift of the entire political map in Israel over the last twenty years, including the positions of right-wing parties that are traditionally less willing to grant territorial concessions to the Palestinians. This finding highlights how critical it is to distinguish between the effects of terror on political attitudes and its effects on party preferences, because the platforms of parties are moving endogenously in response to terrorism. Therefore, our overall results show that terrorism has been an effective tool by shifting the entire political landscape toward a more accommodating position. Although we cannot determine whether terrorism is an optimal strategy, these findings suggest that terrorism may be increasing over time and spreading to other regions precisely because


it appears to be a successful strategy for achieving political goals.

II. THE DATA

II.A. Data on the Political Attitudes of Israeli Citizens

Our analysis uses data on the political attitudes of Jewish Israeli citizens (who do not reside in Gaza and the West Bank) along with data on the occurrences of terror attacks.8 The data on the attitudes of Israeli citizens come from the Israel National Election Studies (INES), which consist of surveys carried out before every parliamentary election in Israel since 1969.9 These surveys are based on a representative sample of Israeli voters, and focus on a wide array of substantive issues affecting Israeli society. For example, the surveys include questions about economic and political values, trust in government, social welfare, and the desired relationship between state and religion. In addition, there are several questions regarding the political preferences of the respondent and his or her policy position regarding the Israeli–Palestinian conflict. Because our goal is to understand changes over time in the political opinions of Israelis, our analysis focuses on the questions that appear repeatedly across surveys for the six parliamentary elections from 1988 until 2006. These include questions regarding which party the voter is supporting in the upcoming elections and his or her self-described political tendency (from right wing to left wing). In addition, the survey asks whether the respondent favors granting territorial concessions to the Palestinians as part of a peace agreement, and whether Israel should agree to the establishment of a Palestinian state. The surveys also contain a rich set of demographic information, such as gender, age, education level, ethnicity, immigrant status, monthly expenditures, and notably, the location of residence for each respondent.

8. We omit Arab Israelis and Jewish Israeli citizens residing in Gaza and the West Bank because these populations were not consistently included in the surveys. The survey includes Arab Israelis only starting from 1996 and Jewish settlers only since 1999.
9. The INES questionnaires and data are available online at the INES website (www.ines.tau.ac.il). See Arian and Shamir (2008) for the latest edited volume of studies based on the INES data.

This geographic information is particularly important for our identification strategy because we do not


want to rely on aggregate time trends to identify the causal effect of terror on political attitudes. Instead, we control for aggregate time trends and exploit the geographic variation in terror attacks across eighteen different subdistricts to explain the changes in political attitudes across locations. Table I presents summary statistics for the attitudes of Jewish Israeli citizens, computed separately for each sample year. The main variable of interest refers to the respondent’s willingness to make territorial concessions to the Palestinians. This question appears in every survey, though not in the same format. In the surveys from 1988 and 1992, individuals were asked to consider three options regarding the long-term solution for the West Bank and Gaza Strip. We coded the person as being willing to make concessions if he or she chose the option: “In return for a peace agreement, a return of most of Judea, Samaria and the Gaza Strip.”10 In the surveys from 1996 and 1999, individuals were asked to rank from one to seven how much they agree (one refers to “strongly disagree” and seven refers to “strongly agree”) with the statement “Israel should give back territories to the Palestinians for peace.” We coded individuals as being willing to make concessions if they chose five or above. Finally, in 2003 and 2006, individuals were given four options regarding their opinion on “To what extent do you agree or disagree to exchange territories for peace?” The four options were strongly agree, agree, disagree, and strongly disagree. We coded individuals as being willing to make concessions if they responded with “agree” or “strongly agree.” This variable is our main variable of interest because it unequivocally captures the respondent’s willingness to grant territorial concessions to the Palestinians, which is consistent with the goals of the terrorist factions. 
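The wave-by-wave coding just described can be summarized in a small helper. This is our own sketch: the answer encodings are hypothetical stand-ins, not the actual INES codebook, and the treatment of the seven-point scale's midpoint is exposed as an explicit robustness choice.

```python
def willing_to_concede(year, answer, midpoint_as_willing=False):
    """Harmonize the territorial-concessions question across survey formats."""
    if year in (1988, 1992):
        # three-option format: willing iff the "return most territories" option
        # was chosen (string is a hypothetical encoding of that option)
        return answer == "return most of the territories for peace"
    if year in (1996, 1999):
        # seven-point agreement scale: willing iff 5 or above; a response of
        # "4" is ambiguous, so its treatment is a robustness choice
        return answer >= 5 or (midpoint_as_willing and answer == 4)
    if year in (2003, 2006):
        # four-option format: willing iff "agree" or "strongly agree"
        return answer in ("agree", "strongly agree")
    raise ValueError(f"no survey format defined for {year}")

assert willing_to_concede(1996, 5)
assert not willing_to_concede(2003, "disagree")
```

Centralizing the mapping this way makes it easy to re-run an analysis under both codings of the ambiguous midpoint, which is the kind of robustness check the authors report.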
For example, Abdel Karim Aweis, a leader of the Al Aksa Martyrs Brigades (one of the factions linked to the Fatah movement), asserted in an interview with the New York Times that the "goal of his group was to increase losses in Israel to a point at which the Israeli public would demand a withdrawal from the West Bank and Gaza Strip" (Greenberg 2002). Table I shows an upward trend over time in the willingness of Israelis to make concessions—from 39% in 1988 to 57% in 2006. However, because there were changes in the structure

10. The other two options available in the survey are "Annexation of Judea, Samaria and the Gaza Strip" and "Status quo, keeping the present situation as it is."

TABLE I
ATTITUDES TOWARD THE CONFLICT, SUPPORT FOR DIFFERENT POLITICAL PARTIES, AND TERROR FATALITIES, BY YEAR

                                                 1988     1992     1996     1999     2003     2006
Political attitudes
  Agree to territorial concessions              0.395    0.498    0.428    0.502    0.550    0.568
    to the Palestinians                        (0.489)  (0.500)  (0.495)  (0.500)  (0.498)  (0.495)
  Agree to the establishment of a               0.260    0.290    0.482    0.554    0.486    0.668
    Palestinian state in the territories       (0.439)  (0.454)  (0.500)  (0.497)  (0.500)  (0.471)
    as part of a peace settlement
  Right-wing political tendency                 0.504    0.427    0.393    0.389    0.517    0.413
                                               (0.500)  (0.495)  (0.489)  (0.488)  (0.500)  (0.493)
  Vote for right bloc of political parties      0.529    0.443    0.438    0.380    0.463    0.328
                                               (0.499)  (0.497)  (0.496)  (0.486)  (0.499)  (0.469)
N                                                873     1192     1168     1060     1058     1505
Terror fatalities
  Number of terror fatalities since               25       78      141       44      408      198
    previous elections
  Number of terror fatalities within               6       11       71        2      275       19
    a year of elections
  Number of terror fatalities per capita        0.0063   0.0179   0.0290   0.0077   0.0626   0.0294
    since previous elections (per 1,000)
  Number of terror fatalities per capita        0.0015   0.0025   0.0146   0.0004   0.0422   0.0028
    within a year of elections (per 1,000)

Notes: Entries in the first four rows of the table represent the averages of the respective variables for each survey; standard deviations appear in parentheses. The number of observations refers to the total number of Israeli Jewish individuals (who do not reside in Gaza or the West Bank) interviewed in each survey. The exact number of observations for each variable varies slightly because not all respondents answered each question. Source: Israel National Election Studies (INES). The last four rows report the number of fatalities from terror attacks, and the number of fatalities per capita (per 1,000 individuals) from terror attacks. Source: B'tselem.


of the question over time, we employ several strategies to show that our results do not come from those changes. First, all of the regressions control for year effects, which should neutralize aggregate year-specific differences in how individuals interpreted the question. Second, because the major change to the wording occurred between 1992 and 1996, we show that the results are virtually identical when all of the surveys are used (1988–2006), or when the analysis is restricted to periods when there was very little change in the question (1996–2006) or no change at all (2003–2006). Third, it is not entirely clear whether those who responded with a "four" on the seven-point scale in 1996 and 1999 should be considered willing to make concessions or not. Therefore, we show that the results are very similar if we code them as being willing to make concessions or unwilling to make concessions.

Table I also shows the evolution over time of the other variables used to measure the reaction of Israelis to terror attacks. One measure is the person's willingness to agree to the establishment of a Palestinian state in the territories as part of a peace settlement. This question included four options (strongly agree, agree, disagree, and strongly disagree) regarding the person's willingness to "establish a Palestinian state as part of a permanent solution to the conflict." We divided the sample into two groups by coding together individuals who agree or strongly agree with the creation of a Palestinian state, versus individuals who disagree or strongly disagree with this position. Table I shows that the proportion of individuals who agree or strongly agree with the creation of a Palestinian state increases from 0.26 in 1988 to 0.67 in 2006.11

The third variable in Table I refers to the respondent's self-classification across the left–right political spectrum.
If the respondent defined himself or herself as being on the “right” or “moderate right” end of the spectrum, then he or she was coded as identifying with a right-wing political tendency.12 Table I depicts a generally downward trend in the percentage of self-described “right-wingers” between 1988 and 2006, although there was a short-lived increase from 1999 to 2003.

Finally, our last outcome measure depicts whether the individual intends to vote for a party belonging to the “right-wing bloc” in the upcoming parliamentary elections. The surveys ask every individual: “If the elections for the Knesset (Israeli parliament) were held today, for which party would you vote?” We assign parties to the right bloc following the categorization developed by Shamir and Arian (1999). According to their definition, the right bloc of parties includes the Likud party, all of the religious parties, all of the nationalist parties (Tzomet, Moledet, National Union), and parties identified with Russian immigrants. We focus on the right bloc instead of on separate parties because the number of small parties and the electoral system were not constant across election periods.13 Table I shows that support for the right bloc fluctuates over time in a fashion similar to the self-described right-wing political tendency. We observe a steady decrease in support for the right bloc between 1988 and 1999, followed by an increase in 2003 and a sharp decrease in 2006 (due to the appearance of Kadima, a new centrist party, in those elections).

Table II depicts the political attitudes of respondents tabulated by their demographic characteristics. The table shows that (i) men and women share similar political preferences; (ii) the willingness to make concessions (and other left-leaning views) increases with age, with education, and with the degree of being secular (versus religious); (iii) individuals with an Asian–African family background (Sephardic Jews) are more likely to oppose concessions and to support parties in the right bloc; and (iv) there are no clear differences between the attitudes of immigrants and those of native-born Israelis.

11. One possible caveat of this question is that the survey does not provide a clear definition of “territories.” As a consequence, respondents may interpret territories as areas already under the control of the Palestinian Authority. If that is the case, for these respondents the creation of a Palestinian state does not really entail any further territorial concessions. In our sample, 25% of the individuals who agree to the establishment of a Palestinian state do not agree to further territorial concessions; they compose 12% of the entire sample.
12. The exact wording of the question is, “With which political tendency do you identify?” It included seven possible answers: left, moderate left, center, moderate right, right, religious, and none of them. We classified an individual as identifying with the right-wing political tendency if the individual’s answer to this question was “moderate right” or “right.”
13. In contrast to the other elections, the parliamentary elections of 1996 and 1999 allowed split-ticket voting, whereby each voter cast a ballot in support of a political party for the parliamentary elections and a different ballot for the elections for Prime Minister. This different system may have had an effect on the relative support obtained by the different parties. Consequently, political preferences in these elections may not be directly comparable at the party level to voter preferences in the parliamentary elections of 1988, 1992, 2003, and 2006.

Overall, the data display a tendency for Israelis to become more accommodating in their views over time—more willing to grant concessions, less “right-wing” in their own self-description,

TABLE II
POLITICAL ATTITUDES BY DEMOGRAPHIC CHARACTERISTICS

                                 Agree to      Agree to     Right-wing   Vote for     Share of
                                 territorial   Palestinian  political    right bloc   sample
                                 concessions   state        tendency     of parties   population
All                                0.473         0.497        0.440        0.421        1.00
Gender
  Male                             0.47          0.48         0.43         0.41         0.51
  Female                           0.47          0.51         0.46         0.43         0.49
Age
  15–29                            0.41          0.43         0.47         0.46         0.39
  30–45                            0.46          0.51         0.45         0.43         0.34
  46 and older                     0.54          0.55         0.39         0.36         0.27
Years of schooling
  Elementary and secondary         0.40          0.44         0.54         0.53         0.37
  Higher education                 0.58          0.58         0.37         0.36         0.63
Religiosity
  Secular                          0.55          0.57         0.37         0.33         0.74
  Observant                        0.25          0.28         0.64         0.69         0.26
Place of birth
  Immigrant                        0.49          0.48         0.42         0.42         0.39
  Native Israeli                   0.46          0.51         0.45         0.42         0.61
Ethnic background
  African–Asian ethnicity          0.37          0.41         0.49         0.48         0.57
  Non-African–Asian ethnicity      0.53          0.54         0.36         0.35         0.43
Household expenditures
  Less than average                0.44          0.44         0.48         0.47         0.32
  About average                    0.46          0.50         0.45         0.43         0.30
  More than average                0.54          0.57         0.39         0.37         0.37

Note. Entries in the table show the means over the entire sample period. Source. Authors’ calculations using survey data from INES.


FIGURE I Agree to Concessions over Time: All Right-Leaning Israelis

and more amenable to the creation of a Palestinian state. Interestingly, an increase in the willingness to grant concessions occurred even among individuals who consider themselves “right-wing.” This pattern is shown in Figures I and II. Although the composition of people who define themselves as right-wing changed over time, these figures highlight the general shift in the political landscape over time toward a more accommodating position regarding Palestinian demands. The question we address is whether this shift is causally related to the terrorist tactics employed by various Palestinian factions.

II.B. Data on Israeli Fatalities in Terror Attacks

Information on Israeli fatalities from terror attacks is taken from B’tselem, an Israeli human rights organization. B’tselem’s data, considered accurate, reliable, and comprehensive, are widely used in studies focusing on the Israeli–Palestinian conflict (Becker and Rubinstein 2008; Jaeger and Paserman 2008; Jaeger et al. 2008; Gould and Stecklov 2009; and others). The data include information on the date, location, and circumstances of each terror attack, which allows us to classify every Israeli fatality according


FIGURE II Agree to Concessions over Time: All Left-Leaning Israelis

to the subdistrict where the incident took place. Our measure of fatalities includes only civilian (noncombatant) fatalities that did not occur in the West Bank or Gaza Strip. There is substantial time-series and geographic variation in Israeli fatalities, which has been used in many of the papers cited above to identify the effect of terror on other outcomes. Figure III and Table A.1 in the Online Appendix depict the total number of fatalities across subdistricts, and show that terror factions especially targeted the most populated subdistricts of the country (Jerusalem, Tel Aviv, and Haifa). In addition, subdistricts such as Hadera and Afula, which are close to particularly radical cities under the control of the Palestinian Authority (e.g., Jenin), suffered higher than average levels of terror fatalities.

Table I presents the number of Israeli fatalities over time. The most violent period occurred between 1999 and 2003, coinciding with the outbreak of the second Palestinian uprising. Overall, there seems to be an upward trend in terror activity over time, and it coincided with the shift in the political landscape toward a higher willingness to make concessions. However, these two trends are not necessarily causally related. For this reason,


FIGURE III Distribution of Terror Fatalities across Subdistricts Total number of terror fatalities across subdistricts between July 23, 1984 (date of 1984 parliamentary elections), and March 28, 2006 (date of 2006 parliamentary elections). Adapted by permission from the Central Bureau of Statistics, Israel, map.


FIGURE IV Agree to Concessions and Terror Fatalities: Changes from 1988 to 2003

our strategy is to exploit geographic variation in the trends over time across locations, rather than looking at the aggregate trends. Figure IV presents a first look at whether the increase in fatalities per capita within a subdistrict between 1988 and 2003 (the peak year of terror) is correlated with the average change in political views within each subdistrict. The line in Figure IV is the fitted quadratic curve estimated by OLS using the sample of subdistricts depicted in the figure. The figure displays a positive relationship between the change in fatalities per capita in a subdistrict and the change in the average willingness to grant concessions within that subdistrict. However, the relationship seems to get weaker at higher levels of terror. This nonlinear pattern is also found in Figures V and VI, which show the relationship between changes in local terror fatalities per capita and changes in the other outcomes: support for a Palestinian state and support for the right-wing parties. These patterns are all consistent with the idea that terror has induced Israelis to become more accommodating to Palestinian interests, while increasing their support for the right bloc. But these figures are just a cursory look at the


FIGURE V Support for Palestinian State and Terror Fatalities: Changes from 1988 to 2003

broad patterns in the data. The next section presents our main empirical strategy.

III. EMPIRICAL STRATEGY

Our empirical strategy is designed to identify the causal effect of terrorism on the political preferences of the Jewish Israeli population. Our unit of observation is the individual, and we model his or her views as a function of his or her personal characteristics and location of residence, the survey year, and the level of recent terror activity in the individual’s subdistrict. Specifically, we estimate the following linear regression model:

(1)   view_{ijt} = α_1 · terror_{jt} + α_2 · terror_{jt}^2 + β · x_{ijt} + γ_t + μ_j + ε_{ijt},

where view_{ijt} is a dummy variable equal to one if individual i who lives in subdistrict j in year t holds an accommodating position toward Palestinian demands, and zero otherwise; terror_{jt} is the number of terror fatalities per capita in subdistrict j before the elections in t; γ_t is a fixed effect for each election year that controls for aggregate trends in political preferences; μ_j is
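In practice, a fixed-effects specification like (1) can be estimated as a pooled OLS regression with dummy variables for years and subdistricts. The following is a hypothetical sketch on synthetic data; the sample sizes, variable names, and magnitudes are illustrative assumptions, not the paper's actual survey data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic data standing in for the survey sample:
# n respondents spread over J subdistricts and T election years.
n, J, T = 5000, 14, 7
sub = rng.integers(0, J, n)                # subdistrict j of respondent i
year = rng.integers(0, T, n)               # election year t of respondent i
terror = rng.gamma(2.0, 0.5, size=(J, T))  # fatalities per capita by (j, t)
t_i = terror[sub, year]                    # terror exposure of respondent i

# Simulate an accommodating-view dummy with an illustrative concave terror effect.
latent = 0.30 * t_i - 0.05 * t_i**2 \
    + 0.1 * rng.standard_normal(J)[sub] \
    + rng.standard_normal(n)
view = (latent > 0.3).astype(float)

# Regressors of equation (1): terror, terror^2, year dummies (gamma_t),
# and subdistrict dummies (mu_j, one dropped to avoid collinearity).
X = np.column_stack([
    t_i,
    t_i**2,
    np.eye(T)[year],
    np.eye(J)[sub][:, 1:],
])
beta, *_ = np.linalg.lstsq(X, view, rcond=None)
print("estimated alpha_1, alpha_2:", beta[0], beta[1])
```

Here the year dummies jointly play the role of the intercept; in actual estimation one would also cluster standard errors at the subdistrict–year level, as the paper does.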


FIGURE VI Support for Right Bloc of Political Parties and Terror Fatalities: Changes from 1988 to 2003

a fixed effect unique to subdistrict j; and x_{ijt} is a vector of individual and subdistrict-level characteristics. These include all the characteristics listed in Table II—the individual’s gender, age (and age squared), years of schooling, schooling interacted with age, level of religious observance, immigrant status, ethnicity (Asian–African origin versus all other groups), level of expenditures (designed to control for income), number of persons in the household, and number of rooms in the individual’s house (another proxy for income). The x_{ijt} vector also includes characteristics that vary at the subdistrict–year level, presented in Table III (computed from the Israeli Labor Force Survey). These subdistrict characteristics include the unemployment rate and demographic variables such as the population share by gender, education levels, religiosity, immigrant status, ethnicity, age groups, household size, and marital status. Unobserved determinants of the individual’s views are captured by the error term, ε_{ijt}.

The goal of the proposed econometric specification is to identify α_1 and α_2, which represent the causal effect of local terror activity on an individual’s political attitudes. Identification of α_1

TABLE III
THE EFFECT OF LOCAL TERROR FATALITIES ON OBSERVABLE CHARACTERISTICS OF THE LOCAL POPULATION

[Each column reports a separate regression of a subdistrict-level characteristic on the linear and quadratic effects of terror fatalities per capita within a year of the survey (N = 102 in each regression). Dependent variables: male share; higher education; Asian–African ethnicity; ultra-orthodox Jews; immigrants; married; unemployment; average number of individuals in the household; population size (below 14 years old, above 14 years old, and total); and the population shares below 30, 30 to 45, and above 45 years old.]

Notes. Each column presents the results of a separate OLS regression where the dependent variable, obtained from the Israeli Labor Force Survey, appears at the top of each column. In addition to terror fatalities per capita within a year before the survey, all regressions include subdistrict and year fixed effects. Robust standard errors, adjusted for clustering at the subdistrict–year level, appear in brackets. The P-value for the effect of terrorism tests the hypothesis that the joint effect of all proxies for the severity of terrorism included in each regression is equal to zero. ∗ indicates statistically significant at 10% level; ∗∗ indicates statistically significant at 5% level; ∗∗∗ indicates statistically significant at 1% level.

and α_2 is based on the idea that terror attacks differentially affect the political views of individuals in relation to their proximity to the attack. This may happen because the salience of the conflict could depend on a person’s proximity to terror attacks, or because terror attacks impose a higher cost on the local population than on the rest of the country.14 For example, terror attacks pose a greater threat to the personal security of local versus nonlocal residents. Also, terror attacks typically cause local residents to alter their daily routine (modes of transportation, leisure activities, etc.) in costly ways due to perceived changes in their personal security (Gordon and Arian 2001).15 Based on the concave relationship observed in Figures IV to VI, we allow for a nonlinear effect in equation (1) by including a quadratic term for the local level of terror, but we also estimate models that assume a linear relationship (i.e., restricting α_2 = 0).

There are several reasons that terror may affect a person’s political views. Terror attacks increase the cost of denying terrorist groups what they are seeking, and therefore could cause individuals to become more accommodating toward terrorist demands. On the other hand, terror attacks could increase hatred for the other side or make a peaceful solution appear less plausible, leaving individuals less willing to adopt an accommodating position. Therefore, terror could theoretically produce either a softening or a hardening of one’s stance regarding the goals of a terrorist faction, and the effect could be nonlinear if an increase in attacks changed the way an individual viewed the conflict or dealt with terror. For example, the impact of initial attacks, which tend to be more shocking and unexpected, could be substantively different from attacks that occur after individuals have already dealt with several previous attacks.
14. In addition, there is convincing evidence that the local effect of violence is amplified by the coverage of the local media (Karol and Miguel 2007; Sheafer and Dvir-Gvirsman 2010). However, in the Israeli context, where almost all media are at the national level, this mechanism is unlikely to generate substantially different effects across geographic areas.
15. Becker and Rubinstein (2008) show that terror attacks induce a significant decline in bus tickets sold, and in expenditures in restaurants, coffee shops, and pubs. They also find an increase in expenditures on taxis, particularly in large cities, after a bus bombing. Similarly, Spilerman and Stecklov (2009) find that sales in a popular chain of Jerusalem coffee shops decline in the days following attacks, particularly in locations more open to attacks such as those in city centers. Moreover, this decline in sales is larger after more fatal attacks. Hence, the evidence consistently suggests that the effect of terror attacks varies according to the proximity and severity of the attacks.

Additionally, attacks beyond a certain threshold could alter an individual’s views about the goals


and rationality of the other side, thus changing a person’s willingness to make concessions to terror groups.

By including fixed effects for each subdistrict and survey year, we are essentially examining whether changes over time in terror activity within a subdistrict are correlated with changes over time in political views within that subdistrict, after controlling for the national trend and a rich set of personal and subdistrict-level characteristics. Our identifying assumption in (1), therefore, is that local terror attacks are not correlated with omitted variables that affect political attitudes, and that terrorist groups are not choosing targets based on the trend in local political attitudes (i.e., no reverse causality).

To understand this identifying assumption, we sketch the following conceptual framework for a terrorist group’s decision-making process over how much terror to produce in subdistrict j during year t.16 Because terror varies at the subdistrict–year level, we start by aggregating our empirical model, (1), to that level by using the mean of each variable by subdistrict and year:

(2)   view_{jt} = α_1 · terror_{jt} + α_2 · terror_{jt}^2 + β · x_{jt} + γ_t + μ_j + ε_{jt},

where view_{jt} is the share of individuals in subdistrict j at time t who hold an accommodating position toward Palestinian demands, and the coefficients α_1, α_2, and β are assumed to be fixed over time and across subdistricts. We assume that the cost of producing terror in subdistrict j increases with the number of terror attacks per capita and the local population size, N_{jt}:

(3)   cost_{jt} = N_{jt} [ λ_{0jt} + λ_{1jt} · (terror attack)_{jt} + λ_{2jt} · (terror attack)_{jt}^2 ],

where the coefficients λ_{0jt}, λ_{1jt}, and λ_{2jt} may vary across subdistricts and over time.17 The relationship between terror attacks per capita and terror fatalities per capita is given by

(4)   terror_{jt} = δ · (terror attack)_{jt} + v_{jt},

16. We thank Robert Barro for suggesting the following framework. 17. The assumption that the cost of producing terror attacks increases with a subdistrict’s population size is consistent with the observation that terror organizations assign more highly skilled terrorists to attack more populated areas. The strategic assignment of terrorists may reflect differences in the value of attacking each target (Benmelech and Berrebi 2007), or may be indicative of the terrorists’ response to an optimal counterterrorism policy that raises the failure probability of attacks to valuable targets (Powell 2007).


where the random term v_{jt} captures the idea that the relationship between terror attempts and the resulting number of fatalities is not predetermined.

Terrorists care about the total number of individuals willing to make concessions to them at time t, net of the costs of producing terror. Formally, they maximize Σ_j [ θ_t N_{jt} · view_{jt} − cost_{jt} ], where θ_t > 0 captures the idea that the overall payoff to producing terror may change over time due to political developments. The optimal level of terror fatalities in subdistrict j at time t is obtained by equating the marginal cost to the marginal benefit of terror, yielding

(5)   terror_{jt} = max{ δ · (θ_t · α_1 · δ − λ_{1jt}) / [2 · (λ_{2jt} − δ^2 · θ_t · α_2)] + η_{jt}, 0 },

where η_{jt} = λ_{2jt} · v_{jt} / (λ_{2jt} − δ^2 · θ_t · α_2) is a stochastic shock. Therefore, the optimal level of terror in subdistrict j at time t is determined not only by the parameters governing the political response in (2) that we want to estimate (α_1 and α_2), but also by the cost parameters of producing terror in subdistrict j over time (λ_{1jt} and λ_{2jt}) and the random outcome of planned attacks (η_{jt}). Assuming that η_{jt} is independently determined by the random circumstances surrounding each attack, estimation of our parameters of interest (α_1 and α_2) in (1) and (2) yields consistent coefficients if the unobserved political preferences in subdistrict j, ε_{jt}, are uncorrelated with changes over time in the costs of producing terror, λ_{1jt} and λ_{2jt}.

The costs of producing terror in a given area could be changing over time due to the building of the security wall between Israel and the Palestinian territories, or due to policy changes regarding border closures, police presence, and the deployment of security guards at restaurants, schools, and buses. To the extent that these changes occur at the aggregate level, they will be absorbed by the aggregate time effects in (2).
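As a consistency check, the first-order condition behind (5) can be derived from (2)–(4); the following sketch abbreviates (terror attack)_{jt} as a_{jt}:

```latex
% Sketch: derivation of equation (5) from (2)-(4); a_{jt} := (terror attack)_{jt}.
\begin{align*}
\max_{a_{jt}}\;& \theta_t N_{jt}\bigl[\alpha_1(\delta a_{jt}+v_{jt})
   +\alpha_2(\delta a_{jt}+v_{jt})^2\bigr]
   - N_{jt}\bigl[\lambda_{0jt}+\lambda_{1jt}a_{jt}+\lambda_{2jt}a_{jt}^2\bigr]\\
\text{FOC:}\;& \theta_t\bigl[\alpha_1\delta+2\alpha_2\delta(\delta a_{jt}+v_{jt})\bigr]
   = \lambda_{1jt}+2\lambda_{2jt}a_{jt}\\
\Rightarrow\;& a_{jt}^{*}
   = \frac{\theta_t\alpha_1\delta-\lambda_{1jt}+2\theta_t\alpha_2\delta v_{jt}}
          {2\,(\lambda_{2jt}-\theta_t\alpha_2\delta^{2})}\\
\Rightarrow\;& \text{terror}_{jt}=\delta a_{jt}^{*}+v_{jt}
   = \frac{\delta\,(\theta_t\alpha_1\delta-\lambda_{1jt})}
          {2\,(\lambda_{2jt}-\delta^{2}\theta_t\alpha_2)}
   + \underbrace{\frac{\lambda_{2jt}\,v_{jt}}
          {\lambda_{2jt}-\delta^{2}\theta_t\alpha_2}}_{\eta_{jt}},
\end{align*}
```

with the max{·, 0} in (5) simply imposing that fatalities cannot be negative.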
However, some of these preventive efforts may be differentially changing over time across subdistricts according to the local level of terror. For example, Israeli authorities would likely give higher priority to strengthening security in areas that historically have been targeted more frequently (i.e., Jerusalem, Tel Aviv, Haifa, Netanya, etc.). It is not clear why such changes would be systematically correlated with changes in unobserved political preferences within an area, which would violate our identifying assumption, but one possibility is that individuals with certain views may be differentially


migrating away from heavily targeted areas to safer areas, or there might be a correlation purely by coincidence. To address this issue, we perform a set of balancing tests to examine whether there is a systematic relationship between the observable characteristics of the local population and the local level of terror. More specifically, we test whether there is a linear or nonlinear relationship between terror_{jt} and the variables contained in x_{jt}. If there is no relationship between terror and observable factors that affect political preferences, then it seems reasonable to assume that terror_{jt} is not correlated with unobservable political preferences, ε_{jt}, which is the condition needed to obtain consistent estimates of α_1 and α_2.

Table III presents this analysis by regressing various characteristics of the local population on the local level of recent terror activity, while controlling for subdistrict and year fixed effects. The inclusion of fixed effects by year and subdistrict allows us to test whether changes in the characteristics of the local population over time vary systematically with the local level of recent terror activity. The subdistrict-level characteristics used in Table III capture the main demographic and economic characteristics of the local population available in the Israeli Labor Force Survey of each election year. In addition, Table III examines whether local terror is related to the size of the local population, which sheds light on whether terror induces overall out-migration from areas with high levels of attacks. The results in Table III show that terror is not significantly related to population size, which suggests that terror does not induce Israelis to migrate to calmer areas.
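The balancing tests just described can be sketched as follows on a synthetic subdistrict-by-year panel; the dimensions and the unemployment series are illustrative assumptions, not the actual LFS data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical subdistrict-by-year panel standing in for the LFS aggregates
# (J * T rows; cf. N = 102 in Table III). All values are synthetic.
J, T = 17, 6
sub = np.repeat(np.arange(J), T)
year = np.tile(np.arange(T), J)
terror = rng.gamma(2.0, 0.5, J * T)               # fatalities per capita
unemp = 0.08 + 0.02 * rng.standard_normal(J * T)  # e.g., local unemployment rate

def balancing_test(char, terror, sub, year):
    """Regress a local characteristic on terror and terror^2 with
    subdistrict and year fixed effects; return the two terror coefficients."""
    X = np.column_stack([
        terror,
        terror**2,
        np.eye(sub.max() + 1)[sub][:, 1:],   # subdistrict dummies
        np.eye(year.max() + 1)[year],        # year dummies
    ])
    coefs, *_ = np.linalg.lstsq(X, char, rcond=None)
    return coefs[0], coefs[1]

lin, quad = balancing_test(unemp, terror, sub, year)
# Both coefficients should be small here: unemp was drawn independently of terror.
print(f"linear: {lin:.4f}, quadratic: {quad:.4f}")
```

Repeating this for each characteristic, and testing the joint significance of the two terror terms, reproduces the logic of Table III.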
In addition, terrorism is not correlated with changes in the demographic composition or unemployment rate of the subdistrict.18 In particular, recent levels of terrorism are not correlated with the percentage of the subdistrict’s population that is ultra-orthodox or from an Asian–African background—two groups that are typically more right-wing in their views. The fact that terror is not correlated with observable characteristics that are strong predictors of political views supports our assumption that terror is not correlated with unobservable factors

18. The unemployment rate is defined as the number of Jewish males over the age of 24 who are in the labor force but have not worked for the past twelve months. Similar insignificant results are obtained if we include those out of the labor force (but not in school) as being unemployed.


that affect an individual’s political preferences.19 In addition, the evidence in Table III provides support for our assumption that there is no reverse causality in equation (1)—terror groups do not target areas according to changes in local demographic characteristics that affect political preferences, and therefore, it seems unlikely that terror groups are targeting areas based on the local trends in political preferences.20

To provide further support for the assumption of no reverse causality, Table IV examines whether the political views of individuals within a locality are correlated with local levels of terror in the next election cycle:

(6)   terror_{jt+1} = π_1 · view_{jt} + π_2 · view_{jt}^2 + π_3 · x_{jt} + γ_t + μ_j + τ_{jt},

where terror_{jt+1} measures the number of fatalities per capita in subdistrict j between parliamentary elections in years t and t + 1; view_{jt} is the share of residents with an accommodating view in subdistrict j in the survey taken before parliamentary elections in year t; γ_t is a fixed effect for each election year; μ_j is a fixed effect unique to subdistrict j; and x_{jt} is a vector of demographic characteristics in subdistrict j before the elections in year t.

The estimation of equation (6) is done in Table IV, which tests for a linear or nonlinear effect of current views on attacks in the next period, as well as the robustness of the relationship to the inclusion of additional controls. Also, the last column regresses terror_{jt+1} on the first difference in political views (view_{jt} − view_{jt−1}) and its square, in order to test whether terror groups target areas that recently underwent a specific type of change in political attitudes. Table IV performs this analysis using all five ways of measuring political views and preferences, and shows that there is no significant relationship between changes in local political views and terror attacks in the next period. Given that terror groups are not using recent changes

19. The analysis in Berrebi and Klor (2008), based on 240 municipalities and local councils, is consistent with this conclusion. They show that terrorism did not affect net migration across localities or political participation of the electorate during the period at issue.
20. Reverse causality also appears unlikely on theoretical and practical grounds. Theoretically, it is not clear why terror groups would target particular areas based on contemporaneous changes in their political attitudes. In practice, it would be very hard to do so: information on the trends in political attitudes is not widespread, and only becomes available after an election, and the election survey, takes place.
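The reverse-causality check in (6) can be sketched in the same way as the balancing tests; again, the panel below is a synthetic placeholder, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical sketch of equation (6): regress next-cycle terror on the
# current share of accommodating views (linear and squared), with
# subdistrict and year fixed effects. All values are synthetic.
J, T = 17, 6
sub = np.repeat(np.arange(J), T)
year = np.tile(np.arange(T), J)
view = rng.uniform(0.2, 0.8, J * T)       # share with accommodating views at t
terror_next = rng.gamma(2.0, 0.5, J * T)  # fatalities per capita between t and t+1

X = np.column_stack([
    view,
    view**2,
    np.eye(J)[sub][:, 1:],   # mu_j
    np.eye(T)[year],         # gamma_t
])
pi, *_ = np.linalg.lstsq(X, terror_next, rcond=None)
print("pi_1, pi_2:", pi[0], pi[1])
```

Under no reverse causality, π_1 and π_2 should be jointly indistinguishable from zero, which is what the joint P-values reported in Table IV test.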

TABLE IV
THE EFFECT OF LOCAL POLITICAL ATTITUDES ON FUTURE LOCAL LEVELS OF TERROR FATALITIES PER CAPITA

[Separate panels use each measure of the subdistrict’s political attitudes: support for granting territorial concessions; support for creation of a Palestinian state; right-wing political tendency; a factor analysis using support for a Palestinian state and right-wing tendency; and vote for a party in the right bloc. Columns: (1) linear specification; (2) nonlinear specification; (3) adding survey data; (4) adding subdistrict characteristics; (5) first differences. N = 87 in columns (1)–(4) and 67 in column (5).]

Notes. Each column in each panel presents the results of a separate OLS regression where the dependent variable is the number of terror fatalities per capita in the next election cycle. In addition to the respective proxy for the preferences of the subdistrict’s population listed at the top of each panel, column (1) includes year and subdistrict fixed effects. Column (2) adds to column (1) a quadratic effect of the political preferences. Column (3) adds to column (2) the subdistrict’s average for age, schooling, schooling interacted with age, expenditures, number of persons in the household, number of rooms in the household’s apartment, religiosity, and percentage of males, immigrants, individuals coming from the former Soviet bloc of countries, and individuals with Sephardic ethnicity. Column (4) adds to the specification in column (3) subdistrict-specific time trends and the subdistricts’ characteristics obtained from the LFS (specified in the note to Table VI). Column (5) presents a regression where all the explanatory variables used in column (4) are first-differenced. Robust standard errors, adjusted for clustering at the subdistrict level, appear in brackets. The P-value for the effect of political attitudes tests the hypothesis that the joint effect of all variables measuring political attitudes included in each regression is equal to zero. ∗ indicates statistically significant at 10% level; ∗∗ indicates statistically significant at 5% level; ∗∗∗ indicates statistically significant at 1% level.

in political views to choose their targets, it seems reasonable to assume that they are not using contemporaneous changes in political views to choose their targets either.

Overall, the analysis in Tables III and IV shows that terror is not systematically related to observable factors that affect political views, and that political attitudes in the current period are not related to terror attacks in the next period. These findings support the identifying assumption that local terror attacks are not correlated with omitted factors that affect the local trend in political views, and that terrorist groups are not choosing their targets based on contemporaneous changes in local political preferences (i.e., no reverse causality). Furthermore, we will make our identifying assumption less restrictive by including subdistrict-specific linear time trends in (1). With these trends included, we need not assume that terror is uncorrelated with omitted factors that affect the linear trend in local political preferences; rather, our assumption will be that terror is not correlated with factors that produce deviations from the local linear trend in unobserved political views. In this manner, we will examine whether the results are robust to alternative sources of identification.

IV. MAIN RESULTS FOR THE EFFECT OF TERROR ON POLITICAL ATTITUDES

We now analyze the effect of terror on our main outcome variable: a person’s willingness to make territorial concessions to the Palestinians. For most of the analysis, we measure the local level of terror by the number of fatalities per capita in the twelve months prior to the elections in year t for each subdistrict j. However, we also present evidence on the robustness of the results to alternative ways of defining the local level of terror activity.
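The subdistrict-specific linear time trends used to relax the identifying assumption are simply subdistrict dummies interacted with a linear time index; a minimal sketch with illustrative panel dimensions:

```python
import numpy as np

# Minimal sketch: subdistrict-specific linear time trends as regressors,
# i.e., subdistrict dummies interacted with a linear election-year index.
J, T = 5, 4                       # illustrative panel dimensions
sub = np.repeat(np.arange(J), T)  # subdistrict of each observation
year = np.tile(np.arange(T), J)   # election-year index of each observation

dummies = np.eye(J)[sub]          # one indicator column per subdistrict
trends = dummies * year[:, None]  # column j equals t for subdistrict j, else 0

# Appending `trends` to the regressors of equation (1) absorbs any linear
# drift in political views that is specific to a subdistrict.
print(trends.shape)  # (20, 5)
```

With these columns in the regression, α_1 and α_2 are identified only from deviations of terror and views around each subdistrict's own linear trend.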
Table V presents the effect of terrorism on a person’s willingness to make territorial concessions, estimated using a linear specification. The sensitivity of the coefficient is examined by adding more control variables to the specification in successive columns. For example, the first column does not include any other controls, the next three columns include fixed effects for each year and subdistrict by themselves and together, the fifth column adds a rich set of personal characteristics, the sixth column adds additional controls at the subdistrict level (those used in Table III), and the final column adds subdistrict-specific linear time trends.

TABLE V
THE EFFECT OF TERROR FATALITIES ON SUPPORT FOR GRANTING TERRITORIAL CONCESSIONS—LINEAR SPECIFICATION

Variable                                    (1)      (2)      (3)      (4)      (5)      (6)      (7)
Terror fatalities per capita within
  a year before the survey                0.5144   0.2617   0.5666   0.3392   0.4112   0.2874   0.3129
                                          [0.901]  [0.872]  [0.500]  [0.498]  [0.452]  [0.452]  [0.445]
Subdistricts fixed effects                  No       No       Yes      Yes      Yes      Yes      Yes
Years fixed effects                         No       Yes      No       Yes      Yes      Yes      Yes
Subdistrict time-varying characteristics    No       No       No       No       No       Yes      Yes
Subdistrict-specific linear time trends     No       No       No       No       No       No       Yes
N                                          6,494    6,494    6,494    6,494    6,098    6,098    6,098
R²                                         .001     .016     .029     .043     .1421    .1467    .1499

Personal characteristics (included in columns (5)–(7); the coefficients are nearly identical across the three columns, one set shown): Age 0.0155∗∗∗ [0.0028]; Age squared −0.0001∗∗∗ [0.0000]; Male −0.0125 [0.0126]; Years of schooling 0.0347∗∗∗ [0.0078]; Years of schooling × age −0.0004∗∗∗ [0.0001]; Immigrant −0.0337∗ [0.0184]; African–Asian ethnicity −0.0775∗∗∗ [0.0184]; From former Soviet bloc −0.1313∗∗∗ [0.0276]; Religiously observant −0.2252∗∗∗ [0.0187]; Expenditures (base category: more than average): about average −0.0323∗∗ [0.0153], less than average −0.0741∗∗∗ [0.0173].

Notes. Estimated using OLS. The dependent variable is an indicator for agreeing to territorial concessions to Palestinians. Columns (5) through (7) also include dummy variables for each number of individuals in the household and number of rooms in the household’s residence. The subdistrict time-varying characteristics used in columns (6) and (7) were calculated from the Israel Labor Survey, and include the unemployment rate, the mean number of children below the age of 14 in a household, mean number of individuals above the age of 14 in a household, mean number of individuals per household, percent married, percent male, percentage of individuals between the ages of 30 and 45, percentage of individuals above 45 years old, percentage of individuals with higher education, percent ultra-orthodox Jews, percent immigrants, and percent African–Asian ethnicity. Robust standard errors, adjusted for clustering at the subdistrict–year level, appear in brackets. ∗ indicates statistically significant at 10% level; ∗∗ indicates statistically significant at 5% level; ∗∗∗ indicates statistically significant at 1% level.


The results in Table V suggest that there is no linear effect of terror activity on a person’s willingness to grant concessions to the Palestinians. This result is consistently found as additional control variables are added for personal and subdistrict-level characteristics. In contrast, many of the coefficients on the other controls are highly significant: the willingness to grant concessions increases with income, education, and age (up to a point), and is also higher for natives versus immigrants, secular versus religious, and individuals who did not immigrate from the former Soviet Union and do not come from an Asian–African ethnic background. However, due to the concave pattern exhibited in Figure IV, we now include a quadratic term for the local level of terror in order to see whether the treatment effect is nonlinear. As shown in Table VI, the results are very different now, with the linear term and the quadratic term highly significant across specifications.21 The coefficients suggest that terrorism increases an individual’s willingness to grant concessions up to a point, and then further terror attacks reduce the willingness to grant concessions. This pattern is found in the simple specification that includes no other controls, and also in subsequent specifications, which gradually add a rich array of personal and subdistrict characteristics, fixed effects for each year, fixed effects for each subdistrict, and subdistrict-specific linear time trends. The robustness of the results to the inclusion or exclusion of so many other factors suggests that local terror activity is uncorrelated with a large variety of personal and subdistrict-level characteristics that affect political preferences. This pattern is consistent with our balancing tests in Tables III and IV, and supports our assumption that terror is an exogenous event. 
Furthermore, the results appear to be robust across a variety of identifying assumptions, which change with each specification in Table VI. The magnitudes of the coefficients in column (5) of Table VI imply that the total effect of terror fatalities, relative to the case of no terror activity at all, is positive until 0.105 local casualties per capita are reached, whereas terror activity beyond that threshold makes Israelis adopt a more hard-line stance toward the Palestinians. Similarly, the marginal effect of terror on

21. The standard errors account for clustering at the subdistrict–year level. Clustering only at the subdistrict level yields very similar conclusions regarding the indicated significance levels. These results are available from the authors upon request.

TABLE VI
THE EFFECT OF TERROR FATALITIES ON SUPPORT FOR GRANTING TERRITORIAL CONCESSIONS—NONLINEAR SPECIFICATION

Variable                                    (1)        (2)        (3)        (4)        (5)        (6)        (7)
Terror fatalities per capita within a year before the survey
  Linear effect                           4.285∗    5.538∗∗∗    2.029     3.526∗∗∗   3.648∗∗∗   3.943∗∗∗   5.585∗∗∗
                                          [2.26]     [1.89]     [1.75]     [1.27]     [1.18]     [0.93]     [1.19]
  Quadratic effect                      −43.459∗∗  −59.346∗∗∗  −16.590   −34.180∗∗∗ −34.683∗∗∗ −38.044∗∗∗ −56.648∗∗∗
                                         [19.74]    [15.48]    [16.63]    [12.23]    [10.80]     [9.07]    [12.46]
Subdistricts fixed effects                  No         No         Yes        Yes        Yes        Yes        Yes
Years fixed effects                         No         Yes        No         Yes        Yes        Yes        Yes
Subdistrict time-varying characteristics    No         No         No         No         No         Yes        Yes
Subdistrict-specific linear time trends     No         No         No         No         No         No         Yes
N                                          6,494      6,494      6,494      6,494      6,098      6,098      6,098
R²                                         .004       .021       .029       .0448      .1436      .1484      .1518
P-value for effect of terrorism            .0754      .0001      .4555      .0219      .0076      .0002      .0001

Personal characteristics (included in columns (5)–(7); the coefficients are nearly identical across the three columns, one set shown): Age 0.0155∗∗∗ [0.0028]; Age squared −0.0001∗∗∗ [0.0000]; Male −0.0140 [0.0128]; Years of schooling 0.0351∗∗∗ [0.0078]; Years of schooling × age −0.0004∗∗∗ [0.0001]; Immigrant −0.0308∗ [0.0183]; African–Asian ethnicity −0.0768∗∗∗ [0.0184]; From former Soviet bloc −0.1358∗∗∗ [0.0280]; Religiously observant −0.2230∗∗∗ [0.0191]; Expenditures (base category: more than average): about average −0.0321∗∗ [0.0153], less than average −0.0755∗∗∗ [0.0170].

Notes. Estimated using OLS. The dependent variable is an indicator for agreeing to territorial concessions to Palestinians. Columns (5) through (7) also include dummy variables for each number of individuals in the household and number of rooms in the household’s residence. The subdistrict time-varying characteristics used in columns (6) and (7) were calculated from the Israel Labor Survey, and include the unemployment rate, the mean number of children below the age of 14 in a household, mean number of individuals above the age of 14 in a household, mean number of individuals per household, percent married, percent male, percentage of individuals between the ages of 30 and 45, percentage of individuals above 45 years old, percentage of individuals with higher education, percent ultra-orthodox Jews, percent immigrants, and percent Asian–African ethnicity. Robust standard errors, adjusted for clustering at the subdistrict–year level, appear in brackets. The P-value on the effect of terrorism tests the hypothesis that the joint effect of all proxies for the severity of terrorism included in each regression is equal to zero. ∗ indicates statistically significant at 10% level; ∗∗ indicates statistically significant at 5% level; ∗∗∗ indicates statistically significant at 1% level.


granting concessions is positive until 0.053 casualties per capita are reached, and then additional casualties reduce the willingness to concede territory. That is, a moderate amount of terror is effective, but then it can backfire on the terrorist group. For 102 cases where data are available by subdistrict and year, there are only seven cases that reach levels high enough so that the marginal effect is negative (Jerusalem in 1996 and 2003, and Afula, Hadera, Sharon, Tel Aviv, and Zefat in 2003), and only one case where the estimated total effect is negative (Jerusalem had 0.11 fatalities per capita in 2003), thereby hardening the stance of Jerusalem residents toward the Palestinians in 2003. Given that most of the observations lie within the region where the marginal effect is positive, the estimated nonlinear effect is mostly indicative of a declining effect of terrorism on views, rather than a reversal of the sign of the effect. This overall pattern suggests that individuals respond differently to initial attacks versus later attacks, which may be caused by the declining shock value of attacks, or may be due to changes in the probability that an individual places on the rationality, and perhaps the ultimate goals, of the terrorist group.22 To describe the magnitude of the coefficients, we computed the predicted value for each person’s willingness to grant concessions in 2003 using the actual local level of fatalities in the previous twelve months, and compared that to the scenario whereby there were no attacks in the year prior to those elections. The difference in these predicted effects represents the change in a person’s views due to the attacks leading up to the 2003 elections. The median value of this predicted change in views (using column (5) in Table VI) for the whole sample suggests that the median individual became 4.2 percentage points more likely to support granting territorial concessions. 
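The two thresholds quoted above follow directly from the quadratic form a1·x + a2·x², using the column (5) coefficients of Table VI. A quick check of the arithmetic:

```python
# Quadratic effect of terror: total = a1*x + a2*x**2, marginal = a1 + 2*a2*x.
# Coefficients are taken from column (5) of Table VI.
a1, a2 = 3.648, -34.683

marginal_zero = -a1 / (2 * a2)   # marginal effect changes sign here
total_zero = -a1 / a2            # total effect (vs. no terror) turns negative here

def predicted_change(x):
    """Predicted change in support for concessions at x fatalities per capita."""
    return a1 * x + a2 * x ** 2

print(round(marginal_zero, 3), round(total_zero, 3))  # 0.053 0.105
```

At Jerusalem's 2003 level of 0.11 fatalities per capita, `predicted_change` is negative, matching the one case in the text where the total effect reverses.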
This is more than half of the effect of being from an Ashkenazi background versus an Asian–African background, for which the estimated effect is 7.7 percentage points. In addition, the effect is larger than the estimated effect of being native versus an immigrant (3.1 percentage points), and almost

22. Several theoretical studies show that a nonlinear pattern may emerge when Israelis (or the Israeli government) do not have complete information on the level of extremism of terror factions (Kydd and Walter 2002; Bueno de Mesquita 2005a; Berrebi and Klor 2006). According to these studies, at low levels of attacks, Israelis increase their support for concessions in order to placate terror factions. For higher levels of attacks, however, Israelis start to believe with higher probability that they are facing an extremist faction that cannot be placated with partial concessions. Therefore, in this range of attacks, Israelis start adopting a nonconciliatory position toward terror factions.


one-fifth of the effect of being religiously observant (22.3 percentage points). Therefore, these findings are significant not only in the statistical sense, but also in terms of their magnitudes.

Table VII explores whether the results are heterogeneous across different subsamples of the population according to gender, age, level of expenditures, education, religious observance, immigrant status, and ethnicity. Estimates are presented from our two main specifications, which appear in columns (5) and (7) of Table VI. Both specifications include a rich set of personal characteristics and fixed effects for each year and subdistrict, but the latter specification adds subdistrict-specific linear time trends and additional controls that vary by subdistrict and year. The estimates in Table VII are generally significant for all groups, with larger effects for Israelis who are younger, religious, female, non-native, less educated, poorer (lower expenditures), and from an Asian–African background. Table II showed that these groups, except for the gender and immigrant categories, tend to be more right-wing than their counterparts. Therefore, Table VII suggests that the effect of terror is larger for the particular groups that typically support right-wing political parties.23 Figure VII demonstrates this more concretely by graphing the predicted nonlinear relationship in the upper panels of Table VII for several subgroups. The figure shows that the predicted effect is larger over the whole range of observed attacks for the most right-wing groups (religious, Asian–African origin, less educated, and younger Israelis). These findings illustrate how the political map is changing over time as the right wing shifts to the left in response to terror.

V. ROBUSTNESS TESTS AND ALTERNATIVE SPECIFICATIONS

V.A. Alternative Definitions of Local Terror and Willingness to Grant Concessions

Table VIII presents results using three alternative definitions for the local level of terror in each subdistrict around the time of the election. The first column in the upper panel uses

23. This pattern is more pronounced for the specification without subdistrict-specific linear time trends. However, the same conclusions are reached in Table A.4 in the Online Appendix, which uses the number of terror fatalities as the treatment variable (instead of fatalities per capita), and also in Table X, which examines the effect of terror on alternative outcomes.

TABLE VII
THE EFFECT OF TERROR FATALITIES ON SUPPORT FOR GRANTING TERRITORIAL CONCESSIONS, BY SUBGROUPS

[Layout of the original table: for each subgroup, two panels report the linear and quadratic effects of terror fatalities per capita, the P-value for the joint effect of terrorism, and N — the first panel using only the survey data (the specification of column (5) in Table VI), the second adding subdistrict time trends and characteristics (column (7) in Table VI). The subgroups partition the sample by gender (females, males), age (above 45, 30 to 45, below 30), expenditures (below average, average, above average), education (below academic, academic), religiosity (religious, secular), country of birth (immigrant, native Israeli), and ethnicity (African–Asian, other).]

Note. Each column in each panel presents the results of a separate OLS regression where the dependent variable is an indicator for agreeing to territorial concessions to Palestinians. In addition to terror fatalities per capita within a year before the survey, all regressions include the same covariates as specifications (5) and (7) in Table VI. Robust standard errors, adjusted for clustering at the subdistrict–year level, appear in brackets. The P-value for the effect of terrorism tests the hypothesis that the joint effect of all proxies for the severity of terrorism included in each regression is equal to zero. ∗ indicates statistically significant at 10% level; ∗∗ indicates statistically significant at 5% level; ∗∗∗ indicates statistically significant at 1% level.


FIGURE VII
Effect of Terror Fatalities on Willingness to Grant Concessions by Subgroups

Each graph depicts the nonlinear relationship between terror fatalities per capita and a person’s willingness to grant territorial concessions, using the estimated coefficients in the upper panel of Table VII.

the number of attacks per capita in the last twelve months, rather than the number of fatalities. The number of attacks is defined as the number of incidents with at least one fatality, which essentially gives equal weight to all fatal attacks regardless of the number killed. The results using this measure are very similar, in the sense of displaying a highly significant concave pattern. The second and third columns in the upper panel of Table VIII use terror “since the last elections” rather than “in the last twelve months” as the treatment variable. The estimates reveal the same nonlinear pattern using “total fatalities” or “total attacks” as the measure of local terror, and are significant in three out of four specifications. (The coefficients are not significant for the specification that uses “total fatalities since the previous election” without subdistrict-specific time trends.) However, the estimates are smaller in magnitude, which suggests that recent terror activity has a bigger impact than attacks occurring in the more distant

TABLE VIII
THE EFFECT OF TERROR FATALITIES ON SUPPORT FOR GRANTING TERRITORIAL CONCESSIONS, ROBUSTNESS TESTS

Alternative proxies for the level of terrorism:

                                       Attacks within a year   Attacks since        Fatalities since
                                       before the survey       previous elections   previous elections
Effect of the respective proxy, using only survey data
  Linear effect                           15.2331∗∗∗              6.059∗               0.5682
                                          [5.45]                  [3.10]               [0.68]
  Quadratic effect                      −619.6∗∗               −171.5∗∗∗              −3.282
                                          [289]                   [72]                 [4.51]
  P-value for effect of terrorism          .0177                  .0489                .7077
Effect including subdistrict time trends and characteristics
  Linear effect                           25.486∗∗∗              10.387∗∗∗             2.2995∗∗∗
                                          [5.37]                  [3.58]               [0.60]
  Quadratic effect                     −1,236.2∗∗∗             −285.4∗∗∗             −14.089∗∗∗
                                          [312]                  [107]                 [4.55]
  P-value for effect of terrorism          .0000                  .0180                .0010
  N                                        6,098                  6,098                6,098

Restricting the sample to different time periods:

                                       From 1996 to 2006       From 2003 to 2006 (a)
Effect of terror fatalities per capita, using only survey data
  Linear effect                            1.5921∗                2.7388∗∗∗
                                           [0.95]                 [0.82]
  Quadratic effect                       −18.375∗∗              −30.984∗∗∗
                                           [9.05]                 [7.02]
  P-value for effect of terrorism           .1592                  .0000
Effect including subdistrict time trends and characteristics
  Linear effect                            2.1419                 4.5820∗∗∗
                                           [1.31]                 [0.65]
  Quadratic effect                       −21.814                −46.777∗∗∗
                                          [14.40]                 [6.03]
  P-value for effect of terrorism           .2426                  .0000
  N                                        4,229                  2,292


TABLE VIII (CONTINUED)

                                 Alternative       Excluding     Excluding        Marginal effects   Including a
                                 definition of     Jerusalem     Jerusalem and    using a probit     higher-order
                                 agree to                        Tel Aviv         model              polynomial
                                 concessions
Effect of terror fatalities per capita, using only survey data
  Linear effect                    2.8397∗∗         4.4180∗∗∗      6.0163∗∗∗        4.2215∗∗∗          3.5676∗
                                   [1.28]           [1.51]         [1.70]           [1.36]             [2.01]
  Quadratic effect               −27.441∗∗∗       −39.832∗∗∗     −63.176∗∗∗       −39.882∗∗∗        −32.302
                                  [11.45]          [15.17]        [15.80]          [12.61]            [43.83]
  Cubic effect                                                                                      −16.372
                                                                                                    [258.72]
  P-value for effect of terrorism   .0613            .0161          .0005            .0064             .0044
Effect including subdistrict time trends and characteristics
  Linear effect                    3.1428∗∗∗        6.7684∗∗∗      7.9114∗∗∗        6.3722∗∗∗          8.5875∗∗∗
                                   [1.29]           [1.26]         [1.57]           [1.33]             [1.90]
  Quadratic effect               −37.916∗∗∗       −67.448∗∗∗     −87.880∗∗∗       −64.165∗∗∗        −145.069∗∗∗
                                  [12.90]          [14.62]        [19.84]          [13.78]            [50.73]
  Cubic effect                                                                                      621.608
                                                                                                    [362.98]
  P-value for effect of terrorism   .0107            .0000          .0000            .0000             .0000
  N                                 6,098            5,398          4,176            6,098             6,098

Notes. Each column in each panel presents the results of a separate OLS regression where the dependent variable is an indicator for agreeing to territorial concessions to Palestinians. In addition to the respective proxy for the severity of terrorism, all regressions include the same covariates as specifications (5) and (7) in Table VI. Robust standard errors, adjusted for clustering at the subdistrict–year level, appear in brackets. The marginal effects of the probit model are calculated at the means. The P-value for the effect of terrorism tests the hypothesis that the joint effect of all proxies for the severity of terrorism included in each regression is equal to zero. (a) The regressions using only observations from years 2003 and 2006 do not include subdistrict-specific time trends because there are only two periods for each subdistrict. ∗ indicates statistically significant at 10% level; ∗∗ indicates statistically significant at 5% level; ∗∗∗ indicates statistically significant at 1% level.


past.24 However, the effect of terrorism on an individual’s political preferences does not completely disappear even when measured over a longer time period. Table A.2 in the Online Appendix examines whether terror attacks by different Palestinian factions have similar effects on the views of Israelis. Because the goals of the different groups are not always clear and may not be consistent with each other, we distinguish between attacks perpetrated by Islamic-based groups (Hamas and Islamic Jihad) and the rest. Most of the attacks are perpetrated by Islamic groups, and Table A.2 shows that similar results are obtained when we use terror fatalities committed only by Islamic groups. Although Table A.2 shows that terror fatalities caused by Islamic groups have a slightly smaller effect on Israeli political views than overall fatalities, this difference is not significant. These findings suggest that although Palestinian groups may not have identical goals, Israelis do not seem to distinguish between the attacks of different groups. This result, however, could be due to the lack of awareness of who is perpetrating each attack. In Table A.3 in the Online Appendix, we also show that adding “attacks in neighboring subdistricts” to the specification has no influence on the estimated effect of local terror, although the effect of terror in neighboring areas is often significant but much smaller in magnitude. This finding demonstrates that our main results are not coming from the correlation of local attacks with factors associated with attacks at the broader regional level. Finally, it is worth noting that using the number of fatalities as the treatment variable, rather than fatalities per capita, yields very similar results as well—including much stronger effects for traditionally right-wing demographic groups (see Table A.4 in the Online Appendix). 
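The alternative treatment measures compared above differ only in the event weighting and the time window. A sketch of the construction, on hypothetical incident records (names, dates, and counts below are illustrative, not the paper's data):

```python
# Sketch (hypothetical incident records): the alternative local terror
# measures used in Table VIII. "Attacks" counts incidents with at least one
# fatality, so every fatal attack gets equal weight regardless of its toll.
incidents = [
    {"subdistrict": "A", "date": 2002.4, "fatalities": 3},
    {"subdistrict": "A", "date": 2002.9, "fatalities": 1},
    {"subdistrict": "A", "date": 2000.2, "fatalities": 2},  # outside 1-year window
]
population = {"A": 100_000}

def local_terror(j, election, window=1.0):
    """Fatalities and fatal attacks per capita within `window` years before `election`."""
    recent = [i for i in incidents
              if i["subdistrict"] == j and election - window <= i["date"] < election]
    pop = population[j]
    return {"fatalities_pc": sum(i["fatalities"] for i in recent) / pop,
            "attacks_pc": sum(1 for i in recent if i["fatalities"] >= 1) / pop}

m = local_terror("A", 2003.0)                        # last twelve months
since_last = local_terror("A", 2003.0, window=4.0)   # "since the previous elections"
```

Widening the window mechanically dilutes recent activity, which is consistent with the smaller coefficients reported for the "since the last elections" measures.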
The first column in the bottom panel of Table VIII uses an alternative coding scheme for a person’s willingness to grant concessions. As noted in Section II, we coded a response of “four” (the middle value of the range of answers) in survey years 1996 and 1999 as being “unwilling” to make concessions. Because this value lies in the center of the range of answers (from one to seven), we

24. A possible explanation for the weaker results using terrorism “since the last election” versus the “last twelve months” is that the differential costs of a terror attack on local versus nonlocal residents dissipate over time, and therefore the effects of attacks from a few years ago become subsumed in the aggregate year effects.


now test whether the results are sensitive to coding this value as being “willing” to make concessions. Table VIII shows that the coefficients using this alternative coding scheme are still highly significant.

V.B. Changes in the Coding of the Question about Concessions

Because the wording of the question regarding a person’s willingness to make concessions changed over time, we now explore whether our main results are a product of those changes rather than a causal effect. Up to now, we dealt with this issue by including fixed effects for each year in the regressions. Table VI showed that including or excluding fixed effects for each year does not affect the results, which suggests that changes in the structure of the questionnaires are unlikely to be creating spurious estimates. Table VIII presents more evidence for this conclusion by restricting the sample to periods when there was very little change in the question (1996–2006) and to periods when there was no change at all (2003–2006).25 As shown in the upper panel, the coefficient estimates are still significant for the period 1996–2006, although not for the specification that includes subdistrict-specific time trends. For the years 2003–2006, the results are highly significant even for this short time period, which incidentally did not include the largest wave of attacks (1999–2003).26 These findings show that changes in the wording of the question over time are not responsible for our main results.

V.C. Specifying the Nonlinearity

We now examine the specification of the nonlinear effect of terror on political attitudes. In addition, we check whether the nonlinear effect is coming entirely from the observations from Jerusalem, as it appears in Figure IV. Table VIII shows that the results are still highly significant and nonlinear after Jerusalem residents are omitted from the sample, and after Tel Aviv and Jerusalem residents are omitted from the sample.
This finding suggests that the nonlinear effect of terror on political attitudes is not entirely due to the perhaps unique characteristics and

25. From 1999 to 2003, there was a small change in the structure of the question, as the range of answers went from one–seven to one–four.
26. Similar results are obtained if we start the sample in 1992 or 1999. Starting the sample in 1992 yields coefficients (standard errors) for the specification in column (5) of Table VI equal to 2.66 (1.09) and −27.16 (10.18) for the linear and quadratic terms, respectively. Starting the sample in 1999 produces 2.11 (1.17) and −24.00 (11.09), respectively.


experiences of Jerusalem or Tel Aviv. Table VIII also shows that using a probit model instead of a linear probability model yields very similar results.27 In addition, the last column in the bottom panel of Table VIII adds a cubic term for the level of local terror per capita, in order to see whether the nonlinearity should be modeled with a higher-order polynomial. The cubic term is not significant, which suggests that a quadratic specification is sufficient.

V.D. Alternative Panel Data Model Specifications

This section tests the robustness of the results to alternative panel data models. Because our data set consists of repeated cross sections of individuals, we aggregate the data to the subdistrict–year level to exploit these other methods. As a baseline model, we estimate the fixed-effect model used until now (column (5) of Table VI) at that level of aggregation. Specifically, we estimate equation (2) from Section III. Although the number of observations is reduced to 86, the first column of Table IX shows that the results for this model are still highly significant.28 Table IX also presents a first-differences model as an alternative way to control for the fixed effect of each subdistrict. The results in column (3) are very similar to those for the fixed-effect model, thereby providing support for the strict exogeneity assumption. In addition, we consider an alternative specification of the “levels” model that includes a lagged dependent variable as a control variable:

(7)  view_jt = ρ · view_j,t−1 + α1 · terror_jt + α2 · terror²_jt + β · x_jt + γ_t + μ_j + ε_jt.

This “dynamic panel data model” captures the idea that political attitudes may move slowly, so that attitudes at time t are correlated with those at t − 1, while terrorist activity may introduce an innovation into the evolution of political views. This model is estimated using OLS in the second column of Table IX, which shows results very similar to those for the model without the

27. The table presents the marginal effects evaluated at the means from the probit model. In the rest of the paper, we choose to present results from a linear probability model instead of a probit, because the interpretation of the marginal effects using a nonlinear specification is not straightforward in a probit model.
28. This regression also differs from the one using individual-level data by giving equal weight to all subdistricts, whereas the models based on individual-level data essentially give more weight to those subdistricts with more observations.
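The consistency concern behind the GMM estimators can be written out explicitly (a standard derivation, not reproduced from the paper): first-differencing (7) removes the subdistrict effect but leaves the lagged difference correlated with the differenced error, so Arellano and Bond instrument it with deeper lags of the level.

```latex
% First-differencing equation (7) eliminates the subdistrict effect \mu_j:
\Delta \mathit{view}_{jt} = \rho\,\Delta \mathit{view}_{j,t-1}
  + \alpha_1\,\Delta \mathit{terror}_{jt}
  + \alpha_2\,\Delta \mathit{terror}_{jt}^{2}
  + \beta\,\Delta x_{jt} + \Delta\gamma_t + \Delta\varepsilon_{jt}.
% \Delta view_{j,t-1} contains \varepsilon_{j,t-1}, which also enters
% \Delta\varepsilon_{jt}, so OLS on the differenced model is inconsistent.
% The Arellano--Bond moment conditions instead use lagged levels as instruments:
E\!\left[\mathit{view}_{j,t-s}\,\Delta\varepsilon_{jt}\right] = 0,
  \qquad s \ge 2.
```

The "system" estimator in column (5) adds analogous moment conditions for the levels equation, following Arellano and Bover (1995) and Blundell and Bond (1998).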

TABLE IX
THE EFFECT OF TERROR FATALITIES ON SUPPORT FOR GRANTING TERRITORIAL CONCESSIONS, AGGREGATING DATA AT SUBDISTRICT LEVEL

                                      OLS fixed effects         First          Arellano–Bond    System dynamic panel
                                                                differences    estimation       data estimation
                                      (1)          (2)          (3)            (4)              (5)
Terror fatalities per capita within a year before the survey
  Linear effect                       5.2083∗∗∗    4.608∗∗      3.478∗         3.9685∗∗∗        4.7583∗∗∗
                                      [1.75]       [2.27]       [1.78]         [1.63]           [1.83]
  Quadratic effect                    −57.306∗∗∗   −56.489∗∗∗   −39.225∗∗      −55.361∗∗∗       −57.622∗∗∗
                                      [18.29]      [21.75]      [18.44]        [16.17]          [18.02]
Lagged support for granting
  territorial concessions                          −0.0108                     −0.1463          0.0448
                                                   [0.154]                     [0.19]           [0.14]
P-value for effect of terrorism       .0112        .0374        .0034          .0000            .0007
N                                     86           66           66             48               66

Notes. Each column presents the results of a separate regression where the dependent variable is an indicator for agreeing to territorial concessions to Palestinians. In addition to terror fatalities per capita 12 months before the survey, all regressions include the same covariates as specification (5) in Table VI, aggregated at the subdistrict–year level. The first two columns present a fixed-effects estimation. Column (3) uses a first-differences estimation, column (4) presents a generalized method of moments estimate based on Arellano and Bond (1991), and column (5) uses additional moment conditions based on Arellano and Bover (1995) and Blundell and Bond (1998). Robust standard errors appear in brackets. The P-value for the effect of terrorism tests the hypothesis that the joint effect of all proxies for the severity of terrorism included in each regression is equal to zero. ∗ indicates statistically significant at the 10% level; ∗∗ at the 5% level; ∗∗∗ at the 1% level.
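The first-difference and GMM logic behind columns (3)–(5) of Table IX can be sketched with simulated data. Below is a minimal Anderson–Hsiao-style version of the idea that Arellano and Bond (1991) generalize: differencing removes the fixed effect, and the remaining correlation between Δy_{t−1} and the differenced error is handled by instrumenting with the lagged level y_{t−2}. All data and parameter names here are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 2_000, 6          # short panel, many units (subdistrict-like)
rho, beta = 0.5, 1.0     # true persistence and covariate effect

alpha = rng.standard_normal(N)                   # unit fixed effects
x = rng.standard_normal((N, T))
y = np.zeros((N, T))
for t in range(1, T):
    y[:, t] = rho * y[:, t - 1] + beta * x[:, t] + alpha + rng.standard_normal(N)

# First differences remove alpha, but dy_{t-1} is correlated with the
# differenced error; instrument it with the level y_{t-2} (Anderson–Hsiao).
dy  = (y[:, 2:] - y[:, 1:-1]).ravel()            # dy_t,   t = 2..T-1
dy1 = (y[:, 1:-1] - y[:, :-2]).ravel()           # dy_{t-1}
dx  = (x[:, 2:] - x[:, 1:-1]).ravel()            # dx_t
z   = y[:, :-2].ravel()                          # instrument: y_{t-2}

W = np.column_stack([dy1, dx])
Z = np.column_stack([z, dx])                     # dx instruments itself
iv = np.linalg.solve(Z.T @ W, Z.T @ dy)          # just-identified IV

# Naive OLS on the differenced equation is inconsistent for rho
ols = np.linalg.lstsq(W, dy, rcond=None)[0]
print(iv[0], ols[0])
```

Arellano–Bond proper stacks all available lags of y as instruments in a GMM criterion, and the system estimator of Arellano–Bover/Blundell–Bond adds level equations with lagged differences as instruments; the single-instrument sketch above only conveys why the differenced OLS estimate is biased while the instrumented one is not.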

1500 QUARTERLY JOURNAL OF ECONOMICS

DOES TERRORISM WORK?

1501

lagged dependent variable. To address concerns about whether OLS yields consistent estimates when a lagged dependent variable is included in the fixed-effects model, the model is also estimated with GMM following Arellano and Bond (1991) in column (4), and with additional moment conditions (see Arellano and Bover [1995] and Blundell and Bond [1998]) in column (5). Both methods yield results very similar to the previous ones. Overall, the nonlinear effect of terror on political accommodation is robust to all of the most widely used panel data methods.

V.E. Alternative Measures of Political Attitudes

We now examine whether terror has affected the two other available measures of political views: support for a Palestinian state, and defining oneself as right-wing. To do this, we create a summary measure using the first factor from a factor analysis of the two views. The first factor explains 71% of the variation, and the factor loadings can be interpreted as giving a more positive weight to more accommodating positions. Specifically, the loadings on the first factor are 0.85 on "support for a Palestinian state" and −0.85 on "defining oneself as having a right-wing political tendency." As such, positive values of the first factor indicate a more left-wing position on the conflict. Results for this summary measure are presented in Table X. As before, there is a significant nonlinear effect of terrorism on this summary measure: terrorist attacks induce individuals to shift their views toward a more accommodating stance, but after a certain threshold, additional attacks cause Israelis to adopt a more hard-line stance toward the Palestinians. Table X also presents the results for our summary measure of alternative attitudes for different subpopulations.
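The reported factor structure is exactly what a two-variable factor analysis must deliver: with two standardized variables, the first factor of the 2 × 2 correlation matrix always loads the two variables with equal magnitude (signs following the correlation), it explains (1 + |ρ|)/2 of the variance — so 71% corresponds to a correlation of about −0.42 between the two attitudes — and rescaling the unit eigenvector by √λ reproduces loadings of about ±0.85. A sketch with constructed data (ours, not the survey):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5_000
# Two illustrative binary attitudes, negatively correlated by construction
support_state = (rng.random(n) < 0.5).astype(float)
flip = rng.random(n) < 0.71                    # mostly the mirror image
right_wing = np.where(flip, 1 - support_state, support_state)

Z = np.column_stack([support_state, right_wing])
Z = (Z - Z.mean(0)) / Z.std(0)                 # standardize
R = (Z.T @ Z) / n                              # correlation matrix

eigvals, eigvecs = np.linalg.eigh(R)           # eigenvalues in ascending order
first = eigvecs[:, -1]                         # first principal factor (unit norm)
share = eigvals[-1] / eigvals.sum()            # variance explained
loadings = first * np.sqrt(eigvals[-1])        # factor loadings, about ±0.84 here

rho = R[0, 1]
print(rho, share, loadings)
```

With two variables the eigenvalues are exactly 1 ± |ρ|, so the explained share is (1 + |ρ|)/2 and the loadings have magnitude √((1 + |ρ|)/2); the signs simply flag which attitude is the "accommodating" pole.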
Again, the nonlinear pattern is significant for most demographic groups, but the effects are once again much stronger for people who are religious, less educated, and from an Asian–African ethnic background. These findings demonstrate that terrorism has had a pervasive impact within many subgroups of the population, with the strongest impact on groups that are particularly known for holding right-wing views. The finding of stronger effects for these traditionally right-wing groups across all outcomes highlights the dramatic shift in the political map in Israel. Overall, the results in this section show that the significant effect of terrorism on political attitudes, and its nonlinear pattern, is robust to alternative ways of defining an individual’s views
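The threshold at which additional attacks begin to backfire is implied directly by the quadratic specification: the fitted effect b·f + c·f² peaks where its derivative vanishes, at f∗ = −b/(2c). Using the Table IX, column (1) point estimates quoted above:

```python
# Turning point of the estimated quadratic effect of terror fatalities
# per capita (units as in the paper's regressor).
b, c = 5.2083, -57.306      # linear and quadratic coefficients, Table IX col. (1)
f_star = -b / (2 * c)       # estimated support peaks here, declines beyond it
print(round(f_star, 4))     # → 0.0454
```

Beyond this estimated level of fatalities per capita, the model implies that further attacks reduce, rather than increase, support for concessions.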

TABLE X
THE EFFECT OF TERROR FATALITIES ON A SUMMARY MEASURE OF TWO ALTERNATIVE ATTITUDES TOWARD PALESTINIANS BASED ON FACTOR ANALYSIS

Effect of terror fatalities per capita using only survey data:

                               Linear effect        Quadratic effect      P-value
Entire sample                  4.7977∗ [2.56]       −56.65∗∗∗ [22.7]      .0323
Partition by gender
  Male                         5.885∗ [3.24]        −56.155∗ [29.8]       .1718
  Female                       5.299∗ [2.75]        −74.78∗∗∗ [23.5]      .0009
Partition by age
  Below 30                     3.072 [3.80]         −35.96 [33.5]         .5116
  30 to 45                     4.197 [3.66]         −19.56 [34.5]         .2135
  Above 45                     3.2673 [3.24]        −65.77∗∗ [29.2]       .0039
Partition by expenditures
  Below average                3.049 [4.51]         −45.44 [40.0]         .1956
  Average                      8.726∗∗∗ [2.69]      −91.85∗∗∗ [25.8]      .0024
  Above average                2.2953 [3.42]        −37.90 [31.0]         .2489
Partition by education
  Below academic               6.137∗∗∗ [2.35]      −59.19∗∗∗ [24.0]      .0365
  Academic education           1.975 [4.60]         −49.23 [42.1]         .0786
Partition by religiosity
  Secular                      0.104 [3.48]         −19.33 [31.5]         .2902
  Religious                    12.223∗∗∗ [3.44]     −111.97∗∗∗ [31.2]     .0024
Partition by country of birth
  Immigrant                    4.525 [3.27]         −56.78∗ [32.2]        .1947
  Native Israeli               4.599∗ [2.70]        −59.52∗∗∗ [25.0]      .0388
Partition by ethnicity
  African–Asian                10.778∗∗∗ [3.50]     −119.75∗∗∗ [33.5]     .0023
  Other                        0.724 [3.22]         −20.55 [29.1]         .4357

Effect of terror fatalities per capita including subdistrict time trends and characteristics:

                               Linear effect        Quadratic effect      P-value    N
Entire sample                  5.7566∗∗ [2.75]      −64.085∗∗∗ [24.7]     .0340      5,122
Partition by gender
  Male                         8.2814∗∗ [4.12]      −83.184∗∗ [37.0]      .0860      2,518
  Female                       4.9736∗∗ [2.30]      −66.643∗∗∗ [24.2]     .0236      2,604
Partition by age
  Below 30                     3.2349 [4.35]        −37.925 [41.6]        .6285      1,785
  30 to 45                     7.2096∗ [3.91]       −49.318 [37.6]        .1416      1,635
  Above 45                     3.3378 [4.42]        −68.630 [42.9]        .0825      1,702
Partition by expenditures
  Below average                1.7381 [4.66]        −55.341 [40.3]        .0275      1,902
  Average                      11.7436∗∗∗ [3.32]    −81.651∗∗∗ [31.1]     .0013      1,809
  Above average                7.0940∗ [4.19]       −94.575∗∗ [41.6]      .0300      1,411
Partition by education
  Below academic               8.7890∗∗∗ [2.45]     −94.435∗∗∗ [25.1]     .0013      3,123
  Academic education           1.8101 [4.85]        −19.621 [42.1]        .8894      1,999
Partition by religiosity
  Secular                      4.3563 [4.01]        −49.211 [36.0]        .3748      3,784
  Religious                    10.6170∗∗ [4.66]     −124.19∗∗∗ [43.4]     .0136      1,338
Partition by country of birth
  Immigrant                    −1.0630 [4.30]       −4.858 [37.6]         .7262      1,942
  Native Israeli               9.9804∗∗∗ [2.77]     −106.52∗∗∗ [27.6]     .0010      3,180
Partition by ethnicity
  African–Asian                11.9123∗∗∗ [3.34]    −155.76∗∗∗ [34.5]     .0001      2,054
  Other                        0.9514 [3.52]        −0.288 [32.9]         .8069      3,068

Notes. Each row in each panel presents the results of a separate OLS regression where the dependent variable is an indicator for accommodating views toward the Palestinians, constructed using factor analysis based on the two attitudes discussed in the text. In addition to terror fatalities per capita within a year before the survey, all regressions include the same covariates as specifications (5) and (7) in Table VI. Robust standard errors, adjusted for clustering at the subdistrict–year level, appear in brackets. The P-value for the effect of terrorism tests the hypothesis that the joint effect of all proxies for the severity of terrorism included in each regression is equal to zero. ∗ indicates statistically significant at the 10% level; ∗∗ at the 5% level; ∗∗∗ at the 1% level.


toward the Palestinians.29 Moreover, the similarity of the results for all three measures of political views provides further evidence against the possibility that changes in the structure of a particular question in the survey over time could be responsible for our main results.

VI. THE EFFECT OF TERROR ON VOTING VERSUS POLITICAL VIEWS

We now examine the effect of terror on voting preferences, using the outcome variable "support for the right-wing bloc" in the upcoming elections. As we will see, this variable is fundamentally different in nature from the previous outcome measures. Table XI presents this analysis for the linear specification; the quadratic term is not included because it was insignificant in most specifications. In contrast to the political outcomes previously analyzed, the linear effect is generally positive, which suggests that terror attacks encourage Israelis to vote for right-wing parties. This finding is consistent with Berrebi and Klor (2008), who used data on actual voting patterns at the local level (rather than our measure, which uses the respondent's voting intentions in the upcoming elections) to show that local attacks turned voters toward right-wing parties. Combined with our previous results, this suggests that terrorism is causing Israelis to vote increasingly for right-wing parties while, at the same time, they are turning left in their political views. The difference in the pattern of results can be reconciled by the idea that the platforms of the political parties are endogenously changing over time. This shift is evident from a casual inspection of the parties' official platforms. For example, the platform of the right-wing Likud party during the 1988 elections stated on its first page that "The State of Israel has the right to sovereignty in Judea, Samaria, and the Gaza Strip," and that "there will be no territorial division, no Palestinian state, foreign sovereignty, or foreign self-determination (in the land of Israel)." This stands in stark contrast to the Likud's platform before the 2009 elections, which stated that "The Likud is prepared to make (territorial) concessions in exchange for a true and reliable peace
29. The overall results using the "support for a Palestinian state" measure are weaker than for the other measures, but the familiar pattern remains: strong and significant effects are found for individuals with an Asian–African background.

TABLE XI
THE EFFECT OF TERROR FATALITIES ON VOTES FOR A PARTY IN THE RIGHT BLOC OF POLITICAL PARTIES

                               Linear effect,          Linear effect, including subdistrict
                               survey data only        time trends and characteristics        N
Entire sample                  0.7195∗∗ [0.35]         0.7639∗ [0.41]                         6,196
Partition by gender
  Male                         1.7809∗∗∗ [0.43]        2.1207∗∗∗ [0.53]                       3,155
  Female                       −0.2216 [0.50]          −0.8345 [0.60]                         3,041
Partition by age
  Below 30                     1.1080∗ [0.58]          1.0983∗ [0.63]                         2,022
  30 to 45                     −0.4158 [0.51]          −0.1460 [0.68]                         1,916
  Above 45                     1.3511∗∗∗ [0.53]        1.1542 [0.83]                          2,258
Partition by expenditures
  Below average                1.6769∗∗∗ [0.67]        1.5026∗ [0.87]                         2,319
  Average                      −0.1969 [0.53]          −1.5186∗∗ [0.65]                       2,153
  Above average                0.7184 [0.71]           1.7822∗∗∗ [0.71]                       1,724
Partition by education
  Below academic               0.6903 [0.50]           0.8561∗ [0.50]                         3,566
  Academic education           0.9045∗ [0.47]          −0.2633 [0.65]                         2,630
Partition by religiosity
  Secular                      1.1535∗∗∗ [0.48]        0.8770 [0.60]                          4,604
  Religious                    −0.2812 [0.58]          0.7232 [0.73]                          1,592
Partition by country of birth
  Immigrant                    1.1741∗∗ [0.56]         1.2452 [0.84]                          2,402
  Native Israeli               0.6675 [0.50]           0.3775 [0.48]                          3,794
Partition by ethnicity
  African–Asian                0.6118 [0.66]           1.0530 [0.77]                          2,323
  Other                        0.8741∗∗ [0.39]         0.1191 [0.54]                          3,873

Notes. Each row presents the results of two separate OLS regressions where the dependent variable is an indicator for voting for a party in the right bloc of political parties. In addition to terror fatalities per capita within a year before the survey, all regressions include the same covariates as specifications (5) and (7) in Table VI. Robust standard errors, adjusted for clustering at the subdistrict–year level, appear in brackets. ∗ indicates statistically significant at the 10% level; ∗∗ at the 5% level; ∗∗∗ at the 1% level.


agreement.”30 Arguably, the Likud’s position in 2009 is to the left of the left-wing Labor party’s platform in 1988.

Table XI also presents the analysis within each subgroup, and shows that the linear effect of terrorism on supporting right-wing parties is found predominantly among individuals who are male, non-native, secular, highly educated, and not from an Asian–African ethnic background. The last three groups are quite notable, because our previous results indicated that terrorism leads to a more accommodating attitude particularly among individuals who are religious, less educated, and from an Asian–African background. In contrast, we now find that the shift toward right-wing parties occurred within subgroups that are strongly identified with left-wing parties, rather than within groups that are typically right-wing. The stronger results for left-leaning groups on the probability of voting right-wing, combined with our evidence that these groups are not shifting toward less accommodating political views regarding Palestinian demands, show that they are increasing their support for right-wing parties only because the right-wing parties are moving to the left. As a result, the overall pattern of results suggests that terror is shifting the entire political landscape: public opinion moves to the left, and the right-wing parties move with it.

This pattern of results is consistent with the theory of policy voting (Kiewiet 1981). According to this theory, parties benefit from the salience of issues to which they are widely viewed as attaching the highest priority. Thus, in periods of repeated terror attacks, voters increase their support for the right bloc of political parties because terror attacks amplify the salience of the security dimension in political discourse, and the right bloc is typically identified as placing a stronger emphasis on security-related issues than the left bloc.
Given that right-wing parties benefit from the increasing prominence of the security issue during a wave of terror, theoretical models of candidate location predict that this causes the disadvantaged candidate, which in this scenario is the left bloc of parties, to move away from the center (Groseclose 2001; Aragones and Palfrey 2002). In contrast, the advantaged candidate—the right bloc—moves toward the center

30. For the 1988 elections, the Labor-Alignment party platform “ruled out the establishment of another separate state within the territorial area between Israel and Jordan.” For the 2009 elections, however, Labor supported the creation of a Palestinian state together with the evacuation of isolated settlements.


of the political map. As a result, the left bloc loses support to the right bloc, as the right bloc moves to the left. This is one possible explanation for the pattern of results uncovered in our analysis, but the theoretical mechanisms behind these findings deserve further attention.

VII. CONCLUSIONS

This paper presents the first systematic examination of whether terrorism is an effective strategy for achieving political goals, paying particular attention to the issue of causality. Our results show that terror attacks by Palestinian factions have moved the Israeli electorate toward a more accommodating stance regarding the political objectives of the Palestinians. At the same time, terrorism induces Israelis to vote increasingly for right-wing parties, as the right-wing parties (and particular demographic groups that tend to hold right-wing views) shift to the left in response to terror.31 These findings highlight the importance of examining how terrorism affects political views, not just voting patterns, when assessing the effectiveness of terror. Looking only at the effect of terrorism on voting patterns in order to infer its effect on political views would lead to the opposite conclusion, at least in the context of the Israeli–Palestinian conflict. Although terrorism in small doses appears to be an effective political tool, our results suggest that terror activity beyond a certain threshold backfires on the goals of terrorist factions by hardening the stance of the targeted population. This finding offers one explanation for why terrorist factions tend to carry out their attacks in episodes that are limited in scale and geographically dispersed. Others have argued that Palestinian terrorism has worked in exacting political concessions (Dershowitz 2002; Hoffman 2006). Their claim, however, is that terrorism raised the salience of the Israeli–Palestinian conflict, which increased pressure from the
31. We show significant effects on public opinion, but for terror to be effective, these should translate into changes in public policy. Our finding of a dramatic shift in the parties’ political platforms is consistent with the idea that terror has led to a significant change in the policies of the main political parties. In practice, this policy shift is perhaps best exemplified by Israel’s unilateral withdrawal from the entire Gaza Strip and parts of the West Bank in 2005 (in the aftermath of the second Palestinian uprising), carried out by a government led by the right-wing Likud party.


international community on the Israeli government. Our paper shows that terrorism works not only because of the possibility of fostering international pressure, but also because it creates domestic political pressure from the targeted electorate.

Many conflicts in history have been settled by peaceful means (the racial conflict in South Africa, the civil rights movement in the United States, the British occupation of India, etc.). Understanding when conflicts are conducted peacefully rather than violently is a complicated issue that deserves more attention. It may well be the case that a more peaceful, diplomatic strategy would have been more effective in achieving Palestinian goals. Moreover, the apparent political effectiveness of Palestinian terrorism may not have been worth the economic, social, and human cost to the Palestinian population over time, as the conflict remains unsettled to this day. However, by showing that terror can be an effective political tool, our findings not only provide insights into how the Israeli–Palestinian conflict has evolved over time, but also shed light on why terror appears to be increasing in many parts of the world. Effective and comprehensive counterterrorism policies—which may consist of deterrence, raising the costs to terrorists, and diplomatic efforts—have to take into account the political gains that can be obtained through terrorism.

THE HEBREW UNIVERSITY OF JERUSALEM, CENTRE FOR ECONOMIC POLICY RESEARCH, AND INSTITUTE FOR THE STUDY OF LABOR

THE HEBREW UNIVERSITY OF JERUSALEM AND CENTRE FOR ECONOMIC POLICY RESEARCH

REFERENCES

Abadie, Alberto, “Poverty, Political Freedom, and the Roots of Terrorism,” American Economic Review, 96 (2006), 50–56.
Abrahms, Max, “Why Terrorism Does Not Work,” International Security, 31 (2006), 42–78.
——, “Why Democracies Make Superior Counterterrorists,” Security Studies, 16 (2007), 223–253.
Aragones, Enriqueta, and Thomas R. Palfrey, “Mixed Equilibrium in a Downsian Model with a Favored Candidate,” Journal of Economic Theory, 103 (2002), 131–161.
Arellano, Manuel, and Stephen Bond, “Some Tests of Specification for Panel Data: Monte Carlo Evidence and an Application to Employment Equations,” Review of Economic Studies, 58 (1991), 277–297.
Arellano, Manuel, and Olympia Bover, “Another Look at the Instrumental Variable Estimation of Error-Components Models,” Journal of Econometrics, 68 (1995), 29–51.
Arian, Asher, and Michal Shamir, eds., The Elections in Israel—2006 (New Brunswick, NJ: Transaction, 2008).


Baliga, Sandeep, and Tomas Sjöström, “Decoding Terror,” Mimeo, Northwestern University, 2009.
Becker, Gary S., and Yona Rubinstein, “Fear and the Response to Terrorism: An Economic Analysis,” Mimeo, Brown University, 2008.
Benmelech, Efraim, and Claude Berrebi, “Human Capital and the Productivity of Suicide Bombers,” Journal of Economic Perspectives, 21 (2007), 223–238.
Benmelech, Efraim, Claude Berrebi, and Esteban F. Klor, “Economic Conditions and the Quality of Suicide Terrorism,” Mimeo, The Hebrew University of Jerusalem, 2009.
Berman, Eli, and David D. Laitin, “Hard Targets: Theory and Evidence on Suicide Attacks,” NBER Working Paper No. 11740, 2005.
——, “Religion, Terrorism and Public Goods: Testing the Club Model,” Journal of Public Economics, 92 (2008), 1942–1967.
Berrebi, Claude, “Evidence about the Link between Education, Poverty and Terrorism among Palestinians,” Peace Economics, Peace Science and Public Policy, 13 (2007), Article 2.
Berrebi, Claude, and Esteban F. Klor, “On Terrorism and Electoral Outcomes: Theory and Evidence from the Israeli–Palestinian Conflict,” Journal of Conflict Resolution, 50 (2006), 899–925.
——, “Are Voters Sensitive to Terrorism? Direct Evidence from the Israeli Electorate,” American Political Science Review, 102 (2008), 279–301.
Blomberg, S. Brock, and Gregory D. Hess, “The Lexus and the Olive Branch: Globalization, Democratization and Terrorism,” in Terrorism, Economic Development, and Political Openness, Philip Keefer and Norman Loayza, eds. (New York: Cambridge University Press, 2008).
Bloom, Mia, Dying to Kill: The Allure of Suicide Terror (New York: Columbia University Press, 2005).
Blundell, Richard, and Stephen Bond, “Initial Conditions and Moment Restrictions in Dynamic Panel-Data Models,” Journal of Econometrics, 87 (1998), 115–143.
Bueno de Mesquita, Ethan, “Conciliation, Counterterrorism, and Patterns of Terrorist Violence,” International Organization, 51 (2005a), 145–176.
——, “The Quality of Terror,” American Journal of Political Science, 49 (2005b), 515–530.
Bueno de Mesquita, Ethan, and Eric S. Dickson, “The Propaganda of the Deed: Terrorism, Counterterrorism, and Mobilization,” American Journal of Political Science, 51 (2007), 364–381.
Byman, Daniel L., “Al-Qaeda as an Adversary: Do We Understand Our Enemy?” World Politics, 56 (2003), 139–163.
Dershowitz, Alan, Why Terrorism Works (New Haven, CT: Yale University Press, 2002).
Enders, Walter, and Todd Sandler, The Political Economy of Terrorism (Cambridge, UK: Cambridge University Press, 2006).
Eubank, William L., and Leonard B. Weinberg, “Terrorism and Democracy: Perpetrators and Victims,” Terrorism and Political Violence, 13 (2001), 155–164.
Gordon, Carol, and Asher Arian, “Threat and Decision Making,” Journal of Conflict Resolution, 45 (2001), 196–215.
Gould, Eric, and Guy Stecklov, “Terror and the Costs of Crime,” Journal of Public Economics, 93 (2009), 1175–1188.
Greenberg, Joel, “Mideast Turmoil: Palestinian Suicide Planner Expresses Joy over His Missions,” New York Times, May 9, 2002.
Groseclose, Tim, “A Model of Candidate Location When One Candidate Has a Valence Advantage,” American Journal of Political Science, 45 (2001), 862–886.
Hoffman, Bruce, Inside Terrorism (New York: Columbia University Press, 2006).
Iyengar, Radha, and Jonathan Monten, “Is There an ‘Emboldenment’ Effect? Evidence from the Insurgency in Iraq,” NBER Working Paper No. 13839, 2008.
Jackson Wade, Sara E., and Dan Reiter, “Does Democracy Matter? Regime Type and Suicide Terrorism,” Journal of Conflict Resolution, 51 (2007), 329–348.
Jaeger, David A., Esteban F. Klor, Sami H. Miaari, and M. Daniele Paserman, “The Struggle for Palestinian Hearts and Minds: Violence and Public Opinion in the Second Intifada,” NBER Working Paper No. 13956, 2008.


Jaeger, David A., and M. Daniele Paserman, “The Cycle of Violence? An Empirical Analysis of Fatalities in the Palestinian–Israeli Conflict,” American Economic Review, 98 (2008), 1591–1604.
Karol, David, and Edward Miguel, “The Electoral Cost of War: Iraq Casualties and the 2004 U.S. Presidential Election,” Journal of Politics, 69 (2007), 633–648.
Kiewiet, D. Roderick, “Policy-Oriented Voting in Response to Economic Issues,” American Political Science Review, 75 (1981), 448–459.
Krueger, Alan B., What Makes a Terrorist: Economics and the Roots of Terrorism (Princeton, NJ: Princeton University Press, 2007).
Krueger, Alan B., and David D. Laitin, “Kto Kogo?: A Cross-Country Study of the Origins and Targets of Terrorism,” in Terrorism, Economic Development, and Political Openness, Philip Keefer and Norman Loayza, eds. (New York: Cambridge University Press, 2008).
Krueger, Alan B., and Jitka Maleckova, “Education, Poverty and Terrorism: Is There a Causal Connection?” Journal of Economic Perspectives, 17 (2003), 119–144.
Kydd, Andrew, and Barbara F. Walter, “Sabotaging the Peace: The Politics of Extremist Violence,” International Organization, 56 (2002), 263–296.
——, “The Strategies of Terrorism,” International Security, 31 (2006), 49–80.
Lapan, Harvey E., and Todd Sandler, “Terrorism and Signaling,” European Journal of Political Economy, 9 (1993), 383–398.
Li, Quan, “Does Democracy Promote or Reduce Transnational Terrorist Incidents?” Journal of Conflict Resolution, 49 (2005), 278–297.
Pape, Robert A., “The Strategic Logic of Suicide Terrorism,” American Political Science Review, 97 (2003), 1–19.
——, Dying to Win: The Strategic Logic of Suicide Terrorism (New York: Random House Trade Paperbacks, 2005).
Piazza, James A., “A Supply-Side View of Suicide Terrorism: A Cross-National Study,” Journal of Politics, 70 (2008), 28–39.
Powell, Robert, “Allocating Defensive Resources with Private Information about Vulnerability,” American Political Science Review, 101 (2007), 799–809.
Rohner, Dominic, and Bruno S. Frey, “Blood and Ink! The Common-Interest-Game between Terrorists and the Media,” Public Choice, 133 (2007), 129–145.
Shamir, Michal, and Asher Arian, “Collective Identity and Electoral Competition in Israel,” American Political Science Review, 93 (1999), 265–277.
Sheafer, Tamir, and Shira Dvir-Gvirsman, “The Spoiler Effect: Framing Attitudes and Expectations toward Peace,” Journal of Peace Research, 47 (2010), 205–215.
Spilerman, Seymour, and Guy Stecklov, “Societal Responses to Terrorist Attacks,” Annual Review of Sociology, 35 (2009), 167–189.
Weinberg, Leonard B., and William L. Eubank, “Does Democracy Encourage Terrorism?” Terrorism and Political Violence, 6 (1994), 417–443.

POLITICAL SELECTION AND PERSISTENCE OF BAD GOVERNMENTS∗

DARON ACEMOGLU
GEORGY EGOROV
KONSTANTIN SONIN

We study dynamic selection of governments under different political institutions, with a special focus on institutional “flexibility.” A government consists of a subset of the individuals in the society. The competence level of the government in office determines collective utilities (e.g., by determining the amount and quality of public goods), and each individual derives additional utility from being part of the government (e.g., rents from holding office). We characterize the dynamic evolution of governments and determine the structure of stable governments, which arise and persist in equilibrium. In our model, perfect democracy, where current members of the government do not have veto power over changes in governments, always leads to the emergence of the most competent government. However, any deviation from perfect democracy, to any regime with incumbency veto power, destroys this result. There is always at least one other, less competent government that is also stable and can persist forever, and even the least competent government can persist forever in office. We also show that there is a nonmonotonic relationship between the degree of incumbency veto power and the quality of government. In contrast, in the presence of stochastic shocks or changes in the environment, a regime with less incumbency veto power has greater flexibility and greater probability that high-competence governments will come to power. This result suggests that a particular advantage of “democratic regimes” (with a limited number of veto players) may be their greater adaptability to changes rather than their performance under given conditions. Finally, we show that “royalty-like” dictatorships may be more successful than “junta-like” dictatorships because in these regimes veto players are less afraid of change.

I. INTRODUCTION

A central role of (successful) political institutions is to ensure the selection of the right (honest, competent, motivated) politicians. Besley (2005, p. 43), for example, quotes James Madison to emphasize the importance of the selection of politicians for the success of a society:

∗ Daron Acemoglu gratefully acknowledges financial support from the National Science Foundation and the Canadian Institute for Advanced Research. We thank four anonymous referees, the editors, Robert Barro and Elhanan Helpman, and participants at the California Institute of Technology Political Economy Seminar, the “Determinants of Social Conflict” conference in Madrid, the Frontiers of Political Economics conference in Moscow, the NBER Summer Institute in Cambridge, the Third Game Theory World Congress in Evanston, and the University of Chicago theory seminar, and Renee Bowen and Robert Powell in particular, for helpful comments.
© 2010 by the President and Fellows of Harvard College and the Massachusetts Institute of Technology.
The Quarterly Journal of Economics, November 2010


The aim of every political Constitution, is or ought to be, first to obtain for rulers men who possess most wisdom to discern, and most virtue to pursue, the common good of society; and in the next place, to take the most effectual precautions for keeping them virtuous whilst they continue to hold their public trust.

Equally important, but less often stressed, is the “flexibility” of institutions, meaning their ability to deal with shocks and changing situations.1 In this paper, we construct a dynamic model of government formation to highlight the potential sources of inefficiency in the selection of governments and to identify features of political processes that create “institutional flexibility.”2 The “government” is made up of a subset of the citizens (e.g., each three-player group may be a government). Each (potential) government has a different level of competence, determining the collective utility it provides to citizens (e.g., the level of public goods). Each individual also receives rents from being part of the government (additional income, utility of office, or rents from corruption). New governments are put in place by a combination of “votes” from the citizens and “consent” from current government members. We parameterize different political regimes by the extent of necessary consent of current government members, which we refer to as incumbency veto power.3 A “perfect” democracy can be thought of as a situation in which there is no incumbency veto power and no such consent is necessary. Many political institutions, in contrast, provide additional decision-making or blocking power to current government members. For instance, in many democracies, various sources of incumbency veto power

1. For instance, the skills necessary for successful wartime politicians and governments are very different from those useful for the successful management of the economy during peacetime, as illustrated perhaps most clearly by Winston Churchill’s political career.
2. Even though we model changes in the underlying environment and the competences of different governments as resulting from stochastic shocks, in practice these may also result from deterministic changes in the nature of the economy. For example, authoritarian regimes such as the rule of General Park in South Korea or Lee Kuan Yew in Singapore may be beneficial or less damaging during the early stages of development, whereas a different style of government, with greater participation, may be necessary as the economy develops and becomes more complex. Acemoglu, Aghion, and Zilibotti (2006) suggest that “appropriate” institutions may be a function of the distance of an economy from the world technology frontier, and Aghion, Alesina, and Trebbi (2009) provide empirical evidence consistent with this pattern.
3. The role of veto players in politics is studied in Tsebelis (2002); “individual veto players” in Tsebelis (2002) are similar to the “members of royalty” discussed below. In contrast, incumbency veto power in our model implies that some of the current members of the government need to consent to changes (and the identity of those providing their consent is not important).


make the government in power harder to oust than instituting it anew would have been had it been out of power (see, e.g., Cox and Katz [1996] for a discussion of such incumbency veto power in mature democracies). In nondemocratic societies, the potential veto power of current government members is more pronounced, so one might naturally think that consent from several members of the current government would be required before a change was implemented. In this light, we take incumbency veto power as an inverse measure of democracy, though it only captures one stylized dimension of how democratic a regime is. The first contribution of our paper is to provide a general and tractable framework for the study of dynamic political selection issues and to provide a detailed characterization of the structure (and efficiency) of the selection of politicians under different political institutions (assuming sufficiently forward-looking players). Perfect democracy always ensures the emergence of the best (most competent) government. In contrast, under any other arrangement, incompetent and bad governments can emerge and persist despite the absence of information-related challenges to selecting good politicians. For example, even a small departure from perfect democracy, whereby only one member of the current government needs to consent to a new government, may make the worst possible government persist forever. The intuitive explanation for why even a small degree of incumbency veto power might lead to such outcomes is as follows: improvements away from a bad (or even the worst) government might lead to another potential government that is itself unstable and will open the way for a further round of changes. If this process ultimately leads to a government that does not have any common members with the initial government, then it may fail to get the support of any of the initial government members. 
In this case, the initial government survives even though it has a low, or even possibly the lowest, level of competence. This result provides a potential explanation for why many autocratic or semiautocratic regimes, including those currently in power in Iran, Russia, Venezuela, and Zimbabwe, resist the inclusion of “competent technocrats” in the government: they are afraid that these technocrats can later become supporters of further reform, ultimately unseating even the most powerful current incumbents.4

4. For example, on Iranian politics and resistance to the inclusion of technocrats during Khomeini’s reign, see Menashri (2001), and more recently under Ahmadinejad’s presidency, see Alfoneh (2008). On Russian politics under Vladimir Putin, see Baker and Glasser (2007). On Zimbabwe under Mugabe, see Meredith (2007).

Another important implication of these dynamic interactions in political selection is that, beyond perfect democracy, there is no obvious ranking among different shades of imperfect democracy. Any of these different regimes may lead to better governments in the long run. This result is consistent with the empirical findings in the literature that show no clear-cut relationship between democracy and economic performance (e.g., Barro [1996]; Przeworski and Limongi [1997]; Minier [1998]). In fact, under all regimes except perfect democracy, the competence of the equilibrium government and the success of the society depend strongly on the identity of the initial members of the government. This is in line with the emphasis in the recent political science and economics literatures on the role that leaders may play under weak institutions (see, for example, Brooker [2000] or Jones and Olken [2005], who show that the death of an autocrat leads to a significant change in growth, whereas this does not happen with democratic leaders).

Our second contribution relates to the study of institutional flexibility. For this purpose, we enrich the above-mentioned framework with shocks that change the competence of different types of governments (thus capturing potential changes in the needs of the society for different types of skills and expertise). Although a systematic analysis of this class of dynamic games is challenging, we provide a characterization of the structure of equilibria when stochastic shocks are sufficiently infrequent and players are sufficiently patient. Using this characterization, we show how the quality (competence level) of governments evolves in the presence of stochastic shocks and how this evolution is shaped by political institutions.

Whereas without shocks a greater degree of democracy (fewer veto players) does not necessarily guarantee a better government, in the stochastic environment it leads to greater institutional flexibility and to better outcomes in the long run (in particular, a higher probability that the best government will be in power). Intuitively, this is because a regime with fewer veto players enables greater adaptability to changes in the environment (which alter the relative ranking of governments in terms of quality).5 At a slightly more technical level, this result reflects the fact that in a regime with limited incumbency veto power, there are “relatively few” other stable governments near a stable government, so a shock that destabilizes the current government likely leads to a big jump in competence.

5. The stochastic analysis also shows that random shocks to the identity of the members of the government may sometimes lead to better governments in the long run because they destroy stable incompetent governments. Besley (2005, p. 50) writes, “History suggests that four main methods of selection to political office are available: drawing lots, heredity, the use of force and voting.” Our model suggests why, somewhat paradoxically, drawing lots, which was used in ancient Greece, might sometimes lead to better long-run outcomes than the alternatives.

Finally, we also show that in the presence of shocks, “royalty-like” nondemocratic regimes, where some individuals must always be in the government, may lead to better long-run outcomes than “junta-like” regimes, where a subset of the current members of the junta can block change (even though no specific member is essential). The royalty-like regimes might sometimes allow greater adaptation to change because one (or more) of the members of the initial government is secure in his or her position. In contrast, as discussed above, without such security the fear of further changes might block all competence-increasing reforms in government.6

6. This and several of the results for junta-like regimes discussed above contrast with the presumption in the existing literature that a greater number of veto players increases policy stability (e.g., Tsebelis [2002]). In particular, the presence of a veto player (or member of “royalty”) would increase stability when players were not forward-looking or discounted the future very heavily, but we show that it can reduce stability when they are forward-looking and patient.

We now illustrate some of the basic ideas with a simple example.

EXAMPLE 1. Suppose that the society consists of n ≥ 6 individuals, and that any k = 3 individuals could form a government. A change in government requires both the support of the majority of the population and the consent of l = 1 member of the government, so that there is a “minimal” degree of incumbency veto power. Suppose that individual j has a level of competence γj, and order the individuals, without loss of generality, in descending order of competence, so that γ1 > γ2 > · · · > γn. The competence of a government is the sum of the competences of its three members. Each individual obtains utility from the competence level of the government and also a large rent from being in office, so that each prefers to be in office regardless of the competence level of the government. Suppose also that individuals have a sufficiently high discount factor, so that the future matters a lot relative to the present.


It is straightforward to determine the stable governments that will persist and remain in power once formed. Evidently, {1, 2, 3} is a stable government, because it has the highest level of competence, so neither a majority of outsiders nor members of the government would like to initiate a change (some outsiders may want to initiate a change: for example, 4, 5, and 6 would prefer government {4, 5, 6}, but they do not have the power to enforce such a change). In contrast, governments of the form {1, i, j}, {i, 2, j}, and {i, j, 3} are unstable (for i, j > 3), which means that starting with these governments, there will necessarily be a change. In particular, in each of these cases, {1, 2, 3} will receive support both from one current member of government and from the rest of the population, who would be willing to see a more competent government. Consider next the case where n = 6 and suppose that the society starts with the government {4, 5, 6}. This is also a stable government, even though it is the lowest-competence government and thus the worst possible option for the society as a whole. This is because any change in government must result in a new government of one of the following three forms: {1, i, j}, {i, 2, j}, or {i, j, 3}. But we know that all of these types of governments are unstable. Therefore, any of the more competent governments will ultimately take the society to {1, 2, 3}, which does not include any of the members of the initial government. Because individuals are relatively patient, none of the initial members of the government would support (consent to) a change that will ultimately exclude them. As a consequence, the initial worst government persists forever. 
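The combinatorial core of this argument can be checked directly. The sketch below is ours, not the authors’: it assumes a strict-majority threshold of m = 4 votes for n = 6 and encodes the preferences described in the example (members always want to stay in office; everyone else prefers more competent governments). It verifies that every government other than {1, 2, 3} and {4, 5, 6} is defeated by {1, 2, 3}, while no winning coalition supports a direct move from {4, 5, 6} to {1, 2, 3}:

```python
from itertools import combinations

n, k, m, l = 6, 3, 4, 1        # m = 4 (strict majority) is an assumption here
people = set(range(1, n + 1))  # player 1 is the most competent
best, worst = {1, 2, 3}, {4, 5, 6}

def supporters(new, old):
    # Under the example's preferences, i prefers the more competent `new`
    # government to `old` iff i belongs to `new` or does not belong to `old`.
    return {i for i in people if i in new or i not in old}

def winning(coalition, old):
    # A change needs m votes overall and the consent of l incumbents.
    return len(coalition) >= m and len(coalition & old) >= l

# Every government other than {1,2,3} and {4,5,6} is unstable:
# {1,2,3} attracts a winning coalition against it.
for G in map(set, combinations(people, k)):
    if G not in (best, worst):
        assert winning(supporters(best, G), G)

# No winning coalition backs a direct move from {4,5,6} to {1,2,3}:
# the only supporters are 1, 2, and 3 themselves, so no incumbent
# consents (and the majority threshold fails as well).
print(winning(supporters(best, worst), worst))  # False
```

Transitions through intermediate governments such as {1, 4, 5} do command myopic winning coalitions, but, as the text argues, patient incumbents refuse them because the continuation path ends in {1, 2, 3} and their own exclusion.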
Returning to our discussion of the unwillingness of certain governments to include skilled technocrats, this example shows why such a technocrat, for example individual 1, will not be included in the government {4, 5, 6}, even though he would potentially increase the quality and competence of the government substantially. One can further verify that {4, 5, 6} is also a stable government when l = 3, because in this case any change requires the support of all three members of government and none of them would consent to a change that removed him or her from the government. In contrast, under l = 2, {4, 5, 6} is not a stable government, and thus the quality of the government


is higher under intermediate incumbency veto power, l = 2, than under l = 1 or l = 3. Now consider the same environment as above but with potential changes in the competences of the agents. For example, individual 4 may see an increase in competence, so that he or she becomes the third most competent agent (i.e., γ4 ∈ (γ3 , γ2 )). Suppose that shocks are sufficiently infrequent so that the stability of governments in periods without shocks is given by the same reasoning as in the nonstochastic case. Consider the situation starting with the government {4, 5, 6} and suppose l = 1. Then this government remains in power until the shock occurs. Nevertheless, the equilibrium government will eventually converge to {1, 2, 3}. At some point a shock will change the relative competences of agents 3 and 4, and the government {4, 5, 6} will become unstable; individual 4 will support the emergence of the government {1, 2, 4}, which now has the highest competence. In contrast, when l = 3, the ruling government remains in power even after the shock. This simple example thus illustrates how, even though a regime with fewer veto players does not ensure better outcomes in nonstochastic environments, it may provide greater flexibility and hence better long-run outcomes in the presence of shocks. Our paper is related to several different literatures. Although much of the literature on political economy focuses on the role of political institutions in providing (or failing to provide) the right incentives to politicians (see, among others, Niskanen [1971]; Barro [1973]; Ferejohn [1986]; Shleifer and Vishny [1993]; Besley and Case [1995]; Persson, Roland, and Tabellini [1997]; and Acemoglu, Robinson, and Verdier [2004]), there is also a small (but growing) literature investigating the selection of politicians, most notably Banks and Sundaram (1998), Besley (2005), and Diermeier, Keane, and Merlo (2005). 
The main challenge facing the society and the design of political institutions in these papers is that the ability and motivations of politicians are not observed by voters or outside parties. Although such information-related selection issues are undoubtedly important, our paper focuses on the difficulty of ensuring that the “right” government is selected even when information is perfect and common. Also differently from these literatures, we emphasize the importance of institutional flexibility in the face of shocks.


Osborne and Slivinski (1996), Besley and Coate (1997, 1998), Bueno de Mesquita et al. (2003), Caselli and Morelli (2004), Messner and Polborn (2004), Mattozzi and Merlo (2008), Padró-i-Miquel (2007), and Besley and Kudamatsu (2009) provide alternative and complementary “theories of bad governments/politicians.” For example, Bueno de Mesquita et al. (2003) emphasize the composition of the “selectorate,” the group of players that can select governments, as an important factor leading to inefficient policies. In Bueno de Mesquita et al. (2003), Padró-i-Miquel (2007), and Besley and Kudamatsu (2009), the fear of future instability also contributes to the emergence of inefficient policies. Caselli and Morelli (2004) suggest that voters might be unwilling to replace a corrupt incumbent by a challenger whom they expect to be equally corrupt. Mattozzi and Merlo (2008) argue that more competent politicians have higher opportunity costs of entering politics.7 However, these papers do not develop the potential persistence of bad governments resulting from the dynamics of government formation and do not focus on the importance of institutional flexibility. We are also not aware of other papers providing a comparison of different political regimes in terms of the selection of politicians under nonstochastic and stochastic conditions.8 Also closely related are prior analyses of dynamic political equilibria in the context of club formation, as in Roberts (1999) and Barberà, Maschler, and Shalev (2001), as well as dynamic analyses of the choice of constitutions and equilibrium political institutions, as in Barberà and Jackson (2004), Messner and Polborn (2004), Acemoglu and Robinson (2000, 2006), and Lagunoff (2006). Our recent work, Acemoglu, Egorov, and Sonin (2008), provides a general framework for the analysis of the dynamics of constitutions, coalitions, and clubs. The current paper is a continuation of this line of research.
It differs from our previous work in a number of important dimensions. First, the focus here is on the substantive questions concerning the relationship between different political institutions and the selection of politicians and governments, which is new, relatively unexplored, and (in our view) important. Second, this paper extends our previous work by allowing stochastic shocks, which enables us to investigate issues of institutional flexibility. Third, it involves a structure of preferences to which our previous results cannot be directly applied.9

7. McKelvey and Reizman (1992) suggest that seniority rules in the Senate and the House create an endogenous advantage for the incumbent members, and current members of these bodies will have an incentive to introduce such seniority rules.

8. Our results are also related to recent work on the persistence of bad governments and inefficient institutions, including Acemoglu and Robinson (2008), Acemoglu, Ticchi, and Vindigni (2010), and Egorov and Sonin (2010). Acemoglu (2008) also emphasizes the potential benefits of democracy in the long run, but through a different channel: the alternative, oligarchy, creates entry barriers and sclerosis.

The rest of the paper is organized as follows. Section II introduces the model. Section III introduces the concept of (Markov) political equilibrium, which allows a general and tractable characterization of equilibria in this class of games. Section IV provides our main results on the comparison of different regimes in terms of the selection of governments and politicians. Section V extends the analysis to allow stochastic changes in the competences of members of the society and presents a comparison of different regimes in the presence of stochastic shocks. Section VI concludes. The Appendix contains the proofs of some of our main results; analyzes an extensive-form game with explicitly specified proposal and voting procedures and shows the equivalence between the Markov perfect equilibria (MPEs) of this game and the (simpler) notion of political equilibrium we use in the text; and provides additional examples illustrating some of the claims we make in the text. Online Appendix B contains the remaining proofs.

II. MODEL

We consider a dynamic game in discrete time indexed by t = 0, 1, 2, . . . . The population is represented by the set I and consists of n < ∞ individuals. We refer to nonempty subsets of I as coalitions and denote the set of coalitions by C. We also designate a subset of coalitions G ⊂ C as the set of feasible governments. For example, the set of feasible governments could consist of all groups of individuals of size k0 (for some integer k0) or all groups of individuals of size greater than k1 and less than some other integer k2.
To simplify the discussion, we define k̄ = max_{G∈G} |G|, so that k̄ is the upper bound on the size of any feasible government: that is, for any G ∈ G, |G| ≤ k̄. It is natural to presume that k̄ < n/2. In each period, the society is ruled by one of the feasible governments Gᵗ ∈ G. The initial government G⁰ is given as part of the description of the game, and Gᵗ for t > 0 is determined in equilibrium as a result of the political process described below.

9. In particular, the results in Acemoglu, Egorov, and Sonin (2008) apply under a set of acyclicity conditions. Such acyclicity does not hold in the current paper (see Online Appendix B). This makes the general characterization of the structure of equilibria both more challenging and of some methodological interest.

The government in power at any date affects three aspects of the society:

1. It influences collective utilities (for example, by providing public goods or influencing how competently the government functions).
2. It determines individual utilities (members of the government may receive additional utility because of rents of being in office or corruption).
3. It indirectly influences the future evolution of governments by shaping the distribution of political power in the society (for example, by creating incumbency advantage in democracies or providing greater decision-making power or veto rights to members of the government under alternative political institutions).

We now describe each of these in turn.

The influence of the government on collective utilities is modeled via its competence. In particular, at each date t, there exists a function Γᵗ : G → R designating the competence of each feasible government G ∈ G (at that date). We refer to Γ_G^t ∈ R as government G’s competence, with the convention that higher values correspond to greater competence. In Section IV, we will assume that each individual has a certain level of competence or ability, and that the competence of a government is a function of the abilities of its members. For now, this additional assumption is not necessary. Note also that the function Γᵗ depends on time. This generality is introduced to allow for changes in the environment (in particular, changes in the relative competences of different individuals and governments).

Individual utilities are determined by the competence of the government that is in power at that date and by whether the individual in question is part of the government. More specifically, each individual i ∈ I at time τ has discounted (expected) utility given by

(1)    U_i^τ = E Σ_{t=τ}^∞ β^{t−τ} u_i^t,

where β ∈ (0, 1) is the discount factor and u_i^t is individual i’s stage payoff, given by

(2)    u_i^t = w_i(Gᵗ, Γ_{Gᵗ}^t) = w_i(Gᵗ),

where in the second equality we suppress dependence on Γ_{Gᵗ}^t to simplify notation; we will do this throughout unless special emphasis is necessary. Throughout, we impose the following assumptions on w_i.

ASSUMPTION 1. The function w_i satisfies the following properties:
1. For each i ∈ I and any G, H ∈ G such that Γ_G^t > Γ_H^t: if i ∈ G or i ∉ H, then w_i(G) > w_i(H).
2. For any G, H ∈ G and any i ∈ G \ H, w_i(G) > w_i(H).

Part 1 of this assumption is a relatively mild restriction on payoffs. It implies that all else equal, more competent governments give higher stage payoffs. In particular, if an individual belongs to both governments G and H, and G is more competent than H, then he or she prefers G. The same conclusion also holds when the individual is not a member of either of these two governments or is only a member of G (and not of H). Therefore, this part of the assumption implies that the only situation in which an individual may prefer a less competent government to a more competent one is when he or she is a member of the former but not of the latter. This simply captures the presence of rents from holding office or additional income from being in government due to higher salaries or corruption. The interesting interactions in our setup result from the “conflict of interest”: individuals prefer to be in the government even when this does not benefit the rest of the society. Part 2 of the assumption strengthens the first part and imposes the condition that this conflict of interest is always present; that is, individuals receive higher payoffs from governments that include them than from those that exclude them (regardless of the competence levels of the two governments). We impose both parts of this assumption throughout. It is important to note that Assumption 1 implies that all voters who are not part of the government care about a onedimensional government competence; this feature simplifies the analysis considerably. Nevertheless, the tractability of our framework makes it possible to enrich this environment by allowing other sources of disagreement or conflict of interest among voters, and we return to this issue in the Conclusions.


EXAMPLE 2. As an example, suppose that the competence of government G, Γ_G, is the amount of public good produced in the economy under feasible government G, and

(3)    w_i(G) = v_i(Γ_G) + b_i · I_{i∈G},

where v_i : R → R is a strictly increasing function (for each i ∈ I) corresponding to the utility from the public good for individual i, b_i is a measure of the rents that individual i obtains from being in office, and I_X is the indicator of event X. If b_i ≥ 0 for each i ∈ I, then (3) satisfies part 1 of Assumption 1. In addition, if b_i is sufficiently large for each i, then each individual prefers to be a member of the government, even if this government has a very low level of competence; thus part 2 of Assumption 1 is also satisfied.

Finally, the government in power influences the determination of future governments whenever the consent of some current government members is necessary for change. We represent the set of individuals (regular citizens and government members) who can, collectively, induce a change in government by specifying the set of winning coalitions, W_G, which is a function of the current government G (for each G ∈ G). This is an economical way of summarizing the relevant information, because the set of winning coalitions is precisely the set of subsets of the society that are able to force (or to block) a change in government. We impose only a minimal amount of structure on the set of winning coalitions.

ASSUMPTION 2. For any feasible government G ∈ G, W_G is given by

W_G = {X ∈ C : |X| ≥ m_G and |X ∩ G| ≥ l_G},

where l_G and m_G are integers satisfying 0 ≤ l_G ≤ |G| ≤ k̄ < m_G ≤ n − k̄ (recall that k̄ is the maximal size of the government and n is the size of the society).

The restrictions imposed in Assumption 2 are intuitive. In particular, they state that a new government can be instituted if it receives a sufficient number of votes from the entire society (m_G total votes) and if it receives support from a sufficient number of the members of the current government (l_G of the current government members need to support such a change). This definition allows l_G to be any number between 0 and |G|.
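As a quick illustration, the winning-coalition condition of Assumption 2 reduces to a two-line membership test (the numbers in the example calls below are hypothetical):

```python
def is_winning(X, G, m_G, l_G):
    """Assumption 2: coalition X can change government G iff X has at
    least m_G members in total and at least l_G members inside G."""
    X, G = set(X), set(G)
    return len(X) >= m_G and len(X & G) >= l_G

# Six citizens, incumbent government {4, 5, 6}, with m_G = 4 and l_G = 1:
print(is_winning({1, 2, 3, 4}, {4, 5, 6}, 4, 1))  # True: 4 votes, incumbent 4 consents
print(is_winning({1, 2, 3}, {4, 5, 6}, 4, 1))     # False: only 3 votes, no incumbent
```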
One special feature of Assumption 2 is that it does not relate the number of veto players in the current government, lG , to the total number of individuals in


the society who wish to change the government, m_G. This aspect of Assumption 2 can be relaxed without affecting our general characterization; we return to a discussion of this issue in the Conclusions. Given this notation, the case where there is no incumbency veto power, l_G = 0, can be thought of as perfect democracy, where current members of the government have no special power. The case where l_G = |G| can be thought of as extreme dictatorship, where unanimity among government members is necessary for any change. Between these extremes are imperfect democracies (or less strict forms of dictatorship), which may arise either because there is some form of (strong or weak) incumbency veto power in democracy or because current government (junta) members are able to block the introduction of a new government. In what follows, one might wish to interpret l_G as an inverse measure of the degree of democracy, though naturally this captures only one dimension of democratic regimes in practice. Note also that Assumption 2 imposes some mild restrictions on m_G. In particular, fewer than k̄ individuals are insufficient for a change to take place. This ensures that a rival government cannot take power without any support from other individuals (recall that k̄ denotes the maximum size of the government, so the rival government can have no more than k̄ members), and m_G ≤ n − k̄ individuals are sufficient to implement a change provided that l_G members of the current government are among them. For example, these requirements are naturally met when k̄ < n/2 and m_G = ⌊(n + 1)/2⌋ (i.e., majority rule).10

10. Recall also that ⌊x⌋ denotes the integer part of a real number x.

In addition to Assumptions 1 and 2, we also impose the following genericity assumption, which ensures that different governments have different competences. This assumption simplifies the notation and can be made without much loss of generality, because if it were not satisfied for a society, any small perturbation of competence levels would restore it.

ASSUMPTION 3. For any t ≥ 0 and any G, H ∈ G such that G ≠ H, Γ_G^t ≠ Γ_H^t.

III. POLITICAL EQUILIBRIA IN NONSTOCHASTIC ENVIRONMENTS

In this section, we focus on nonstochastic environments, where Γᵗ = Γ (or Γ_G^t = Γ_G for all G ∈ G). For these environments,


we introduce our equilibrium concept, (Markov) political equilibrium, and show that equilibria have a simple recursive characterization.11 We return to the more general stochastic environments in Section V. III.A. Political Equilibrium Our equilibrium concept, (Markov) political equilibrium, imposes that only transitions from the current government to a new government that increase the discounted utility of the members of a winning coalition will take place; and if no such transition exists, the current government will be stable (i.e., it will persist in equilibrium). The qualifier “Markov” is added because this definition implicitly imposes that transitions from the current to a new government depend on the current government—not on the entire history. To introduce this equilibrium concept more formally, let us first define the transition rule φ : G → G, which maps each feasible government G in power at time t to the government that would emerge in period t + 1.12 Given φ, we can write the discounted utility implied by (1) for each individual i ∈ I starting from the current government G ∈ G recursively as Vi (G | φ), given by (4)

Vi (G | φ) = wi (G) + βVi (φ(G) | φ) for all G ∈ G.

Intuitively, starting from G ∈ G, individual i ∈ I receives a current payoff of wi (G). Then φ (uniquely) determines the next period’s government φ(G), and thus the continuation value of this individual, discounted to the current period, is βVi (φ(G) | φ). A government G is stable given mapping φ if φ(G) = G. In addition, we say that φ is acyclic if for any (possibly infinite) chain H1 , H2 , . . . ⊂ G such that Hk+1 = φ(Hk), and any a < b < c, if Ha = Hc then Ha = Hb = Hc . Given (4), the next definition introduces the notion of a political equilibrium, which will be represented by the mapping φ provided that two conditions are met. 11. Throughout, we refer to this equilibrium concept as “political equilibrium” or simply as “equilibrium.” We do not use the acronym MPE, which will be used for the Markov perfect equilibrium of a noncooperative game in the Appendix. 12. In principle, φ could be set-valued, mapping from G into P(G) (the power set of G), but our analysis below shows that, thanks to Assumption 3, its image is always a singleton (i.e., it is a “function” rather than a “correspondence,” and also by implication, it is uniquely defined). We impose this assumption to simplify the notation.
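When φ is acyclic, the recursion in (4) can be solved by simply following the transition path until it reaches a stable government. The sketch below does this for a hypothetical two-government example; the payoff numbers and β are ours, chosen only for illustration:

```python
def value(i, G, phi, w, beta=0.95):
    """Solve V_i(G | phi) = w_i(G) + beta * V_i(phi(G) | phi) along an
    acyclic path: accumulate discounted payoffs until phi fixes G, then
    add the stable government's payoff as a perpetuity."""
    V, disc = 0.0, 1.0
    while phi[G] != G:
        V += disc * w(i, G)
        disc *= beta
        G = phi[G]
    return V + disc * w(i, G) / (1 - beta)

# Hypothetical example: unstable government 'H' transitions to stable 'S'.
members = {'H': {4, 5, 6}, 'S': {1, 2, 3}}
comp = {'H': 1.0, 'S': 2.0}          # competence component of the payoff
w = lambda i, G: comp[G] + (5.0 if i in members[G] else 0.0)  # office rent 5
phi = {'H': 'S', 'S': 'S'}

# Citizen 1 is outside 'H' but joins 'S', so the transition is valuable:
print(round(value(1, 'H', phi, w), 6))  # 1.0 + 0.95 * 7.0 / 0.05 = 134.0
```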


DEFINITION 1. A mapping φ : G → G is a (Markov) political equilibrium if for any G ∈ G, the following two conditions are satisfied:

i. either the set of players who prefer φ(G) to G (in terms of discounted utility) forms a winning coalition, that is, S = {i ∈ I : V_i(φ(G) | φ) > V_i(G | φ)} ∈ W_G (or, equivalently, |S| ≥ m_G and |S ∩ G| ≥ l_G); or else φ(G) = G;

ii. there is no alternative government H ∈ G that is preferred both to a transition to φ(G) and to staying in G permanently; that is, there is no H such that

S_H = {i ∈ I : V_i(H | φ) > V_i(φ(G) | φ)} ∈ W_G

and

S′_H = {i ∈ I : V_i(H | φ) > w_i(G)/(1 − β)} ∈ W_G

(alternatively, for any alternative H, either |S_H| < m_G, or |S_H ∩ G| < l_G, or |S′_H| < m_G, or |S′_H ∩ G| < l_G).

This definition states that a mapping φ is a political equilibrium if it maps the current government G to alternative φ(G), which (unless it coincides with G) must be preferred to G (taking continuation values into account) by a sufficient majority of the population and a sufficient number of current government members (in order not to be blocked). Note that in part (i), the set S can be equivalently written as S = {i ∈ I : Vi (φ(G) | φ) > wi (G)/(1 − β)}, because if this set is not a winning coalition, then φ(G) = G and thus Vi (G | φ) = wi (G)/(1 − β). Part (ii) of the definition requires that there does not exist another alternative H that would have been a “more preferable” transition; that is, there should be no H that is preferred both to a transition to φ(G) and to staying in G forever by a sufficient majority of the population and a sufficient number of current government members. The latter condition is imposed, because if there exists a winning coalition that prefers H to a transition to φ(G) but there is no winning coalition that prefers H to staying in G forever, then at each stage a move to H can be blocked. We use the definition of political equilibrium in Definition 1 in this and the next section. The advantage of this definition is its simplicity. A disadvantage is that it does not explicitly specify how offers for different types of transitions are made and the exact sequences of events at each stage. In the Appendix, we describe an infinite-horizon extensive-form game, where there is an explicit sequence in which proposals are made, votes are cast,


and transitions take place. We then characterize the MPEs of this dynamic game and show that they are equivalent to political equilibria as defined in Definition 1. Briefly, in this extensive-form game, any given government can be in either a sheltered or an unstable state. Sheltered governments cannot be challenged but become unstable with some probability. When the incumbent government is unstable, all individuals (according to a prespecified order) propose possible alternative governments. Primaries across these governments determine a challenger government, and then a vote between this challenger and the incumbent government determines whether there is a transition to a new government (depending on whether those in support of the challenger form a winning coalition according to Assumption 2). New governments start out as unstable and with some probability become sheltered. All votes are sequential. We prove that for a sufficiently high discount factor, the MPE of this game does not depend on the sequence in which proposals are made, the protocols for primaries, or the sequence in which votes are cast, and coincides with the political equilibria described by Definition 1. This result justifies our focus on the much simpler notion of political equilibrium in the text. The fact that new governments start out as unstable provides a justification for part (ii) of Definition 1, which requires that there not exist another alternative H that is “more preferable” than φ(G) and than staying in G forever; otherwise there would be an immediate transition to H.

III.B. General Characterization

We now prove the existence and provide a characterization of political equilibria. We start with a recursive characterization of the mapping φ described in Definition 1. Let us enumerate the elements of the set G as {G_1, G_2, . . . , G_|G|} such that Γ_{G_x} > Γ_{G_y} whenever x < y. With this enumeration, G_1 is the most competent (“best”) government, whereas G_|G| is the least competent government. In view of Assumption 3, this enumeration is well defined and unique. Now, suppose that for some q > 1, we have defined φ for all G_j with j < q. Define the set

(5)    M_q ≡ {j : 1 ≤ j < q, {i ∈ I : w_i(G_j) > w_i(G_q)} ∈ W_{G_q}, and φ(G_j) = G_j}.

Note that this set depends simply on stage payoffs in (2), not on the discounted utilities defined in (4), which are “endogenous” objects.

POLITICAL SELECTION AND BAD GOVERNMENTS

1527

This set can thus be computed easily from the primitives of the model (for each q). Given this set, let the mapping φ be

(6)   φ(Gq) = Gq if Mq = ∅;   φ(Gq) = Gmin{j∈Mq} if Mq ≠ ∅.

Because the set Mq is well defined, the mapping φ is also well defined, and by construction it is single-valued. Theorems 1 and 2 next show that, for sufficiently high discount factors, this mapping constitutes the unique acyclic political equilibrium and that, under additional mild conditions, it is also the unique political equilibrium (even considering possible cyclic equilibria). THEOREM 1. Suppose that Assumptions 1–3 hold and let φ : G → G be as defined in (6). Then there exists β0 < 1 such that for any discount factor β > β0 , φ is the unique acyclic political equilibrium. Proof. See the Appendix.
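As the text notes, the construction in (5) and (6) is directly computable from stage payoffs and winning coalitions. The following minimal sketch in Python (ours, not the authors'; the payoff function w, the winning-coalition test, and the competence function are supplied by the caller as assumptions) implements the enumeration and the mapping φ:

```python
def compute_phi(individuals, governments, w, winning, competence):
    """Compute the mapping phi of equation (6) from the model primitives.

    w(i, G): stage payoff of individual i under government G (equation (2)).
    winning(X, G): whether coalition X is winning while G is in power.
    competence(G): competence of G, distinct across governments (Assumption 3).
    """
    # Enumerate G_1, G_2, ... in strictly decreasing order of competence.
    order = sorted(governments, key=competence, reverse=True)
    phi = {}
    for q, Gq in enumerate(order):
        # M_q of equation (5): indices of more competent *stable* governments
        # that a winning coalition (judged from G_q) prefers to G_q.
        Mq = [j for j in range(q)
              if phi[order[j]] == order[j]
              and winning({i for i in individuals
                           if w(i, order[j]) > w(i, Gq)}, Gq)]
        phi[Gq] = order[min(Mq)] if Mq else Gq   # equation (6)
    return phi

# A toy instance (ours, not from the paper): three singleton governments,
# no incumbency veto power (l = 0), vote threshold m = 2, and an assumed
# large office rent in the stage payoffs.
individuals = [1, 2, 3]
governments = [frozenset({i}) for i in individuals]
competence = lambda G: 4 - min(G)                # {1} is the best government
w = lambda i, G: competence(G) + 100 * (i in G)  # assumed office rent
winning = lambda X, G: len(X) >= 2               # perfect democracy
phi = compute_phi(individuals, governments, w, winning, competence)
# With l = 0, every government maps to the best one (cf. part 2 of
# Proposition 1 below).
```

Because Mq is computed from stage payoffs alone, the whole mapping is obtained in a single pass over governments ordered by competence, as the surrounding text emphasizes.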

Let us now illustrate the intuition for why the mapping φ constitutes a political equilibrium. Recall that G1 is the most competent ("best") government. It is clear that we must have φ(G1) = G1, because all members of the population who are not in G1 will prefer it to any other G′ ∈ G (from Assumption 1). Assumption 2 then ensures that there will not be a winning coalition in favor of a permanent move to G′. However, G′ itself may not persist, and it may eventually lead to some alternative government G′′ ∈ G. But in this case, we can apply this reasoning to G′′ instead of G′, and thus the conclusion φ(G1) = G1 applies. Next suppose we start with government G2 in power. The same argument applies if G′ is any one of G3, G4, . . . , G|G|. One of these may eventually lead to G1; thus for sufficiently high discount factors, a sufficient majority of the population may support a transition to such a G′ in order eventually to reach G1. However, discounting also implies that in this case, a sufficient majority would also prefer a direct transition to G1 to this dynamic path (recall part (ii) of Definition 1). So the relevant choice for the society is between G1 and G2. In this comparison, G1 will be preferred if it has sufficiently many supporters, that is, if the set of individuals preferring G1 to G2 is a winning coalition within G2, or more formally if {i ∈ I : wi(G1) > wi(G2)} ∈ WG2.

If this is the case, φ(G2) = G1; otherwise, φ(G2) = G2. This is exactly what the function φ defined in (6) stipulates. Now let us start from government G3. We then only need to consider the choice among G1, G2, and G3. To move to G1, it suffices that a winning coalition within G3 prefers G1 to G3.13 However, whether the society will transition to G2 depends on the stability of G2. In particular, we may have a situation in which G2 is not a stable government, which, by necessity, implies that φ(G2) = G1. Then a transition to G2 will lead to a permanent transition to G1 in the next period. However, this sequence may be undesirable for some of those who prefer to move to G2. In particular, there may exist a winning coalition in G3 that prefers to stay in G3 rather than to transit permanently to G1 (and as a consequence, there is no winning coalition that prefers such a transition), even though there also exists a winning coalition in G3 that would have preferred a permanent move to G2. Writing this more explicitly, we may have {i ∈ I : wi(G2) > wi(G3)} ∈ WG3, but {i ∈ I : wi(G1) > wi(G3)} ∉ WG3. If so, the transition from G3 to G2 may be blocked with the anticipation that it will lead to G1, which does not receive the support of a winning coalition within G3. This reasoning illustrates that for a transition to take place, not only should the target government be preferred to the current one by a winning coalition (starting from the current government), but also the target government should be "stable," that is, φ(G′) = G′. This is exactly the requirement in (6). In this light, the intuition for the mapping φ, and thus for Theorem 1, is that a government G will persist in equilibrium (will be stable) if there does not exist another stable government receiving support from a winning coalition (a sufficient majority of the population and the required number of current members of the government).
Theorem 1 states that φ in (6) is the unique acyclic political equilibrium. However, it does not rule out cyclic equilibria. We provide an example of a cyclic equilibrium in Example 11 in the

13. If some winning coalition also prefers G2 to G3, then G1 should still be chosen over G2, because only members of G2 who do not belong to G1 prefer G2 to G1, and Assumption 2 ensures that those preferring G1 over G2 (starting in G3) also form a winning coalition. Then a transition to G2 is ruled out by part (ii) of Definition 1.

Appendix. Cyclic equilibria are unintuitive and "fragile." We next show that they can also be ruled out under a variety of relatively weak assumptions. The next theorem thus strengthens Theorem 1 so that φ in (6) is the unique equilibrium (among both cyclic and acyclic ones).

THEOREM 2. The mapping φ defined in (6) is the unique political equilibrium (and hence, in the light of Theorem 1, any political equilibrium is acyclic) if any of the following conditions holds:

1. For any G ∈ G, |G| = k, lG = l, and mG = m for some k, l, and m.
2. For any G ∈ G, lG ≥ 1.
3. For any collection of distinct feasible governments H1, . . . , Hq ∈ G (for q ≥ 2) and for all i ∈ I, we have wi(H1) ≠ (1/q) Σp=1,…,q wi(Hp).
4. θ > ε · |G|, where θ ≡ min{i∈I and G,H∈G: i∈G\H} {wi(G) − wi(H)} and ε ≡ max{i∈I and G,H∈G: i∈G∩H} {wi(G) − wi(H)}.

Proof. See Online Appendix B.

This theorem states four relatively mild conditions under which there are no cyclic equilibria (thus making φ in (6) the unique equilibrium). First, if all feasible governments have the same size, k, the same degree of incumbency veto power, l, and the same threshold for the required number of total votes for change, m, then all equilibria must be acyclic and thus φ in (6) is the unique political equilibrium. Second, the same conclusion applies if we always need the consent of at least one member of the current government for a transition to a new government. These two results imply that cyclic equilibria are only possible if, starting from some governments, there is no incumbency veto power, and either the degree of incumbency veto power or the vote threshold differs across governments. The third part of the theorem shows that there are also no cyclic political equilibria under a mild restriction on payoffs (which is a slight strengthening of Assumption 3 and holds generically,14 meaning that if it did not hold, a small perturbation of payoff functions would restore it). Finally, the fourth part of the theorem provides a condition on preferences that also rules out cyclic equilibria. In particular, this condition states that if each individual receives sufficiently high

14. This requirement is exactly the same as Assumption 3′, which we impose in the Appendix in the analysis of the extensive-form game.

utility from being in government (greater than θ) and does not care much about the composition of the rest of the government (the difference in his or her utility between any two governments including him or her is always less than ε), then all equilibria must be acyclic. In the Appendix, we show (Example 11) how a cyclic political equilibrium is possible if none of the four sufficient conditions in Theorem 2 holds.

IV. CHARACTERIZATION OF NONSTOCHASTIC TRANSITIONS

IV.A. Main Results

We now compare different political regimes in terms of their ability to select governments with high levels of competence. To simplify the exposition and focus on the more important interactions, we assume that all feasible governments have the same size, k ∈ N, where k < n/2. More formally, let us define Ck = {Y ∈ C : |Y| = k}. Then G = Ck. In addition, we assume that for any G ∈ G, lG = l ∈ N and mG = m ∈ N, so that the set of winning coalitions can be simply expressed as (7)

WG = {X ∈ C : |X| ≥ m and |X ∩ G| ≥ l},

where 0 ≤ l ≤ k < m ≤ n − k. If l = 0, then all individuals have equal weight and there is no incumbency veto power; thus we have a perfect democracy. In contrast, if l > 0, the consent of some of the members of the government is necessary for a change; thus there is some incumbency veto power. We have thus strengthened Assumption 2 to the following.

ASSUMPTION 2′. We have that G = Ck, and there exist integers l and m such that the set of winning coalitions is given by (7).

In view of part 1 of Theorem 2, Assumption 2′ ensures that the acyclic political equilibrium φ given by (6) is the unique equilibrium; naturally, we will focus on this equilibrium throughout the rest of the analysis. In addition, given this additional structure, the mapping φ can be written in a simpler form. Recall that governments are still ranked according to their level of competence, so that G1 denotes the most competent government. Then

we have

(8)   Mq = { j : 1 ≤ j < q, |Gj ∩ Gq| ≥ l, and φ(Gj) = Gj },

and, as before,

(9)   φ(Gq) = Gq if Mq = ∅;   φ(Gq) = Gmin{j∈Mq} if Mq ≠ ∅.

Naturally, the mapping φ is again well defined and unique. Finally, let us also define D = {G ∈ G : φ(G) = G} as the set of stable governments (the fixed points of the mapping φ). If G ∈ D, then φ(G) = G, and this government will persist forever if it is the initial government of the society. We now investigate the structure of stable governments and how it changes as a function of the underlying political institutions—in particular, the extent of incumbency veto power, l. Throughout this section, we assume that Assumptions 1, 2′, and 3 hold, and we do not add these qualifiers to any of the propositions to economize on space. Our first proposition provides an important technical result (part 1). It then uses this result to show that perfect democracy (l = 0) ensures the emergence of the best (most competent) government, but any departure from perfect democracy destroys this result and enables the emergence of highly incompetent/inefficient governments. It also shows that extreme dictatorship (l = k) makes all initial governments stable, regardless of how low their competence may be.

PROPOSITION 1. The set of stable feasible governments D satisfies the following properties.

1. If G, H ∈ D and |G ∩ H| ≥ l, then G = H. In other words, any two distinct stable governments may have at most l − 1 common members.
2. Suppose that l = 0. Then D = {G1}. In other words, starting from any initial government, the society will transit to the most competent government.
3. Suppose l ≥ 1. Then there are at least two stable governments; that is, |D| ≥ 2. Moreover, the least competent governments may be stable.

4. Suppose l = k. Then D = G, so every feasible government is stable. Proof. See Online Appendix B.

Proposition 1 shows the fundamental contrast between perfect democracy, where incumbents have no veto power, and other political institutions, which provide some additional power to "insiders" (current members of the government). Perfect democracy leads to the formation of the best government. With any deviation from perfect democracy, there will necessarily exist at least one other stable government (by definition less competent than the best), and even the worst government might be stable. The next example supplements Example 1 from the Introduction by showing a richer environment in which the least competent government is stable.

EXAMPLE 3. Suppose n = 9, k = 3, l = 1, and m = 5, so that a change in government requires support from a simple majority of the society, including at least one member of the current government. Suppose that I = {1, 2, . . . , 9}, and that stage payoffs are given by (3) in Example 2. Assume also that Γ{i1,i2,i3} = 1000 − 100i1 − 10i2 − i3 (for i1 < i2 < i3). This implies that {1, 2, 3} is the most competent government, and is therefore stable. Any other government that includes 1 or 2 or 3 is unstable. For example, the government {2, 5, 9} will transit to {1, 2, 3}, as all individuals except 5 and 9 prefer the latter. However, government {4, 5, 6} is stable: any government that is more competent must include 1 or 2 or 3, and therefore either is {1, 2, 3} or will immediately transit to {1, 2, 3}, which means that any such transition will not receive support from any of the members of {4, 5, 6}. Now, proceeding inductively, we find that any government other than {1, 2, 3} and {4, 5, 6} that contains at least one of individuals 1, 2, . . . , 6 is unstable. Consequently, government {7, 8, 9}, which is the least competent government, is stable.
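Example 3 can be checked mechanically with the recursion (8)-(9). The sketch below is ours, not the authors': since equation (3) from Example 2 is not reproduced in this excerpt, we assume stage payoffs of the form wi(G) = ΓG plus a large rent from holding office, which preserves the preference rankings the example uses.

```python
from itertools import combinations

# Parameters of Example 3.
n, k, l, m = 9, 3, 1, 5
individuals = set(range(1, n + 1))
governments = [frozenset(c) for c in combinations(individuals, k)]

def Gamma(G):
    # Competence as specified in Example 3.
    i1, i2, i3 = sorted(G)
    return 1000 - 100 * i1 - 10 * i2 - i3

# Assumed payoff form (equation (3) is not shown here): competence plus
# a large office rent for members.
RENT = 10_000
def w(i, G):
    return Gamma(G) + (RENT if i in G else 0)

def winning(X, G):
    # Assumption 2': at least m votes, at least l of them from incumbents.
    return len(X) >= m and len(X & G) >= l

# Recursion (8)-(9): enumerate governments in decreasing competence.
order = sorted(governments, key=Gamma, reverse=True)
phi = {}
for q, Gq in enumerate(order):
    Mq = [j for j in range(q)
          if phi[order[j]] == order[j]
          and winning({i for i in individuals
                       if w(i, order[j]) > w(i, Gq)}, Gq)]
    phi[Gq] = order[min(Mq)] if Mq else Gq

stable = {G for G in governments if phi[G] == G}
```

Under these assumed payoffs, the computation reproduces the example's conclusion: the stable set is exactly {1, 2, 3}, {4, 5, 6}, and {7, 8, 9}, and {2, 5, 9} transits to {1, 2, 3}.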
Proposition 1 establishes that under any regime other than perfect democracy, there will necessarily exist stable inefficient/incompetent governments, and these may in fact have quite low levels of competence. It does not, however, provide a characterization of when highly incompetent governments will be stable. We next provide a systematic answer to this question, focusing on societies with large numbers of individuals (i.e., n large).

Before doing so, we introduce an assumption that will be used in the third part of the next proposition and in later results. In particular, in what follows we will sometimes suppose that each individual i ∈ I has a level of ability (or competence) given by γi ∈ R+ and that the competence of the government is a strictly increasing function of the abilities of its members. This is more formally stated in the next assumption.

ASSUMPTION 4. Suppose G ∈ G, and individuals i, j ∈ I are such that i ∈ G, j ∉ G, and γi ≥ γj. Then ΓG ≥ Γ(G\{i})∪{j}.

The canonical form of the competence function consistent with Assumption 4 is

(10)   ΓG = Σi∈G γi,

though for most of our analysis, we do not need to impose this specific functional form. Assumption 4 is useful because it enables us to rank individuals in terms of their "abilities." This ranking is strict, because Assumptions 3 and 4 together imply that γi ≠ γj whenever i ≠ j. When we impose Assumption 4, we also enumerate individuals according to their abilities, so that γi > γj whenever i < j. The next proposition shows that for societies above a certain threshold of size (as a function of k and l), there always exist stable governments that contain no member of the ideal government and no member of any group of certain prespecified sizes (thus, no member of a group that would generate a range of potentially high-competence governments). Then, under Assumption 4, it extends this result, providing a bound on the percentile of the ability distribution such that there exist stable governments that do not include any individuals with competences above this percentile.

PROPOSITION 2. Suppose l ≥ 1 (and, as before, that Assumptions 1, 2′, and 3 hold).

1. If (11)

n ≥ 2k + k(k − l)(k − 1)!/((l − 1)!(k − l)!),

then there exists a stable government G ∈ D that contains no members of the ideal government G1 .

2. Take any x ∈ N. If (12)

n ≥ k + x + x(k − l)(k − 1)!/((l − 1)!(k − l)!),

then for any set of individuals X with |X| ≤ x, there exists a stable government G ∈ D such that X ∩ G = ∅ (so no member of set X belongs to G). 3. Suppose in addition that Assumption 4 holds and let (13)

ρ = 1/(1 + (k − l)(k − 1)!/((l − 1)!(k − l)!)).

Then there exists a stable government G ∈ D that does not include any of the ρn highest-ability individuals. Proof. See Online Appendix B.
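The combinatorial bounds in Proposition 2 are easy to evaluate numerically; note that (k − 1)!/((l − 1)!(k − l)!) is the binomial coefficient C(k − 1, l − 1). A small helper (ours, a sketch of the formulas as reconstructed above):

```python
from math import comb

def n_bound_part1(k, l):
    """Equation (11): population size sufficient for the existence of a
    stable government containing no member of the ideal government G1."""
    return 2 * k + k * (k - l) * comb(k - 1, l - 1)

def n_bound_part2(k, l, x):
    """Equation (12): population size sufficient for a stable government
    avoiding any prespecified set X with |X| <= x."""
    return k + x + x * (k - l) * comb(k - 1, l - 1)

def rho(k, l):
    """Equation (13): guaranteed fraction of top-ability individuals that
    some stable government excludes (as reconstructed above)."""
    return 1 / (1 + (k - l) * comb(k - 1, l - 1))

# For k = 3, l = 1 (the setting of Example 3), bound (11) gives n >= 12.
# The bound is sufficient, not necessary: Example 3 already exhibits such
# a stable government ({4, 5, 6}) at n = 9.
```

The bounds grow only linearly in x, which is the point of footnote 15 below.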

Let us provide the intuition for part 1 of Proposition 2 when l = 1. Recall that G1 is the most competent government. Let G be the most competent government among those that do not include members of G1 (such a G exists, because n > 2k by assumption). In this case, Proposition 2 implies that G is stable; that is, G ∈ D. The reason is that if φ(G) = H ≠ G, then ΓH > ΓG, and therefore H ∩ G1 contains at least one element by construction of G. But then φ(H) = G1, as implied by (9). Intuitively, if l = 1, then once the current government contains a member of the most competent government G1, this member will consent to (support) a transition to G1, which will also receive the support of the population at large. He or she can do so because G1 is stable, and thus there is no threat that further rounds of transitions will harm him or her. But then, as in Example 1 in the Introduction, G itself becomes stable, because any reform away from G will take us to an unstable government. Part 2 of the proposition has a similar intuition, but it states the stronger result that one can choose any subset of the society with size not exceeding the threshold defined in (12) such that there exist stable governments that do not include any member of this subset (which may be taken to include several of the most competent governments).15 Finally, part 3, which follows immediately from part 2 under Assumption 4, further strengthens both parts 1 and 2 of this proposition and also parts 3 and

15. Note that the upper bound on |X| in part 2 of Proposition 2 is O(x), meaning that increasing x does not require an exponential increase in the size of the population n for Proposition 2 to hold.

4 of Proposition 1: it shows that there exist stable governments that do not include a certain fraction of the highest-ability individuals. Interestingly, this fraction, given in (13), is nonmonotonic in l, reaching its maximum at l = k/2, that is, for an intermediate level of incumbency veto power. This partly anticipates the results pertaining to the relative success of different regimes in selecting more competent governments, which we discuss in the next proposition. Before providing a more systematic analysis of the relationship between political regimes and the quality of governments, we first extend Example 1 from the Introduction to show that, starting with the same government, the long-run equilibrium government may be worse when there is less incumbency veto power (as long as we are not in a perfect democracy).

EXAMPLE 4. Take the setup from Example 3 (n = 9, k = 3, l = 1, and m = 5), and suppose that the initial government is {4, 5, 6}. As we showed there, government {4, 5, 6} is stable and will therefore persist. Suppose, however, that l = 2 instead. In that case, {4, 5, 6} is unstable and φ({4, 5, 6}) = {1, 4, 5}; thus there will be a transition to {1, 4, 5}. Because {1, 4, 5} is more competent than {4, 5, 6}, this is an example where the long-run equilibrium government is worse under l = 1 than under l = 2. Note that if l = 3, {4, 5, 6} would be stable again.

When either k = 1 or k = 2, the structure of stable governments is relatively straightforward. (Note that in this proposition, and in the examples that follow, a, b, or c denote the indices of individuals, with our ranking in which lower-ranked individuals have higher ability; thus γa > γb whenever a < b.)

PROPOSITION 3. Suppose that Assumptions 1, 2′, 3, and 4 hold.

1. Suppose that k = 1. If l = 0, then φ(G) = G1 = {1} for any G ∈ G. If l = k = 1, then φ(G) = G for any G ∈ G.
2. Suppose that k = 2. If l = 0, then φ(G) = G1 = {1, 2} for any G ∈ G.
If l = 1, then if G = {a, b} with a < b, we have φ(G) = {a − 1, a} when a is even and φ(G) = {a, a + 1} when a is odd; in particular, φ(G) = G if and only if a is odd and b = a + 1. If l = 2, then φ(G) = G for any G ∈ G.

Proof. See Online Appendix B.
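Part 2 of Proposition 3 can be verified by brute force with the recursion (8)-(9). The instance below is ours: abilities γi = 2^(n−i) (so that all government competences are distinct, consistent with Assumption 3), the canonical competence function (10), and an assumed large office rent in the stage payoffs.

```python
from itertools import combinations

n, k, l, m = 8, 2, 1, 5                              # assumed instance (ours)
gamma = {i: 2 ** (n - i) for i in range(1, n + 1)}   # gamma_1 > ... > gamma_n
governments = [frozenset(c) for c in combinations(range(1, n + 1), k)]

def Gamma(G):
    return sum(gamma[i] for i in G)                  # canonical form (10)

RENT = 10 ** 6                                       # assumed office rent
def w(i, G):
    return Gamma(G) + (RENT if i in G else 0)

def winning(X, G):
    return len(X) >= m and len(X & G) >= l           # Assumption 2'

order = sorted(governments, key=Gamma, reverse=True)
phi = {}
for q, Gq in enumerate(order):
    Mq = [j for j in range(q)
          if phi[order[j]] == order[j]
          and winning({i for i in range(1, n + 1)
                       if w(i, order[j]) > w(i, Gq)}, Gq)]
    phi[Gq] = order[min(Mq)] if Mq else Gq

# Closed form of part 2: phi({a, b}) = {a - 1, a} for a even,
# {a, a + 1} for a odd (so G is stable iff a is odd and b = a + 1).
for G in governments:
    a = min(G)
    predicted = frozenset({a - 1, a}) if a % 2 == 0 else frozenset({a, a + 1})
    assert phi[G] == predicted
```

The loop confirms the closed form for all 28 two-member governments in this instance, illustrating the "only incremental improvement" property discussed next.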

Proposition 3, though simple, provides an important insight into the structure of stable governments that will be further exploited in the next section. When k = 2 and l = 1, the competence of the stable government is determined by the more able of the two members of the initial government. This means that, with rare exceptions, the quality of the initial government will improve to some degree; that is, typically Γφ(G) > ΓG. However, this increase is generally limited; when G = {a, b} with a < b, φ(G) = {a − 1, a} or φ(G) = {a, a + 1}, so that at best the next-higher-ability individual is added to the initial government instead of the lower-ability member. Therefore, summarizing these three cases, we can say that with a perfect democracy, the best government will arise; with an extreme dictatorship, there will be no improvement in the initial government; and in between, there will be some limited improvements in the quality of the government. When k ≥ 3, the structure of stable governments is more complex, though we can still develop a number of results about and insights into the structure of such governments. Naturally, the extremes with l = 0 and l = k are again straightforward. If l = 1 and the initial government is G = {a, b, c}, where a < b < c, then we can show that members ranked above a − 2 will never become members of the stable government φ(G), and the most competent member of G, a, is always a member of the stable government φ(G).16 Therefore, again with l = 1, only incremental improvements in the quality of the initial government are possible. This ceases to be the case when l = 2. In this case, it can be shown that whenever G = {a, b, c}, where a + b < c, φ(G) ≠ G; instead φ(G) = {a, b, d}, where d < c and, in fact, d ≪ a is possible. This implies a potentially very large improvement in the quality of the government (contrasting with the incremental improvements in the case where l = 1). Loosely speaking, the presence of two veto players when l = 2 allows the initial government to import very high-ability individuals without compromising stability. The next example illustrates this feature, which is at the root of the result highlighted in Example 4, whereby lesser incumbency veto power can lead to worse stable governments.
Loosely speaking, the presence of two veto players when l = 2 allows the initial government to import veryhigh-ability individuals without compromising stability. The next example illustrates this feature, which is at the root of the result highlighted in Example 4, whereby lesser incumbency veto power can lead to worse stable governments. 16. More specifically, government G = {a, b, c}, where a < b < c, is stable if and only if a = b − 1 = c − 2, and c is a multiple of 3. Moreover, for any government G = {a, b, c} with a < b < c, φ(G) = {a − 2, a − 1, a} if a is a multiple of 3, φ(G) = {a − 1, a, a + 1} if a + 1 is a multiple of 3, and φ(G) = {a, a + 1, a + 2} if a + 2 is a multiple of 3.

EXAMPLE 5. Suppose k = 3, and first take the case where l = 1. Suppose G = {100, 101, 220}, meaning that the initial government consists of individuals ranked 100, 101, and 220 in terms of ability. Then φl=1 (G) = {100, 101, 102} so that the third member of the government is replaced, but the highest and the second-highest-ability members are not. More generally, recall that only very limited improvements in the quality of the highest-ability member are possible in this case. Suppose instead that l = 2. Then it can be shown that φl=2 (G) = {1, 100, 101}, so that now the stable government includes the most able individual in the society. Naturally if the gaps in ability at the top of the distribution are larger, implying that highest-ability individuals have a disproportionate effect on government competence, this feature becomes particularly valuable. The following example extends the logic of Example 5 to any distribution and shows how expected competence may be higher under l = 2 than l = 1, and in fact, this result may hold under any distribution over initial (feasible) governments. EXAMPLE 6. Suppose k = 3, and fix a (any) probability distribution over initial governments with full support (i.e., with a positive probability of picking any initial feasible government). Assume that of players i1 , . . . , in, the first q (where q is a multiple of 3 and 3 ≤ q < n − 3) are “smart,” whereas the rest are “incompetent,” so that governments that include at least one of players i1 , . . . , iq will have very high competence relative to governments that do not. Moreover, differences in competence among governments that include at least one of the players i1 , . . . , iq and also among those that do not are small relative to the gap between the two groups of governments. 
Then it can be shown that the expected competence of the stable government φl=2 (G) (under l = 2) is greater than that of φl=1 (G) (under l = 1)—both expectations are evaluated according to the probability distribution fixed at the outset. This is intuitive in view of the structure of stable governments under the two political regimes. In particular, if G includes at least one of i1 , . . . , iq , so do φl=1 (G) and φl=2 (G). But if G does not, then φl=1 (G) will not include them either, whereas φl=2 (G) will include one with positive probability, because the presence of two veto players will allow the incorporation of one of the “smart” players without destabilizing the government.

Conversely, suppose that ΓG is very high if all of G's players are from {i1, . . . , iq}, and very low otherwise. In that case, the expected competence of φ(G) will be higher under l = 1 than under l = 2. Indeed, if l = 1, the society will end up with a competent government if at least one of the players is from {i1, . . . , iq}, whereas if l = 2, because there are now two veto players, there need to be at least two "smart" players for a competent government to form (though, when l = 2, this is not sufficient to guarantee the emergence of a competent government either). Examples 5 and 6 illustrate a number of important ideas. With greater incumbency veto power, in these examples with l = 2, a greater number of governments near the initial government are stable, and thus there is a higher probability of improvement in the competence of some of the members of the initial government. In contrast, with less incumbency veto power, in these examples with l = 1, fewer governments near the initial one are stable; thus incremental improvements are more likely. Consequently, when including a few high-ability individuals in the government is very important, regimes with greater incumbency veto power perform better; otherwise, regimes with less incumbency veto power perform better.17 Another important implication of these examples is that the situations in which regimes with greater incumbency veto power may perform better are not confined to some isolated instances. This feature applies to a broad class of configurations and to expected competences evaluated by taking uniform or nonuniform distributions over initial feasible governments. Nevertheless, we will see that in stochastic environments, there will be a distinct advantage to political regimes with less incumbency veto power or "greater degrees of democracy." The intuition for this phenomenon will also be illustrated using Examples 5 and 6.

IV.B.
Royalty-Type Regimes

We have so far focused on political institutions that are "junta-like" in the sense that no specific member is indispensable. In such an environment, the incumbency veto power takes the form of the requirement that some members of the current government must

17. In the former case, this effect is nonmonotonic, because the perfect democracy, l = 0, always performs best.

consent to change. The alternative is a “royalty-like” environment where one or several members of the government are irreplaceable (i.e., correspond to “individual veto players” in terms of Tsebelis’s [2002] terminology). This can be conjectured to be a negative force, because it would mean that a potentially low-ability person must always be part of the government. However, because such an irreplaceable member (the member of “royalty”) is also unafraid of changes, better governments may be more likely to arise under certain circumstances, whereas, as we have seen, junta members would resist certain changes because of the further transitions that these would unleash. Let us change Assumption 2 and the structure of the set of winning coalitions WG to accommodate royalty-like regimes. We assume that there are l royalty whose votes are always necessary for a transition to be implemented (regardless of whether they are current government members). We denote the set of these individuals by Y . So the new set of winning coalitions becomes WG = {X ∈ C : |X| ≥ m and Y ⊂ X}. We also assume that all members of royalty are in the initial government; that is, Y ⊂ G0 . Note that the interpretation of the parameter l is now different from what it was for junta-like regimes. In particular, in junta-like regimes, l measured the incumbency veto power and could be considered an inverse measure of (one dimension of) the extent of democracy. In contrast, in the case of royalty, l = 1 corresponds to a one-person dictatorship, whereas l > 1 could be thought of as a “more participatory” regime. The next proposition compares royalty-like and junta-like institutions in terms of the expected competence of the equilibrium government, where, as in Example 6, the expectation is taken with respect to any full support probability distribution over the composition of the initial government. PROPOSITION 4. 
Consider a royalty-like regime with 1 ≤ l < k such that l royals are never removed from the government. Suppose that the competence of governments is given by (10). Let γi be the ability of the ith most able person in society, so γi > γj for i < j. If {γ1, . . . , γn} is sufficiently "convex," meaning that (γ1 − γ2)/(γ2 − γn) is sufficiently large, then the expected competence of the government under the royalty system is greater than under the original junta-like system (with the same l). The opposite conclusion holds if (γ1 − γn−1)/(γn−1 − γn) is sufficiently low.

Proof. See Online Appendix B.
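The only formal change relative to the junta-like setting is the winning-coalition structure. A minimal side-by-side sketch (ours) of the two coalition tests stated in the text:

```python
def winning_junta(X, G, m, l):
    """Assumption 2': at least m total votes, of which at least l come
    from members of the incumbent government G."""
    return len(X) >= m and len(X & G) >= l

def winning_royalty(X, Y, m):
    """Royalty-like regime: at least m total votes, plus the unanimous
    consent of the set of royals Y, whether or not the royals currently
    hold government posts."""
    return len(X) >= m and Y <= X
```

Note that the royals' consent is required regardless of the composition of the current government; this is why a royal, being irreplaceable, can safely support competence-improving transitions that a junta member would block.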

Proposition 4 shows that royalty-like regimes achieve better expected performance than junta-like regimes provided that {γ1, . . . , γn} is highly "convex" (such convexity implies that the benefit to society from having the highest-ability individual in government is relatively high). As discussed above, juntas are unlikely to lead to such high-quality governments because of the fear of a change leading to a further round of changes, which would exclude all initial members of the junta. Royalty-like regimes avoid this fear. Nevertheless, royalty-like regimes have a disadvantage in that the ability of royals may be very low (or, in a stochastic environment, may become very low at some point), and the royals will always be part of the government. In this sense, royalty-like regimes create a clear disadvantage. However, this result shows that when {γ1, . . . , γn} is sufficiently convex (to outweigh the loss of expected competence because of the presence of a potentially low-ability royal), expected competence is nonetheless higher under the royalty-like system. This result is interesting because it suggests that different types of dictatorships may have distinct implications for the long-term quality of government and performance, and regimes that provide security to certain members of the incumbent government may be better at dealing with changes and at ensuring relatively high-quality governments in the long run. This proposition also highlights that, in contrast to existing results on veto players, a regime with individual veto players (members of royalty) can be less stable and more open to change. In particular, a junta-like regime with l > 0 has no individual veto players in the sense of Tsebelis (2002), whereas a royalty-like regime with the same l has such veto players, and Proposition 4 shows that the latter can lead to greater changes in the composition of the government.18

V.
EQUILIBRIA IN STOCHASTIC ENVIRONMENTS In this section, we introduce stochastic shocks to the competence of different coalitions (or different individuals) in order to study the flexibility of different political institutions in their 18. If, instead, we assume that β is close to zero or that the players are myopic (as discussed in Example 12), then individual veto players always increase stability and reduce change in the composition of the government, as in the existing literature. This shows the importance of dynamic considerations in the analysis of changes in the structure of government and the impact of veto players.

POLITICAL SELECTION AND BAD GOVERNMENTS


ability to adapt the nature and the composition of the government to changes in the underlying environment. Changes in the nature and structure of "high-competence" governments may result from changes in the economic, political, or social environment, which may in turn require different types of government to deal with newly emerging problems. Our main results in this section establish the relationship between the extent of incumbency veto power (one aspect of the degree of democracy) and the flexibility to adapt to changing environments (measured by the probability that the most competent government will come to power).

Changes in the environment are modeled succinctly by allowing changes in the function Γ^t : G → R, which determines the competence associated with each feasible government. Formally, we assume that at each t, with probability 1 − δ, there is no change in Γ^t from Γ^{t−1}, and with probability δ, there is a shock and Γ^t may change. In particular, following such a shock, we assume that there exists a set of distribution functions F(Γ^t | Γ^{t−1}) that gives the conditional distribution of Γ^t at time t as a function of Γ^{t−1}.

The characterization of political equilibria in this stochastic environment is a challenging task in general. However, when δ is sufficiently small, so that the environment is stochastic but subject to infrequent changes, the structure of equilibria is similar to that in Theorem 1. We will exploit this characterization to illustrate the main implications of stochastic shocks for the selection of governments. In the rest of this section, we first generalize our definition of (Markov) political equilibrium to this stochastic environment and generalize Theorems 1 and 2 (for small δ). We then provide a systematic characterization of political transitions in this stochastic environment and illustrate the links between incumbency veto power and institutional flexibility.

V.A. Stochastic Political Equilibria

The structure of stochastic political equilibria is complicated in general because individuals need to consider the implications of current transitions for future transitions under a variety of scenarios. Nevertheless, when the likelihood of stochastic shocks, δ, is sufficiently small, as we have assumed here, political equilibria must follow a logic similar to that in Definition 1 in Section III. Motivated by this reasoning, we introduce a similar definition of stochastic political equilibria (with infrequent shocks). Online Appendix B establishes that when the discount


factor is high and stochastic shocks are sufficiently infrequent, the MPE of the explicit-form game outlined there and our notion of (stochastic) political equilibrium are indeed equivalent.

To introduce the notion of (stochastic Markov) political equilibrium, let us first consider a set of mappings φ_Γ : G → G defined as in (6), but now separately for each configuration of competences Γ = {Γ_G}_{G∈G}. These mappings are indexed by Γ to emphasize this dependence. Essentially, if the configuration Γ applied forever, we would be in a nonstochastic environment, and φ_Γ would be the equilibrium transition rule, or simply the political equilibrium, as shown by Theorems 1 and 2. The idea underlying our definition for this stochastic environment with infrequent changes is that when the current configuration is Γ, φ_Γ will still determine equilibrium behavior, because the probability of a change in competences is sufficiently small (see Online Appendix B). Thus φ_Γ will determine political transitions, and if φ_Γ(G) = G, then G will remain in power as a stable government. However, when a stochastic shock hits and Γ changes to Γ′, political transitions will be determined by the transition rule φ_Γ′, and unless φ_Γ′(G) = G, following this shock there will be a transition to a new government, G′ = φ_Γ′(G).

DEFINITION 2. Let the set of mappings φ_Γ : G → G (a separate mapping for each configuration Γ = {Γ_G}_{G∈G}) be defined by the following two conditions. When the configuration of competences is given by Γ, we have that for any G ∈ G:

i. the set of players who prefer φ_Γ(G) to G (in terms of discounted utility) forms a winning coalition, that is,

S = {i ∈ I : V_i(φ_Γ(G) | φ_Γ) > V_i(G | φ_Γ)} ∈ W_G;

ii. there is no alternative government H ∈ G that is preferred both to a transition to φ_Γ(G) and to staying in G permanently, that is, there is no H such that

S_H = {i ∈ I : V_i(H | φ_Γ) > V_i(φ_Γ(G) | φ_Γ)} ∈ W_G

and

S′_H = {i ∈ I : V_i(H | φ_Γ) > w_i(G)/(1 − β)} ∈ W_G

(alternatively, |S_H| < m_G or |S_H ∩ G| < l_G, or |S′_H| < m_G or |S′_H ∩ G| < l_G).

Then the set of mappings φ_Γ constitutes a (stochastic Markov) political equilibrium for an environment with sufficiently infrequent changes if there is a transition from government G^t to government G^{t+1} = φ_Γ(G^t) at time t if and only if the configuration of competences at time t is {Γ^t_G}_{G∈G} = Γ.

Therefore, a political equilibrium with sufficiently infrequent changes involves the same political transitions (or the same stability of governments) as those implied by the mappings φ_Γ defined in (6), applied separately for each configuration Γ. The next theorem provides the general characterization of stochastic political equilibria in environments with sufficiently infrequent changes.

THEOREM 3. Suppose that Assumptions 1–3 hold, and let φ_Γ : G → G be the mapping defined by (6), applied separately for each configuration Γ. Then there exist β_0 < 1 and δ_0 > 0 such that for any discount factor β > β_0 and any positive probability of shocks δ < δ_0, φ_Γ is the unique acyclic political equilibrium.

Proof. See Online Appendix B.
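To make the construction concrete, the following sketch computes the transition rule for one configuration of abilities. The modeling choices here are ours, for illustration only: competence is taken to be the sum of member abilities, m_G is a simple majority of all n players for every G, l_G = l is constant, and office rents dominate stage payoffs, so incumbents excluded from an alternative always oppose it. Re-running the same construction on the post-shock ability vector yields the post-shock rule, which is how Theorem 3 is applied.

```python
from itertools import combinations

def phi(gamma, k, l):
    """Sketch of the recursive construction behind (6) for one
    configuration of abilities: governments are processed in order of
    decreasing competence; a government maps to the most competent
    already-stable government preferred by a winning coalition (a
    majority of all n players including at least l incumbents), if any,
    and is otherwise stable."""
    n = len(gamma)
    majority = n // 2 + 1
    # enumerate k-member governments in order of decreasing competence
    govs = sorted(combinations(range(n), k),
                  key=lambda g: -sum(gamma[i] for i in g))
    mapping, stable = {}, []
    for g in govs:
        target = g
        for h in stable:  # stable governments found so far are all more competent
            # supporters of h over g: members of h (office or competence)
            # plus outsiders to both (competence); g \ h lose office and oppose
            supporters = set(h) | (set(range(n)) - set(g))
            if len(supporters) >= majority and len(supporters & set(g)) >= l:
                target = h  # first hit is the most competent, as in (6)
                break
        mapping[g] = target
        if target == g:
            stable.append(g)
    return mapping

gamma = [16.0, 8.0, 4.0, 2.0, 1.0]  # distinct subset sums, so no ties
for l in (0, 1, 2):
    m = phi(gamma, k=2, l=l)
    print(l, sorted(g for g in m if m[g] == g))
```

In this toy configuration, l = 0 leaves only the most able pair {0, 1} stable; l = 1 also makes the incompetent government {2, 3} stable, because no more competent stable alternative retains one of its incumbents; and l = k = 2 makes every government stable.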

The intuition for this theorem is straightforward. When shocks are sufficiently infrequent, the same calculus that applied in the nonstochastic environment still determines preferences, because all agents put most weight on the events that will happen before such a change. Consequently, a stable government will arise and will remain in place until a stochastic shock arrives and changes the configuration of competences. Following such a shock, the stable government for the new configuration of competences emerges. Theorem 3 therefore provides us with a tractable way of characterizing stochastic transitions. In the next subsection, we use this result to study the links between different political regimes and institutional flexibility.

V.B. The Structure of Stochastic Transitions

In the rest of this section, we compare different political regimes in terms of their flexibility (adaptability to stochastic shocks). Our main results show that, even though limited incumbency veto power does not guarantee the emergence of more competent governments in the nonstochastic environment (nor does it guarantee greater expected competence), it does lead to greater "flexibility" and to better performance according to certain measures in the presence of shocks. In what follows, we always impose Assumptions 1, 2′, 3, and 4, which ensure that when the discount factor β is sufficiently large and the frequency of stochastic shocks δ is sufficiently small, there will be a unique (and acyclic)


political equilibrium. Propositions 5, 6, and 7 describe properties of this equilibrium.

We also impose some additional structure on the distribution F(Γ^t | Γ^{t−1}) by assuming that any shock corresponds to a rearrangement ("permutation") of the abilities of different individuals. Put differently, we assume throughout this section that there is a fixed vector of abilities, say a = {a_1, . . . , a_n}, and the actual distribution of abilities across individuals at time t, {γ^t_j}^n_{j=1}, is given by some permutation ϕ^t of this vector a. We adopt the convention that a_1 > a_2 > · · · > a_n. Intuitively, this captures the notion that a shock changes which individual is best placed to solve certain tasks and thus most effective in government functions.

We next characterize the "flexibility" of different political regimes. Throughout the rest of this section, our measure of flexibility is the probability with which the best government will be in power (either at a given t or as t → ∞).19 More formally, let π_t(l, k, n | G, {Γ}) be the probability that, in a society with n individuals under a political regime characterized by l (for given k), a configuration of competences given by {Γ}, and current government G ∈ G, the most competent government will be in power at time t. Given n and k, we will think of a regime characterized by l′ as more flexible than one characterized by l if π_t(l′, k, n | G, {Γ}) > π_t(l, k, n | G, {Γ}) for all G and {Γ} and for all t following a stochastic shock. Similarly, we can think of one regime as asymptotically more flexible than another if lim_{t→∞} π_t(l′, k, n | G, {Γ}) > lim_{t→∞} π_t(l, k, n | G, {Γ}) for all G and {Γ} (provided that these limits are well defined). Clearly, "being more flexible" is a partial order.

19. This is a natural metric of flexibility in the context of our model; because we have not introduced any cardinal comparisons between the competences of governments, "expected competence" would not be a meaningful measure (see also Footnote 21). Note also that we would obtain similar results if we related flexibility to the probability that one of the best two or three governments comes to power, etc.

PROPOSITION 5. Suppose that any permutation ϕ of the vector a following a shock is equally likely.
1. If l = 0, then a shock immediately leads to the replacement of the current government by the new most competent government.
2. If l = 1, the competence of the government following a shock never decreases further; instead, it increases with


probability no less than

1 − (k − 1)!(n − k)!/(n − 1)! = 1 − 1/C(n − 1, k − 1),

where C(a, b) denotes the binomial coefficient. Starting with any G and {Γ}, the probability that the most competent government will ultimately come to power as a result of a shock is

lim_{t→∞} π_t(l, k, n | G, {Γ}) = π(l, k, n) ≡ 1 − C(n − k, k)/C(n, k) < 1.

For fixed k, as n → ∞, π(l, k, n) → 0.
3. If l = k ≥ 2, then a shock never leads to a change in government. The probability that the most competent government is in power at any given period (any t) after the shock is

π_t(l = k, k, n | ·, ·) = 1/C(n, k).

This probability is strictly less than π_t(l = 0, k, n | G, {Γ}) and π_t(l = 1, k, n | G, {Γ}) for any G and {Γ}.

Proof. See Online Appendix B.
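These closed forms are easy to check numerically. The following snippet (our illustration, with shocks modeled as uniformly random ability permutations) verifies the binomial identity in part 2, traces how π(l = 1, k, n) vanishes as n grows, and brute-forces the l = k case for n = 6, k = 2.

```python
from itertools import permutations
from math import comb, factorial

n, k = 6, 2

# Part 2: the lower bound equals 1 - 1/C(n-1, k-1).
lhs = 1 - factorial(k - 1) * factorial(n - k) / factorial(n - 1)
assert abs(lhs - (1 - 1 / comb(n - 1, k - 1))) < 1e-12

# Part 2: pi(l=1, k, n) = 1 - C(n-k, k)/C(n, k) falls toward 0 as n grows.
pi = lambda n, k: 1 - comb(n - k, k) / comb(n, k)
print([round(pi(m, k), 3) for m in (10, 100, 1000)])

# Part 3: under l = k the government G = {0, ..., k-1} never changes, so it
# is the most competent one only if the shock happens to hand its members
# the k highest abilities; brute force over all permutations gives 1/C(n, k).
G = set(range(k))
hits = sum(set(sorted(range(n), key=lambda i: p[i])[-k:]) == G
           for p in permutations(range(n)))
assert abs(hits / factorial(n) - 1 / comb(n, k)) < 1e-12
```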

Proposition 5 contains a number of important results. A perfect democracy (l = 0) does not create any barriers against the installation of the best government at any point in time. Hence, under a perfect democracy, every shock is "flexibly" met by a change in government according to the wishes of the population at large (which here means that the most competent government will come to power). As we know from the analysis in Section IV, this is no longer true as soon as members of the government have some veto power. In particular, we know that without stochastic shocks, arbitrarily incompetent governments may come to power and remain in power. However, in the presence of shocks, the evolution of equilibrium governments becomes more complex.

Next consider the case with l ≥ 1. Now, even though the immediate effect of a shock may be a deterioration in government competence, there are forces that increase government competence in the long run. This is most clearly illustrated in the case where l = 1. With this set of political institutions, there is zero


probability that there will be a further decrease in government competence following a shock. Moreover, there is a positive probability that competence will improve, and in fact a positive probability that, following a shock, the most competent government will be instituted. In particular, a shock may make the current government unstable, and in this case there will be a transition to a new stable government. A transition to a less competent government would never receive support from the population. The change in competences may be such that the only stable government after the shock, starting with the current government, may be the best government.20

20. Nevertheless, the probability of the most competent government coming to power, though positive, may be arbitrarily low.

Proposition 5 also shows that when political institutions take the form of an extreme dictatorship, there will never be any transition; thus the quality of the current government can deteriorate following shocks (in fact, it can do so significantly). Most importantly, Proposition 5, as well as Proposition 6 below, shows that regimes with intermediate levels of incumbency veto power have a higher degree of flexibility than extreme dictatorship, ensuring better long-run outcomes (and, naturally, perfect democracy has the highest degree of flexibility). This unambiguous ranking in the presence of stochastic shocks (and its stronger version stated in the next proposition) contrasts with the results in Section IV, which showed that general comparisons between regimes with different degrees of incumbency veto power (beyond perfect democracy) are not possible in the nonstochastic case.

An informal intuition for the greater flexibility of regimes with more limited incumbency veto power in the presence of stochastic shocks can be obtained from Examples 5 and 6 in the preceding section. Recall from these examples that an advantage of the less democratic regime, l = 2, is that the presence of two veto players makes a large number of governments near the initial one stable. But this implies that if the initial government is destabilized by a shock, there will only be a move to a nearby government. In contrast, the more democratic regime, l = 1, often makes highly incompetent governments stable because there are no nearby stable governments (recall, for example, part 2 of Proposition 3). But this also implies that if a shock destabilizes the current government, a significant improvement in the quality of the government becomes more likely. Thus, at a broad level, and


contrasting with the presumption in the existing literature (e.g., Tsebelis [2002]), regimes with greater incumbency veto power "create more stability," which facilitates small or moderate-sized improvements in initial government quality; but they do not create a large "basin of attraction" for the most competent government. In contrast, in regimes with less incumbency veto power, low-competence governments are often made stable by the instability of nearby alternative governments; this instability can be a disadvantage in deterministic environments, as illustrated in the preceding section, but turns into a significant flexibility advantage in the presence of stochastic shocks because it creates the possibility that, after a shock, there may be a jump to a very high-competence government (in particular, to the best government, which now has a larger "basin of attraction").

The next proposition strengthens the conclusions of Proposition 5. In particular, it establishes that the probability of having the most competent government in power is decreasing in l (in other words, it is increasing in this measure of the "degree of democracy").21

PROPOSITION 6. The probability of having the most competent government in power after a shock (for any t), π_t(l, k, n | G, {Γ}), is decreasing in l for any k, n, G, and {Γ}.

Proof. See Online Appendix B.

Propositions 5 and 6 highlight a distinct flexibility advantage (in terms of the probability of the most competent government coming to power) of regimes with low incumbency veto power ("more democratic" regimes). These results can be strengthened further when shocks are "limited," in the sense that only the abilities of two (or several) individuals in the society are swapped. The next proposition contains these results.

21. This conclusion need not be true for the "expected competence" of the government, because we have not made cardinal assumptions on abilities and competences. In particular, it is possible that some player is not a member of any stable government for some l and becomes part of a stable government for some l′ < l. If this player is such that the competence of any government that includes him or her is very low (e.g., his or her ability is very low), then expected competence under l′ may be lower. In Online Appendix B, we provide an example (Example 10) illustrating this point, and we also show that the expected competence of government is decreasing in l when l is close to 0 or to k.

PROPOSITION 7. Suppose that any shock permutes the abilities of x individuals in the society.


1. If x = 2 (so that the abilities of two individuals are swapped at a time) and l ≤ k − 1, then the competence of the government in power is nondecreasing over time; that is, π_t(l, k, n | G, {Γ}) is nondecreasing in t for any l, k, n, G, and {Γ} such that l ≤ k − 1. Moreover, if the probability of a swap of abilities between any two individuals is positive, then the most competent government will be in power as t → ∞ with probability 1; that is, lim_{t→∞} π_t(l, k, n | G, {Γ}) = 1 (for any l, k, n, G, and {Γ} such that l ≤ k − 1).
2. If x > 2, then the results in part 1 hold provided that l ≤ k − x/2.

Proof. See Online Appendix B.

An interesting application of Proposition 7 is that when shocks are (relatively) rare and limited in their scope, relatively democratic regimes will gradually improve over time and install the most competent government in the long run. This is not true for the most autocratic regimes, however. This proposition therefore strengthens the conclusions of Propositions 5 and 6 in highlighting the flexibility benefits of more democratic regimes.

VI. CONCLUSIONS

In this paper, we have provided a tractable model of dynamic political selection. The main barrier to the selection of good politicians and to the formation of good governments in our model is not the difficulty of identifying competent or honest politicians, but the incumbency veto power of current governments. Our framework shows how a small degree of incumbency veto power can lead to the persistence of highly inefficient and incompetent governments. This is because incumbency veto power implies that at least one of the (potentially many) members of the government needs to consent to a change in the composition of government. However, all current members of the government may recognize that any change may unleash a further round of changes, ultimately unseating them. In this case, they will all oppose any change in government, even if such changes could improve welfare significantly for the rest of the society, and highly incompetent governments can remain in power.

Using this framework, we study the implications of different political institutions for the selection of governments in both


nonstochastic and stochastic environments. A perfect democracy corresponds to a situation in which there is no incumbency veto power; thus citizens can nominate alternative governments and vote them into power without the need for the consent of any member of the incumbent government. In this case, we show that the most competent government will always come to power. Interestingly, however, any deviation from perfect democracy breaks this result, and the long-run equilibrium government can be arbitrarily incompetent (relative to the best possible government). In an extreme dictatorship, where any single member of the current government has veto power over any change, the initial government always remains in power and can be arbitrarily costly for the society. More surprisingly, the same is true for any political institution other than perfect democracy. Moreover, there is no obvious ranking between different sets of political institutions (other than perfect democracy and extreme dictatorship) in terms of what they imply for the quality of long-run government. In fact, regimes with greater incumbency veto power, which may be thought of as "less democratic," can lead to higher-quality governments both in specific instances and in expectation (with a uniform or nonuniform distribution over the set of feasible initial governments). Even though no such ranking across political institutions is possible, we provide a fairly tight characterization of the structure of stable governments in our benchmark nonstochastic society.

In contrast, in stochastic environments, more democratic political regimes have a distinct advantage because of their greater "flexibility." In particular, in stochastic environments, either the abilities and competences of individuals or the needs of government functions change, shuffling the ranking of different possible governments in terms of their competence and effectiveness. Less incumbency veto power then ensures greater "adaptability" or flexibility. This result therefore suggests that a distinct advantage of "more democratic" regimes might be their greater flexibility in the face of shocks and changing environments.

Finally, we also compare "junta-like" and "royalty-like" regimes. The former is our benchmark society, where a change in government requires the consent or support of one or more members of the current government. The latter corresponds to situations in which one or more individuals are special and must always be part of the government (hence the label "royalty"). If royal individuals have low ability, royalty-like regimes can lead to the persistence of highly incompetent governments. However,


we also show that in stochastic environments, royalty-like regimes may lead to the emergence of higher-quality governments in the long run than junta-like regimes. This is because royal individuals are not afraid of changes in government, since their tenure is guaranteed. In contrast, members of the junta may resist changes that would increase government competence or quality, because such changes may lead to another round of changes, ultimately excluding all members of the initial government.

An important contribution of our paper is to provide a tractable framework for the dynamic analysis of the selection of politicians. This tractability makes it possible to extend the analysis in various fruitful directions. For example, it is possible to introduce conflict of interest among voters by having each government be represented by two characteristics: competence and ideological leaning. In this case, not all voters will simply prefer the most competent government. The general approach developed here remains applicable in this case. Another generalization would allow the strength of voters' preferences for a particular government to influence the number of veto players, so that a transition away from a semi-incompetent government could be blocked by a few insiders, but more unified opposition from government members would be necessary to block a transition away from a highly incompetent government. The current framework also abstracts from "self-selection" issues, whereby some citizens may not wish to be part of some, or any, governments (e.g., as in Caselli and Morelli [2004] or Mattozzi and Merlo [2008]). Such considerations can also be incorporated by adding a second dimension of heterogeneity in the outside opportunities of citizens and restricting feasible transitions to coalitions that include only members who would prefer, or can be incentivized, to be part of the government.

An open question, which may be studied by placing more structure on preferences and institutions, is the characterization of equilibria in the stochastic environment when shocks occur frequently. The most important direction for future research, which is again feasible using the general approach here, is an extension of this framework to incorporate the asymmetric information issues emphasized in the previous literature. For example, we can generalize the environment in this paper so that the ability of an individual is not observed until he or she becomes part of the government. In this case, to install high-quality governments, it is necessary to first "experiment" with different types of governments.


The dynamic interactions highlighted by our analysis will then become a barrier to such experimentation. In this case, the set of political institutions that will ensure high-quality governments must exhibit a different type of flexibility, whereby some degree of "churning" of governments can be guaranteed even without shocks. Another interesting direction is to introduce additional instruments, so that some political regimes can provide incentives to politicians to take actions in line with the interests of the society at large. In that case, successful political institutions must ensure both the selection of high-ability individuals and the provision of incentives to these individuals once they are in government. Finally, as hinted by the discussion in this paragraph, an interesting and challenging extension is to develop a more general "mechanism design" approach in this context, whereby certain aspects of political institutions are designed to facilitate the appropriate future changes in government. We view these directions as interesting and important areas for future research.

APPENDIX

Let us introduce the following binary relation ≻ on the set of feasible governments G. For any G, H ∈ G we write

(14)  H ≻ G if and only if {i ∈ I : w_i(H) > w_i(G)} ∈ W_G.

In other words, H ≻ G if and only if there exists a winning coalition X ∈ W_G such that all members of X have a higher stage payoff in H than in G. Let us also define the set D = {G ∈ G : φ(G) = G}.

A. Proof of Theorem 1

We start with two lemmas that establish useful properties of the payoff functions and of the mapping φ, and then present the proof of Theorem 1.

LEMMA 1. Suppose that G, H ∈ G and Γ_G > Γ_H. Then
1. For any i ∈ I, w_i(G) < w_i(H) if and only if i ∈ H \ G.
2. H ⊁ G.
3. |{i ∈ I : w_i(G) > w_i(H)}| ≥ ⌈n/2⌉ ≥ k̄.

Proof of Lemma 1. Part 1. If Γ_G > Γ_H then, by Assumption 1, w_i(G) > w_i(H) whenever i ∈ G or i ∉ H. Hence, w_i(G) < w_i(H) is


possible only if i ∈ H \ G (note that w_i(G) = w_i(H) is ruled out by Assumption 3). At the same time, i ∈ H \ G implies that w_i(G) < w_i(H) by Assumption 1; hence the equivalence.
Part 2. We have |H \ G| ≤ |H| ≤ k̄ ≤ m_G (by Assumption 2, k̄ ≤ m_G), so that H \ G ∉ W_G, and H ⊁ G by definition (14).
Part 3. We have {i ∈ I : w_i(G) > w_i(H)} = I \ {i ∈ I : w_i(G) < w_i(H)} ⊃ I \ (H \ G); hence |{i ∈ I : w_i(G) > w_i(H)}| ≥ n − k̄ ≥ n − ⌊n/2⌋ = ⌈n/2⌉ ≥ k̄.

LEMMA 2. Consider the mapping φ defined in (6) and let G, H ∈ G. Then
1. Either φ(G) = G (and then G ∈ D) or φ(G) ≻ G.
2. Γ_{φ(G)} ≥ Γ_G.
3. If φ(G) ≻ G and H ≻ G, then Γ_{φ(G)} ≥ Γ_H.
4. φ(φ(G)) = φ(G).

Proof of Lemma 2. The proof of this lemma is straightforward and is omitted.

Proof of Theorem 1. As in the text, let us enumerate the elements of G as {G_1, G_2, . . . , G_|G|} such that Γ_{G_x} > Γ_{G_y} whenever x < y. First, we prove that the function φ defined in (6) constitutes a (Markov) political equilibrium. Take any G_q, 1 ≤ q ≤ |G|. By (6), either φ(G_q) = G_q, or φ(G_q) = G_{min{j∈M_q}}. In the latter case, the set of players who obtain a higher stage payoff under φ(G_q) than under G_q (i.e., those with w_i(φ(G_q)) > w_i(G_q)) form a winning coalition in G_q by (5). Because by definition φ(G_q) is φ-stable, that is, φ(G_q) = φ(φ(G_q)), we have V_i(φ(G_q)) > V_i(G_q) for a winning coalition of players. Hence, in either case, condition (i) of Definition 1 is satisfied.
Now suppose, to obtain a contradiction, that condition (ii) of Definition 1 is violated, and X, Y ∈ W_{G_q} are winning coalitions such that V_i(H) > w_i(G_q)/(1 − β) for all i ∈ X and V_i(H) > V_i(φ(G_q)) for all i ∈ Y, for some alternative H ∈ G. Consider first the case Γ_{φ(H)} ≠ Γ_{φ(G_q)}; then V_i(H) > V_i(φ(G_q)) would imply w_i(φ(H)) > w_i(φ(G_q)) as β is close to 1, and hence the set of players who have w_i(φ(H)) > w_i(φ(G_q)) would include Y, and thus would be a winning coalition in G_q. This is impossible if Γ_{φ(H)} < Γ_{φ(G_q)} (only players in φ(H) would possibly prefer φ(H), and they are fewer than m_{G_q}). If Γ_{φ(H)} > Γ_{φ(G_q)}, however, we would get a government φ(H) that is φ-stable by construction of φ and that is preferred to G_q by at least m_{G_q} players (all except perhaps members of G_q) and at least l_{G_q} members of G_q, as φ(H) is stable and


Γ_{φ(G_q)} ≥ Γ_{G_q} (indeed, at least l_{G_q} members of G_q—those in coalition X—had V_i(φ(H)) > w_i(G_q)/(1 − β), which thus means that they belong to φ(H), and hence must have w_i(φ(H)) > w_i(G_q)). This would imply that φ(H) ∈ M_q by (5), but in that case Γ_{φ(H)} > Γ_{φ(G_q)} would contradict (6).
Finally, consider the case Γ_{φ(H)} = Γ_{φ(G_q)}, which by Assumption 3 implies that φ(H) = φ(G_q). Now V_i(H) > V_i(φ(G_q)) implies that w_i(H) > w_i(φ(G_q)) for all i ∈ Y, as the instantaneous utilities are the same except for the current period. Because this includes at least m_{G_q} players, we must have that Γ_H > Γ_{φ(G_q)}. But Γ_{φ(H)} ≥ Γ_H by (6), so Γ_{φ(H)} ≥ Γ_H > Γ_{φ(G_q)}, which contradicts Γ_{φ(H)} = Γ_{φ(G_q)}. This contradiction proves that the mapping φ satisfies both conditions of Definition 1, and thus forms a political equilibrium.
To prove uniqueness, suppose that there is another acyclic political equilibrium ψ. For each G ∈ G, define χ(G) = ψ^|G|(G); due to acyclicity, χ(G) is ψ-stable for all G. We prove the following sequence of claims. First, we must have that Γ_{χ(G)} ≥ Γ_G; indeed, otherwise condition (i) of Definition 1 would not be satisfied for large β. Second, we prove that all transitions must take place in one step; that is, χ(G) = ψ(G) for all G. If this were not the case, then, due to the finiteness of any chain of transitions, there would exist G ∈ G such that ψ(G) ≠ χ(G), but ψ²(G) = χ(G). Take H = χ(G). Then Γ_H > Γ_{ψ(G)}, Γ_H > Γ_G, and ψ(H) = H. For β sufficiently close to 1, the condition V_i(H) > w_i(G)/(1 − β) is automatically satisfied for the winning coalition of players who had V_i(ψ(G)) > w_i(G)/(1 − β). We next prove that V_i(H) > V_i(ψ(G)) for a winning coalition of players in G. Note that this condition is equivalent to w_i(H) > w_i(ψ(G)). The fact that at least m_G players prefer H to ψ(G) follows from Γ_H > Γ_{ψ(G)}. Moreover, because χ(G) = H, at least l_G members of G must also be members of H; naturally, they prefer H to ψ(G). Consequently, condition (ii) of Definition 1 is violated. This contradiction proves that χ(G) = ψ(G) for all G.
Finally, we prove that ψ(G) coincides with φ(G), defined in (6). Suppose not; that is, suppose that φ(G) ≠ ψ(G). Without loss of generality, we may assume that G is the most competent government that has φ(G) ≠ ψ(G); that is, φ(H) = ψ(H) whenever Γ_H > Γ_G. By Assumption 3, we have that Γ_{φ(G)} ≠ Γ_{ψ(G)}. Suppose that Γ_{φ(G)} > Γ_{ψ(G)} (the case Γ_{φ(G)} < Γ_{ψ(G)} is treated similarly). As ψ forms a political equilibrium, it must satisfy condition (ii) of Definition 1, for H = φ(G) in particular. Because φ is a political equilibrium, it must be the case that w_i(φ(G)) > w_i(G), and

1554

QUARTERLY JOURNAL OF ECONOMICS

thus Vi (H | φ) > wi (G)/(1 − β), for a winning coalition of players. Now, the following two facts must hold. First, Vi (H | ψ) > wi (G)/(1 − β) for a winning coalition of players; this follows because H is φ-stable and thus ψ-stable (as ΓH > ΓG and by the choice of G). Second, Vi (H | ψ) > Vi (ψ(G) | ψ) for a winning coalition of players; indeed, H and ψ(G) are both ψ-stable, and the former is preferred to the latter (in terms of stage payoffs) by at least mG players and at least lG government members, as Γφ(G) > Γψ(G) and the intersection of H and G contains at least lG members (because H = φ(G)). The existence of such an H leads to a contradiction, which completes the proof.

B. Dynamic Game

We now present a dynamic game that captures salient features of the process of government change. In this game, different individuals propose alternative governments, and then all individuals (including current members of the government) vote over these proposals. We define the MPEs of this game, establish their existence and their properties, and show the equivalence between the notion of MPE and that of the (stochastic) Markov political equilibrium defined in the text. We also provide a number of examples referred to in the text (e.g., on the possibility of a cyclic MPE or political equilibrium, and on changes in expected competence). Let us first introduce an additional state variable, denoted v t , which determines whether the current government can be changed. In particular, v t takes two values: v t = s corresponds to a "sheltered" political situation (or "stable" political situation, although we reserve the term "stable" for governments that persist over time) and v t = u designates an unstable situation. The government can be changed only during unstable times. A sheltered political situation destabilizes (becomes unstable) with probability r in each period; that is, Pr(v t = u | v t−1 = s) = r. 
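As a quick illustration of the v t process, the sketch below simulates the sheltered/unstable chain under the simplifying (and purely illustrative) assumption that every unstable period ends with the incumbent surviving, so u reverts to s after one period; the function name `simulate_stability` is ours, not the paper's.

```python
import random

def simulate_stability(r, periods, seed=0):
    # Two-state sketch of the v_t process: a sheltered situation (s)
    # destabilizes with probability r each period. We assume, purely
    # for illustration, that every unstable period (u) ends with the
    # incumbent surviving, so it reverts to s in the next period.
    rng = random.Random(seed)
    v, unstable = "u", 0  # v0 = u, as in the text
    for _ in range(periods):
        if v == "u":
            unstable += 1
            v = "s"
        elif rng.random() < r:
            v = "u"
    return unstable / periods

# With r small the society is sheltered most of the time; the long-run
# frequency of unstable periods is roughly r / (1 + r).
print(simulate_stability(r=0.05, periods=200_000))
```

Under this assumption the chain spends roughly a fraction r/(1 + r) of periods in the unstable state, consistent with the text's premise that, for small r, opportunities to change the government are rare.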
These events are independent across periods, and we also assume that v 0 = u. An unstable situation becomes sheltered when an incumbent government survives a challenge or is not challenged at all (as explained below). We next describe the procedure for challenging an incumbent government. We start with some government Gt at time t. If at time t the situation is unstable, then all individuals i ∈ I are ordered according to some sequence ηGt . Individuals, in turn, nominate subsets of alternative governments Ait ⊂ G \ {Gt }, which

POLITICAL SELECTION AND BAD GOVERNMENTS

1555

will be part of the primaries. An individual may choose not to nominate any alternative government, in which case Ait = ∅. All nominated governments make up the set At , so (15)

At = {G ∈ G \ {Gt } : G ∈ Ait for some i ∈ I}.

If At ≠ ∅, then all alternatives in At take part in the primaries at time t. The primaries work as follows. All of the alternatives in At are ordered ξ_{Gt}^{At}(1), ξ_{Gt}^{At}(2), . . . , ξ_{Gt}^{At}(|At |) according to some prespecified order (depending on At and the current government Gt ). We refer to this order as the protocol ξ_{Gt}^{At} . The primaries are then used to determine the challenging government G′ ∈ At . In particular, we start with G1 , given by the first element of the protocol, ξ_{Gt}^{At}(1). In the second step, G1 is voted against the second element, ξ_{Gt}^{At}(2). We assume that all votes are sequential (and show below that the sequence in which votes take place does not have any effect on the outcome). If more than n/2 individuals support the latter, then G2 = ξ_{Gt}^{At}(2); otherwise G2 = G1 . Proceeding in this manner, G3 , G4 , . . . , G|At | are determined, and G′ is equal to the last element of the sequence, G|At | . This ends the primary. After the primary, the challenger G′ is voted against the incumbent government Gt . G′ wins if and only if a winning coalition of individuals (i.e., a coalition that belongs to WGt ) supports G′ . Otherwise, we say that the incumbent government Gt wins. If At = ∅ to start with, then there is no challenger and the incumbent government is again the winner. If the incumbent government wins, it stays in power, and the political situation becomes sheltered; that is, Gt+1 = Gt and v t+1 = s. Otherwise, the challenger becomes the new government, but the situation remains unstable; that is, Gt+1 = G′ and v t+1 = v t = u. In either case, all individuals receive stage payoff wi (Gt ) (we assume that the new government starts acting from the next period). More formally, the exact procedure is as follows.

• Period t = 0, 1, 2, . . . begins with government Gt in power. If the political situation is sheltered, v t = s, then each individual i ∈ I receives stage payoff uit (Gt ); in the next period, Gt+1 = Gt , and v t+1 = s with probability 1 − r and v t+1 = u with probability r.
• If the political situation is unstable, v t = u, then the following events take place:


1. Individuals are ordered according to ηGt , and in this sequence, each individual i nominates a subset of feasible governments Ait ⊂ G \ {Gt } for the primaries. These nominations determine the set of alternatives At as in (15).
2. If At = ∅, then we say that the incumbent government wins, Gt+1 = Gt , v t+1 = s, and each individual receives stage payoff uit (Gt ). If At ≠ ∅, then the alternatives in At are ordered according to the protocol ξ_{Gt}^{At} .
3. If At ≠ ∅, then the alternatives in At are voted against each other. In particular, at the first step, G1 = ξ_{Gt}^{At}(1). If |At | > 1, then for 2 ≤ j ≤ |At |, at step j, alternative Gj−1 is voted against ξ_{Gt}^{At}(j). Voting in the primary takes place as follows: all individuals vote yes or no sequentially according to some prespecified order, and Gj = ξ_{Gt}^{At}(j) if and only if the set of individuals who voted yes, Y tj , is a simple majority (i.e., if |Y tj | > n/2); otherwise, Gj = Gj−1 . The challenger is determined as G′ = G|At | .
4. Government G′ challenges the incumbent government Gt , and voting in the election takes place. In particular, all individuals vote yes or no sequentially according to some prespecified order, and G′ wins if and only if the set of individuals who voted yes, Y t , is a winning coalition in Gt (i.e., if Y t ∈ WGt ); otherwise, Gt wins.
5. If Gt wins, then Gt+1 = Gt and v t+1 = s; if G′ wins, then Gt+1 = G′ and v t+1 = u. In either case, each individual obtains stage payoff uit (Gt ) = wi (Gt ).

Several important features of this dynamic game are worth emphasizing. First, the set of winning coalitions WGt when the government is Gt determines which proposals for change in the government are accepted. Second, to specify a well-defined game we had to introduce the prespecified order ηG in which individuals nominate alternatives for the primaries, the protocol ξ_G^A for the order in which alternatives are considered, and also the order in which votes are cast. 
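Steps 1–5 above can be sketched in code. This is a minimal illustration rather than the paper's formal game: `run_primaries`, `election`, and the myopic preference rule below are our hypothetical constructions, and the winning-coalition test is a stylized stand-in for WGt.

```python
def run_primaries(alternatives, prefers, individuals):
    # Steps 2-3: sequential pairwise voting through the protocol order.
    # prefers(i, a, b) is True if individual i strictly prefers a to b.
    if not alternatives:
        return None
    champ, n = alternatives[0], len(individuals)
    for rival in alternatives[1:]:
        yes = sum(prefers(i, rival, champ) for i in individuals)
        if yes > n / 2:  # a simple majority replaces the current winner
            champ = rival
    return champ

def election(incumbent, challenger, prefers, individuals, m, l):
    # Steps 4-5: the challenger wins iff the yes-voters form a winning
    # coalition: at least m individuals in total, including at least l
    # members of the incumbent government (a stylized W_{G_t}).
    if challenger is None:
        return incumbent
    yes = {i for i in individuals if prefers(i, challenger, incumbent)}
    if len(yes) >= m and len(yes & set(incumbent)) >= l:
        return challenger
    return incumbent

# Hypothetical myopic preferences with competences Γ{i} = 5 - i and a
# large office rent (cf. Example 8 below):
w = lambda i, G: (5 - G[0]) + 100 * (i in G)
prefers = lambda i, a, b: w(i, a) > w(i, b)
people = range(1, 6)
champ = run_primaries([(3,), (2,)], prefers, people)
print(election((5,), champ, prefers, people, m=3, l=0))  # → (2,)
```

In the usage example, {2} defeats {3} in the primary (individuals 1, 2, 4, and 5 prefer it) and then unseats the incumbent {5} because four yes-voters exceed the majority threshold m = 3 with l = 0.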
Ideally we would like these orders not to have a major influence on the structure of equilibria, because they are not an essential part of the economic environment and we do not have a good way of mapping the specific orders to reality. We will see that this is indeed the case in the equilibria of interest. Finally, the rate at which political situations become unstable, r,


has an important influence on payoffs by determining the rate at which opportunities to change the government arise. In what follows, we assume that r is relatively small, so that political situations are not unstable most of the time. Here it is also important that political instability ceases after the incumbent government withstands a challenge (or if there is no challenge). This can be interpreted as the government having survived a "no-confidence" motion. In addition, as in the text, we focus on situations in which the discount factor β is large.

C. Strategies and Definition of Equilibrium

We define strategies and equilibria in the usual fashion. In particular, let h^{t,Qt} denote the history of the game up to period t and stage Q t within period t (there are several stages in period t if v t = u). This history includes all governments, proposals, votes, and stochastic events up to this time. The set of such histories is denoted by H^{t,Qt}. A history h^{t,Qt} can be decomposed into two parts: we can write h^{t,Qt} = (h^t , Q t ) and correspondingly H^{t,Qt} = H^t × Q^t , where h^t summarizes all events that have taken place up to period t − 1 and Q t is the list of events that have taken place within period t, when there is an opportunity to change the government. A strategy of individual i ∈ I, denoted σi , maps H^{t,Qt} (for all t and Q t ) into a proposal when i nominates alternative governments for the primaries (i.e., at the first stage of a period where v t = u) and a vote for each possible proposal at each possible decision node (recall that the ordering of alternatives is automatic and is done according to a protocol). A subgame perfect equilibrium (SPE) is a strategy profile {σi }i∈I such that the strategy of each i is a best response to the strategies of all other individuals for all histories. 
Because dynamic games can have many SPEs, including ones supported by complex trigger strategies that are not our focus here, in this Appendix we limit attention to the Markovian subset of SPEs. We next introduce the standard definition of MPE:

DEFINITION 3. A Markov perfect equilibrium (MPE) is a profile of strategies {σi∗ }i∈I that forms an SPE and such that σi∗ for each i in each period t depends only on Gt , Γ t , W t , and Q t (previous actions taken in period t).

MPEs are natural in such dynamic games, because they enable individuals to condition on all of the payoff-relevant information but rule out complicated trigger-like strategies, which


are not our focus in this paper. It turns out that even MPEs potentially lead to a very rich set of behavior. For this reason, it is also useful to consider subsets of MPEs—in particular, acyclic MPEs and order-independent MPEs. As discussed in the text, an equilibrium is acyclic if cycles (changing the initial government but then reinstalling it at some future date) do not take place along the equilibrium path. Cyclic MPEs are both less realistic and more difficult to characterize, motivating our main focus on acyclic MPEs. Formally, we have

DEFINITION 4. An MPE σ ∗ is cyclic if the probability that there exist t1 < t2 < t3 such that Gt3 = Gt1 ≠ Gt2 along the equilibrium path is positive. An MPE σ ∗ is acyclic if it is not cyclic.

Another relevant subset of MPEs, order-independent MPEs or simply order-independent equilibria, is introduced by Moldovanu and Winter (1995). These equilibria require that strategies not depend on the order in which certain events (e.g., proposal-making) unfold. Here we generalize (and slightly modify) their definition for our context. For this purpose, denote the above-described game with the set of protocols ξ = {ξ_G^A}_{G∈G, A∈P(G), G∉A} by GAME[ξ ], and denote the set of feasible protocols by X .

DEFINITION 5. Consider GAME[ξ ]. σ ∗ is an order-independent equilibrium for GAME[ξ ] if for any ξ ′ ∈ X , there exists an equilibrium σ ∗′ of GAME[ξ ′ ] such that σ ∗ and σ ∗′ lead to the same distributions of equilibrium governments Gτ | Gt for τ > t.

We establish the relationship between acyclic and order-independent equilibria in Theorem 5.²²

D. Characterization of Markov Perfect Equilibria

Recall the mapping φ : G → G defined by (6). We use the next theorem to establish the equivalence between political equilibria and MPEs in the dynamic noncooperative game.

THEOREM 4. Consider the game described above. Suppose that Assumptions 1–3 hold and let φ : G → G be the political
22. One could also require order independence with respect to η as well as with respect to ξ . It can easily be verified that the equilibria we focus on already satisfy this property; hence, it is not added as a requirement of "order independence" in Definition 5.


equilibrium given by (6). Then there exists ε > 0 such that for any β and r satisfying β > 1 − ε and r/(1 − β) < ε and any protocol ξ ∈ X :

1. There exists an acyclic MPE in pure strategies σ ∗ .
2. Take any acyclic MPE (in pure or mixed strategies) σ ∗ . If φ(G0 ) = G0 , then there are no transitions; otherwise, with probability 1, there exists a period t in which the government φ(G0 ) is proposed, wins the primaries, and wins the power struggle against Gt . After that, there are no transitions, so Gτ = φ(G0 ) for all τ ≥ t.

Proof of Theorem 4. Part 1. The proof of this theorem relies on Lemma 1. Let β0 be such that for any β > β0 the following inequalities are satisfied:

(16) for any G, G′ , H, H′ ∈ G and i ∈ I : wi (G) < wi (H) implies
(1 − β^{|G|}) wi (G′ ) + β^{|G|} wi (G) < (1 − β^{|G|}) wi (H′ ) + β^{|G|} wi (H).

For each G ∈ G, define the mapping χG : G → G by χG (H) = φ(H) if H ≠ G and χG (H) = G if H = G. Take any protocol ξ ∈ X and some node of the game at the beginning of a period t with v t = u. Consider the stages of the dynamic game that take place in this period as a finite game by assigning the following payoffs to the terminal nodes: (17)

vi (G, H) = wi (H) + [β/(1 − β)] wi (φ(H))   if H ≠ G,
vi (G, H) = [(1 + rβ)/(1 − β(1 − r))] wi (G) + [rβ²/((1 − β)(1 − β(1 − r)))] wi (φ(G))   if H = G,

where H = Gt+1 is the government that is scheduled to be in power in period t + 1, that is, the government that defeated the incumbent Gt if it was defeated, and Gt itself if it was not. For any such period t, take an SPE in pure strategies σG∗ = σG∗ t of the truncated game, such that this SPE is the same for any two nodes with the same incumbent government; the latter requirement ensures that once we map these SPEs to a strategy profile σ ∗ of the entire


game GAME[ξ ], this profile will be Markovian. In what follows, we prove that for any G ∈ G, (a) if σG∗ is played, then there is no transition if φ(G) = G and there is a transition to φ(G) otherwise; and (b) actions in profile σ ∗ are best responses if continuation payoffs are taken from profile σ ∗ rather than assumed to be given by (17). These two results will complete the proof of part 1. We start with part (a). Take any government G and consider the SPE of the truncated game σG∗ . First, consider the subgame where some alternative H has won the primaries and challenges the incumbent government G. Clearly, proposal H will be accepted if and only if Γφ(H) > ΓG. This implies, in particular, from the construction of mapping φ, that if φ(G) = G, then no alternative H may be accepted. Second, consider the subgame where nominations have been made and the players are voting according to protocol ξ_G^A . We prove that if φ(G) ∈ A, then φ(G) wins the primaries regardless of ξ (and subsequently wins against G, as φ(φ(G)) = φ(G) and Γφ(G) > ΓG). This is proved by backward induction: assuming that φ(G) has number q in the protocol, let us show that if it makes its way to the jth round, where q ≤ j ≤ |A|, then it wins in this round. The base is evident: if φ(G) wins in the last round, the players will get v(G, φ(G)) = [1/(1 − β)] w(φ(G)) (we drop the subscript for the player to refer to w and v as vectors of payoffs), whereas if it loses, they get v(G, H) for some H ≠ φ(G). Clearly, voting for φ(G) is better for a majority of the population, and thus φ(G) wins the primaries and defeats G. The inductive step is proven similarly; hence, in the subgame that starts from the qth round, φ(G) defeats the incumbent government. Because this holds irrespective of what happens in previous rounds, this concludes the second step as well. Third, consider the stage where nominations are made, and suppose, to obtain a contradiction, that φ(G) is not proposed. 
Then, in equilibrium, players get a payoff vector v(G, H), where H ≠ φ(G). But then, clearly, any member of φ(G) has a profitable deviation, namely to nominate φ(G) instead of or in addition to what he or she nominates in profile σG∗ . Because in an SPE there should be no profitable deviations, this completes the proof of part (a). Part (b) is straightforward. Suppose that the incumbent government is G. If some alternative H defeats government G, then from part (a), the payoffs that players get starting from the next period are given by [1/(1 − β)] wi (H) if φ(H) = H and wi (H) + [β/(1 − β)] wi (φ(H)) otherwise; in either case, the payoff is exactly equal to vi (G, H).
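A quick numerical sanity check of the continuation values in (17), using hypothetical stage payoffs and a two-government mapping (the names `w`, `phi`, and `v` are ours, not the paper's):

```python
# Hypothetical two-government illustration: phi maps A -> B, B -> B,
# so A is unstable and B is phi-stable; w holds one player's payoffs.
beta, r = 0.95, 0.01
w = {"A": 1.0, "B": 2.0}
phi = {"A": "B", "B": "B"}

def v(G, H):
    # Continuation payoff from next period on, per equation (17).
    if H != G:
        return w[H] + beta / (1 - beta) * w[phi[H]]
    return (1 + r * beta) / (1 - beta * (1 - r)) * w[G] \
        + r * beta ** 2 / ((1 - beta) * (1 - beta * (1 - r))) * w[phi[G]]

# When phi(H) = H, the first branch collapses to w(H)/(1 - beta),
# exactly the value used in the backward-induction argument above:
assert abs(v("A", "B") - w["B"] / (1 - beta)) < 1e-9
```

As r → 0, the second branch of v(G, G) tends to w(G)/(1 − β): with no destabilization, the incumbent simply stays in power forever.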


If no alternative defeats government G, then v t+1 = s (the situation becomes sheltered), and after that, government G stays in power until the situation becomes unstable again, and government φ(G) is in power in all periods thereafter; this again gives the payoff [(1 + rβ)/(1 − β(1 − r))] wi (G) + [rβ²/((1 − β)(1 − β(1 − r)))] wi (φ(G)). This implies that the continuation payoffs are indeed given by vi (G, H), which means that if profile σ ∗ is played in the entire game, no player has a profitable deviation. This proves part 1.

Part 2. Suppose σ ∗ is an acyclic MPE. Take any government G = Gt at some period t in some node on or off the equilibrium path. Define a binary relation → on the set G as follows: G → H if and only if either G = H and G has a positive probability of staying in power when Gt = G and v t = u, or G ≠ H and Gt+1 = H with positive probability when Gt = G and v t = u. Define another binary relation ⇒ on G as follows: G ⇒ H if and only if there exists a sequence (perhaps empty) of distinct governments H1 , . . . , Hq such that G → H1 → H2 → · · · → Hq = H and H → H. In other words, G ⇒ H if there is an on-equilibrium path that involves a sequence of transitions from G to H and stabilization of the political situation at H. Now, because σ ∗ is an acyclic equilibrium, there is no sequence containing at least two different governments H1 , . . . , Hq such that H1 → H2 → · · · → Hq → H1 . Suppose that for at least one G ∈ G, the set {H ∈ G : G ⇒ H} contains at least two elements. From acyclicity it is easy to derive the existence of a government G with the following properties: {H ∈ G : G ⇒ H} contains at least two elements, but for any element H of this set, {H′ ∈ G : H ⇒ H′ } is a singleton. Consider the restriction of profile σ ∗ to the part of the game where government G is in power, and call it σG∗ . 
The way we picked G implies that some government may defeat G with positive probability, and for any such government H the subsequent evolution prescribed by profile σ ∗ involves no uncertainty: the political situation stabilizes at the unique government H′ ≠ G (though perhaps H′ = H) such that H ⇒ H′ in no more than |G| − 2 steps. Given (16) and the assumption that r is small, this implies that no player is indifferent between two terminal nodes of this period that ultimately lead to two different governments H1 and H2 , or between one where G stays and one where it is overthrown. But players act sequentially, one at a time, which means that the last player to act on the equilibrium path, when it is still possible to reach different outcomes, must mix, and


therefore be indifferent. This contradiction proves that for any G, the government H such that G ⇒ H is well defined. Denote this government by ψ(G). To complete the proof, we must show that ψ(G) = φ(G) for all G. Suppose not; then, because Γψ(G) > ΓG (otherwise G would not be defeated, as players would prefer to stay in G), we must have Γφ(G) > Γψ(G) . This implies that if some alternative H with H ⇒ φ(G) is nominated, it must win the primaries; this is easily shown by backward induction. If no such alternative is nominated, then, because there is a player who prefers φ(G) to ψ(G) (any member of φ(G) does), that player would be better off deviating and nominating φ(G). A deviation is not possible in equilibrium, so ψ(G) = φ(G) for all G. By construction of the mapping ψ, this implies that there are no transitions if G = φ(G), and one or more transitions ultimately leading to government φ(G) otherwise. This completes the proof.

The most important result of this theorem is that any acyclic MPE leads to equilibrium transitions given by the same mapping φ, defined in (6), that characterizes political equilibria as defined in Definition 1. This result thus provides further justification for the notion of political equilibrium used in the paper. The hypothesis that r is sufficiently small ensures that sheltered political situations are sufficiently "stable," so that if the government passes a "no-confidence" vote, it stays in power for a nontrivial amount of time. This requirement is important for the existence of an MPE in pure strategies and thus for our characterization of equilibria; it underpins the second requirement in part 2 of Definition 1. Example 7, presented next, illustrates the potential nonexistence of pure-strategy MPEs without this assumption.

EXAMPLE 7. Suppose that the society consists of five individuals 1, 2, 3, 4, 5 (n = 5) and each government consists of two members, so k = 2. 
Suppose also that l = 1 and m = 3, and that competences are given by Γ{i, j} = 30 − min{i, j} − 5 max{i, j}. Moreover, assume that all individuals care a great deal about being in the government, and about competence if they are not in the government; however, when an individual compares the utility of being a member of two different governments, he or she is almost indifferent. In this environment, there are two fixed points of the mapping φ: {1, 2} and {3, 4}.
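For concreteness, the competences in Example 7 can be tabulated directly (a small illustrative script; `gamma` is our name for Γ):

```python
from itertools import combinations

# Tabulate the k = 2 governments of Example 7 with competences
# Γ{i,j} = 30 - min{i,j} - 5 max{i,j}, sorted from best to worst.
gamma = {G: 30 - min(G) - 5 * max(G) for G in combinations(range(1, 6), 2)}
for G in sorted(gamma, key=gamma.get, reverse=True):
    print(G, gamma[G])  # {1,2} ranks first with Γ = 19; Γ{3,4} = 7
```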


Let us show that there is no MPE in pure strategies if v t = u for all t (so that the incumbent government is contested in each period). Suppose that such an equilibrium exists for some protocol ξ . One can easily see that no alternative may win if the incumbent government is {1, 2}: indeed, if in equilibrium there were a transition to some G ≠ {1, 2}, then in the last vote, when {1, 2} is challenged by G, both 1 and 2 would be better off rejecting the alternative, thereby postponing the transition to a government (or chain of governments) that they like less. It is also not hard to show that any of the governments that include 1 or 2 (i.e., {1, 3}, {1, 4}, {1, 5}, {2, 3}, {2, 4}, and {2, 5}) loses the contest for power to {1, 2} in equilibrium. Indeed, if {1, 2} is included in the primaries, it must be the winner (intuitively, this happens because {1, 2} is the Condorcet winner under simple majority voting). Given that, it must always be included in the primaries, for otherwise individual 1 would have a profitable deviation in nominating {1, 2}. We can now conclude that government {3, 4} is stable: any government that includes 1 or 2 immediately leads to {1, 2}, which is undesirable for both 3 and 4, whereas {3, 5} and {4, 5} are worse than {3, 4} for 3 and 4 as well; therefore, if there were some transition in equilibrium, 3 and 4 would be better off staying at {3, 4} for an extra period, which is a profitable deviation. We now consider the governments {3, 5} and {4, 5}. First, we rule out the possibility that from {3, 5} the individuals move to {4, 5} and vice versa. Indeed, if this were the case, then in the last vote when the government is {3, 5} and the alternative is {4, 5}, individuals 1, 2, 3, and 5 would be better off blocking this transition (i.e., postponing it for one period). Hence, either one of the governments {3, 5} and {4, 5} is stable, or one of them leads to {3, 4} in one step or to {1, 2} in two steps. 
We consider these three possibilities for the government {3, 5} and arrive at a contradiction; the case of {4, 5} may be considered similarly and also leads to a contradiction. It is trivial to see that a transition to {1, 2} (in one or two steps) cannot be an equilibrium. If this were the case, then in the last vote, individuals 3 and 5 would block this transition, because they are better off staying in {3, 5} for one more period (even if the intermediate step to {1, 2} is a government that includes either 3 or 5). This is a profitable deviation that cannot happen in an equilibrium. It is also trivial to check


that {3, 5} cannot be stable. Indeed, if it were, then if alternative {3, 4} won the primaries, it would be accepted, as individuals 1, 2, 3, and 4 would support it. At the same time, any alternative that would lead to {1, 2} would not be accepted, and neither would alternative {4, 5}, unless it led to {3, 4}. Because of that, alternative {3, 4} would make its way through the primaries if nominated, for it is better than {3, 5} for a simple majority of individuals. But then {3, 4} must be nominated; individual 4, say, is better off nominating it, because he or she prefers {3, 4} to {3, 5}. Consequently, if {3, 5} were stable, we would obtain a contradiction, because we have just shown that {3, 4} must then be nominated, win the primaries, and take over the incumbent government {3, 5}. The remaining case is the one where from {3, 5} the individuals transit to {3, 4}. Note that in this case, alternative {1, 2} would be accepted if it won the primaries: individuals 1 and 2 prefer {1, 2} over {3, 4} for obvious reasons, but individual 5 is also better off if {1, 2} is accepted, even though rejecting it would grant him an extra period of staying in power (as the discount factor β is close to 1). Similarly, any alternative that would lead to {1, 2} in the next period must also be accepted in the last vote. This implies, however, that such an alternative ({1, 2} or some other one that leads to {1, 2}) must necessarily win the primaries if nominated (by the previous discussion, {4, 5} cannot be a stable government, so the only choice the individuals face is whether to move ultimately to {3, 4} or to {1, 2}, and a majority prefers the latter). This, in turn, means that {1, 2} must be nominated, for otherwise, say, individual 1 would be better off nominating it. Hence, we have reached a contradiction in all possible cases, which proves that no protocol ξ admits an MPE in pure strategies. Thus, neither a cyclic nor an acyclic pure-strategy MPE exists in this environment.

E. 
Cycles, Acyclicity, and Order-Independent Equilibria

The acyclicity requirement in Theorem 4 (similar to the requirement of acyclic political equilibrium in Theorem 1) is not redundant. We next provide an example of a cyclic MPE.

EXAMPLE 8. Consider a society consisting of five individuals (n = 5). The only feasible governments are {1}, {2}, {3}, {4}, {5}. Suppose that there is "perfect democracy," that is, lG = l = 0 for all G ∈ G, and that voting takes the form of simple majority


rule; that is, mG = m = 3 for all G. Suppose also that the competences of the feasible governments are given by Γ{i} = 5 − i, so {1} is the best government. Assume also that stage payoffs are given as in Example 2; in particular, wi (G) = ΓG + 100 · I{i∈G} . These utilities imply that each individual receives a high value from being part of the government relative to the utility he or she receives from government competence. Finally, we define the protocols ξ_G^A as follows. If G = {1} and A = {{2}, {3}, {4}, {5}}, then ξ_{1}^{A} = ({3}, {4}, {5}, {2}); for smaller A, ξ_{1}^{A} is obtained from ({3}, {4}, {5}, {2}) by dropping the governments that are not in A — for example, ξ_{1}^{{2},{3},{5}} = ({3}, {5}, {2}). For the other governments, we define ξ_{2}^{{1},{3},{4},{5}} = ({4}, {5}, {1}, {3}), ξ_{3}^{{1},{2},{4},{5}} = ({5}, {1}, {2}, {4}), ξ_{4}^{{1},{2},{3},{5}} = ({1}, {2}, {3}, {5}), and ξ_{5}^{{1},{2},{3},{4}} = ({2}, {3}, {4}, {1}), and for other A we again define ξ_G^A by dropping the governments absent from A. Then there exists an equilibrium where the governments follow a cycle of the form {5} → {4} → {3} → {2} → {1} → {5} → · · · . To verify this claim, consider the following nomination strategies. If the government is {1}, two individuals nominate {2} and the other three nominate {5}; if it is {2}, two individuals nominate {3} and three nominate {1}; if it is {3}, two nominate {4} and three nominate {2}; if it is {4}, two nominate {5} and three nominate {3}; if it is {5}, two nominate {1} and three nominate {4}. Let us next turn to voting strategies. Here we appeal to Lemma 1 from Acemoglu, Egorov, and Sonin (2008), which shows that in this class of games, it is sufficient to focus on strategies in which individuals always vote for the alternative yielding the highest payoff for them at each stage. In equilibrium, any alternative government that wins the primaries, on or off the equilibrium path, subsequently wins against the incumbent government. In particular, in such an equilibrium, supporting the incumbent government breaks the cycle, but only one person (the member of the incumbent government) is in favor of doing so. We next show that if only one individual deviates at the nomination stage, the next government in the cycle still wins the primaries. Suppose that the current government is {3} (the other cases are treated similarly). Then by


construction, governments {2} and {4} are necessarily nominated, and {1} or {5} may or may not be nominated. If the last vote in the primaries is between {2} and {4}, then {2} wins: indeed, all individuals know that either alternative can take over the incumbent government, but {2} is preferred by individuals 1, 2, and 5 (because they want to be government members earlier rather than later). If, however, the last stage involves voting between {4} on the one hand and either {1} or {5} on the other, then {4} wins for a similar reason. Now, if either {1} or {5} is nominated, then in the first round it is voted against {2}. All individuals know that accepting {2} will ultimately lead to a transition to {2}, whereas supporting {1} or {5} will lead to {4}. Because of that, at least three individuals (1, 2, and 5) will support {2}. This proves that {2} will win against the incumbent government {3}, provided that {2} and {4} participate in the primaries, which is necessarily the case if no more than one individual deviates. This, in turn, implies that the nomination strategies are also optimal, in the sense that no individual has a profitable one-shot deviation. We can easily verify that this holds for the other incumbent governments as well. We have thus proved that the strategies we constructed form an SPE; because they are also Markovian, they form an MPE as well. Along the equilibrium path, the governments follow the cycle {5} → {4} → {3} → {2} → {1} → {5} → · · · . We can similarly construct a cycle that moves in the other direction, {1} → {2} → {3} → {4} → {5} → {1} → · · · (though this would require different protocols). Hence, for some protocols, cyclic equilibria are possible. Intuitively, a cycle enables different individuals who will not be part of the limiting (stable) government to enjoy the benefits of being in power. This example, and the intuition we suggest, also highlight that even when a cyclic equilibrium exists, an acyclic equilibrium still exists. 
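The cyclic path of Example 8 — and, looking ahead, the absorbing path of Example 9 — can be checked mechanically against Definition 4. The sketch below assumes deterministic one-step transitions and is purely illustrative (`is_cyclic` is our name):

```python
def is_cyclic(transition, start, horizon=100):
    # Definition 4 for a deterministic transition map: the path is
    # cyclic if some government is abandoned and later reinstated.
    seen, G = [], start
    for _ in range(horizon):
        H = transition[G]
        if H != G and H in seen:
            return True  # G_{t3} = G_{t1} != G_{t2} has occurred
        seen.append(G)
        G = H
    return False

cycle = {5: 4, 4: 3, 3: 2, 2: 1, 1: 5}      # Example 8's equilibrium path
absorbing = {5: 4, 4: 3, 3: 2, 2: 1, 1: 1}  # Example 9: {1} absorbs
print(is_cyclic(cycle, 5), is_cyclic(absorbing, 5))  # → True False
```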
(This is clear from the statement in Theorem 1, and also from Theorem 5.) Example 8 also makes it clear that cyclic equilibria are somewhat artificial and less robust. Moreover, as emphasized in Theorems 1 and 4, acyclic equilibria have an intuitive and economically meaningful structure. In the text, we showed how certain natural restrictions rule out cyclic political equilibria (Theorem 2). Here we take a complementary approach and show that the refinement of MPE introduced


above, order-independent equilibrium, is also sufficient to rule out cyclic equilibria (even without the conditions in Theorem 2). This is established in Theorem 5 below, which also shows that order-independent MPEs rule out multi-step transitions, which are possible under MPE, as the next example shows.

EXAMPLE 9. Take the setup of Example 8, with the exception that l{1} = 1 (so that the consent of individual 1 is needed to change the government when the government is {1}). It is easy to check that the strategy profile constructed in Example 8 is an MPE in this case as well. However, this strategy profile leads to different equilibrium transitions. Indeed, if the government is {1}, individual 1 will vote against any alternative that wins the primaries, so alternative {5} will not be accepted in equilibrium and government {1} will persist. Hence, in equilibrium, the transitions are {5} → {4} → {3} → {2} → {1}.

We now establish that order-independent equilibria always exist, are always acyclic, and lead to rapid (one-step) equilibrium transitions. As such, this theorem is a strong complement to Theorem 2 in the text, though its proof requires a slightly stronger version of Assumption 3, which we now introduce.

ASSUMPTION 3′. For any i ∈ I and any sequence of feasible governments H1 , H2 , . . . , Hq ∈ G that includes at least two different ones, we have
wi (H1 ) ≠ [Σ_{j=2}^{q} wi (Hj )]/(q − 1).

Recall that Assumption 3 imposed that no two feasible governments have exactly the same competence. Assumption 3′ strengthens this and requires that the competence of any government not be the average of the competences of the other feasible governments. Like Assumption 3, Assumption 3′ is satisfied "generically," in the sense that if it were not satisfied for a society, any small perturbation of competence levels would restore it.

THEOREM 5. Consider the game described above. 
Suppose that Assumptions 1, 2, and 3′ hold, and let φ : G → G be the political equilibrium defined by (6). Then there exists ε > 0 such that
for any β and r satisfying β > 1 − ε and r/(1 − β) < ε, and any protocol ξ ∈ X:

1. There exists an order-independent MPE in pure strategies σ∗.
2. Any order-independent MPE in pure strategies σ∗ is acyclic.
3. In any order-independent MPE σ∗, we have that, if φ(G0) = G0, then there are no transitions and government Gt = G0 for each t; if φ(G0) ≠ G0, then there is a transition from G0 to φ(G0) in period t = 0, and there are no more transitions: Gt = φ(G0) for all t ≥ 1.
4. In any order-independent MPE σ∗, the payoff of each individual i ∈ I is given by

    ui0 = wi(G0) + (β / (1 − β)) wi(φ(G0)).

Proof of Theorem 5. See Online Appendix B.
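The payoff expression in part 4 can be read off part 3 directly: society spends period 0 in G0 and every later period in φ(G0), so (leaving aside the one-time transition cost, which the condition r/(1 − β) < ε keeps negligible) the discounted payoff is a geometric series:

```latex
u_i^0 \;=\; w_i(G_0) \;+\; \sum_{t=1}^{\infty} \beta^t\, w_i\!\big(\phi(G_0)\big)
       \;=\; w_i(G_0) \;+\; \frac{\beta}{1-\beta}\, w_i\!\big(\phi(G_0)\big).
```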

F. Stochastic Markov Perfect Equilibria

We next characterize the structure of (order-independent) stochastic MPE (that is, order-independent MPE in the presence of stochastic shocks) and establish the equivalence between order-independent (or acyclic) MPE and our notion of (acyclic stochastic) political equilibrium. Once again, the most important conclusion from this theorem is that MPE of the dynamic game discussed here under stochastic shocks lead to the same behavior as our notion of stochastic political equilibrium introduced in Definition 2.

THEOREM 6. Consider the above-described stochastic environment. Suppose that Assumptions 1, 2, 3′, and 4 hold. Let φt : G → G be the political equilibrium defined by (6) for Gt. Then there exists ε > 0 such that for any β and r satisfying β > 1 − ε, r/(1 − β) < ε, and δ < ε, and for any protocol ξ ∈ X, we have the following results:

1. There exists an order-independent MPE in pure strategies.
2. Suppose that between periods t1 and t2 there are no shocks. Then in any order-independent MPE in pure strategies, the following results hold: if φ(Gt1) = Gt1, then there are no transitions between t1 and t2; if φ(Gt1) ≠ Gt1, then alternative φ(Gt1) is accepted during the first period of instability (after t1).


Proof of Theorem 6. See Online Appendix B.


G. Additional Examples

The next example (Example 10) shows that in stochastic environments, even though the likelihood of the best government coming to power is higher under more democratic institutions, the expected competence of stable governments may be lower.

EXAMPLE 10. Suppose n = 9, k = 4, l = l1 = 3, m = 5. Let the individuals be denoted 1, 2, . . . , 9, in order of decreasing ability: the abilities of individuals 1, . . . , 8 are given by γi = 2^(8−i), and γ9 = −10^6. Then the 14 stable governments, in order of decreasing competence, are

    {1, 2, 3, 4}  {1, 2, 5, 6}  {1, 2, 7, 8}  {1, 3, 5, 7}  {1, 3, 6, 8}
    {1, 4, 5, 8}  {1, 4, 6, 7}  {2, 3, 5, 8}  {2, 3, 6, 7}  {2, 4, 5, 7}
    {2, 4, 6, 8}  {3, 4, 5, 6}  {3, 4, 7, 8}  {5, 6, 7, 8}

(Note that this would be the list of stable governments for any decreasing sequence {γi}, i = 1, . . . , 9, except that, say, {1, 3, 6, 8} may become less competent than {1, 4, 5, 8}.) Now consider the same parameters, but take l = l2 = 2. Then there are three stable governments: {1, 2, 3, 4}, {1, 5, 6, 7}, and {2, 5, 8, 9}. For a random initial government, the probability that individual 9 will be part of the stable government that evolves is 9/126 = 1/14: of the (9 choose 4) = 126 feasible governments, there are 9 governments that lead to {2, 5, 8, 9}, namely {2, 5, 8, 9}, {2, 6, 8, 9}, {2, 7, 8, 9}, {3, 5, 8, 9}, {3, 6, 8, 9}, {3, 7, 8, 9}, {4, 5, 8, 9}, {4, 6, 8, 9}, and {4, 7, 8, 9}. Clearly, the expected competence of government for l2 = 2 is negative, whereas for l1 = 3 it is positive, as no stable government then includes the least competent individual 9.

Next, we provide an example (Example 11) of a cyclic political equilibrium.

EXAMPLE 11. There are n = 19 players and three feasible governments: A = {1, 2, 3, 4, 5, 6, 7}, B = {7, 8, 9, 10, 11, 12, 13}, and C = {13, 14, 15, 16, 17, 18, 19} (so k¯ = 7). The discount factor
is sufficiently close to 1, say, β > 0.999. The institutional parameters of these governments and players' utilities from them are given in the following table:

    G   lG  mG    1   2   3   4   5   6   7   8   9  10  11  12  13  14  15  16  17  18  19
    A    0  10   90  90  90  90  90  90  90  60  60  30  30  30  30  30  30  30  30  30  45
    B    0  11   60  20  20  20  20  20  80  80  80  80  80  80  80  20  20  20  20  20  20
    C    0  12   10  10  10  10  10  10  10  10  10  10  10  10  70  70  70  70  70  70  70

We claim that φ, given by φ(A) = C, φ(B) = A, φ(C) = B, is a (cyclic) political equilibrium.

Let us first check condition (i) of Definition 1. The set of players with Vi(C) > Vi(A) is {10, 11, 12, 13, 14, 15, 16, 17, 18, 19} (as β is close to 1, the simplest way to check this condition for player i is to verify whether wi(A) is greater or less than the average of wi(A), wi(B), wi(C); the case where these are equal deserves more detailed study, and is critical for this example). These ten players form a winning coalition in A. The set of players with Vi(A) > Vi(B) is {2, 3, 4, 5, 6, 14, 15, 16, 17, 18, 19}; these eleven players form a winning coalition in B. The set of players with Vi(B) > Vi(C) is {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12}; these twelve players form a winning coalition in C.

Let us now check condition (ii) of Definition 1. Suppose the current government is A; then the only H we need to consider is B. Indeed, if H = C, then Vi(H) > Vi(φ(A)) = Vi(C) is impossible for any player, and if H = A, then Vi(H) > Vi(φ(A)) cannot hold for a winning coalition of players, as the opposite inequality Vi(φ(A)) > Vi(A) holds for a winning coalition (condition (i)), and any two winning coalitions intersect in this example. But for H = B, condition (ii) of Definition 1 is also satisfied, as Vi(B) > wi(A)/(1 − β) holds only for players from the set {10, 11, 12, 13, 14, 15, 16, 17, 18}, which is not a winning coalition, as there are only nine players (we used the fact that player 19 has Vi(A) > Vi(B), but Vi(B) > wi(A)/(1 − β) for β close to 1, as 45 is the average of 20 and 70). If the current government is B, then, as before, only government H = C needs to be considered. But Vi(C) > Vi(A) holds for ten players only, and this is not a winning coalition in B. Finally, if the current government is C, then again only the case H = A needs to be checked. But Vi(A) > Vi(B) holds for only eleven players, and this is not a winning coalition in C.
So both conditions of Definition 1 are satisfied, and thus φ is a cyclic political equilibrium.
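The coalition sets above can be verified numerically. Under the cyclic rule φ, continuation values satisfy Vi(G) = wi(G) + βVi(φ(G)), which solves in closed form around the cycle A → C → B → A. The sketch below ignores transition costs and takes the stage payoffs from the table:

```python
# Verify the coalition sets in Example 11 by solving, for each player i, the
# recursion V_i(G) = w_i(G) + beta * V_i(phi(G)) around the cycle A -> C -> B -> A.
# Transition costs are ignored in this sketch; beta > 0.999 as in the example.
BETA = 0.9999

# Stage payoffs (w_i(A), w_i(B), w_i(C)) for players 1..19, from the table.
W = {
    1: (90, 60, 10), 2: (90, 20, 10), 3: (90, 20, 10), 4: (90, 20, 10),
    5: (90, 20, 10), 6: (90, 20, 10), 7: (90, 80, 10), 8: (60, 80, 10),
    9: (60, 80, 10), 10: (30, 80, 10), 11: (30, 80, 10), 12: (30, 80, 10),
    13: (30, 80, 70), 14: (30, 20, 70), 15: (30, 20, 70), 16: (30, 20, 70),
    17: (30, 20, 70), 18: (30, 20, 70), 19: (45, 20, 70),
}

def values(i):
    """Closed-form solution of the cyclic recursion for player i."""
    wa, wb, wc = W[i]
    d = 1 - BETA ** 3
    va = (wa + BETA * wc + BETA ** 2 * wb) / d   # next A is C, then B, ...
    vb = (wb + BETA * wa + BETA ** 2 * wc) / d
    vc = (wc + BETA * wb + BETA ** 2 * wa) / d
    return va, vb, vc

prefer_C_over_A = {i for i in W if values(i)[2] > values(i)[0]}
prefer_A_over_B = {i for i in W if values(i)[0] > values(i)[1]}
prefer_B_over_C = {i for i in W if values(i)[1] > values(i)[2]}

assert prefer_C_over_A == set(range(10, 20))                    # 10 players
assert prefer_A_over_B == {2, 3, 4, 5, 6} | set(range(14, 20))  # 11 players
assert prefer_B_over_C == set(range(1, 13))                     # 12 players
```

The three asserted sets match the winning coalitions claimed for condition (i) above.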


The results in the paper are obtained for β close to 1 (and, in the case with shocks, for r small). If β is close to 0, then an MPE in pure strategies always exists (in the stochastic case, it does for any r). In fact, it is straightforward to show that the equilibrium mapping φ0 takes the following form (again, assuming that governments are enumerated from the best one to the worst one): for all q ≥ 1, define Nq = {j : 1 ≤ j < q, |Gj ∩ Gq| ≥ l}, and then, as in the case with β close to 1, define φ0 by

    φ0(Gq) = Gq              if Nq = ∅;
    φ0(Gq) = Gmin{j ∈ Nq}    if Nq ≠ ∅.

Intuitively, β = 0, or close to 0, corresponds to myopic players, and the equilibrium mapping simply picks the best government that includes at least l members of the current government. Consequently, there is no longer any requirement related to the stability of the new government φ(G), which resulted from dynamic considerations. We now demonstrate (Example 12) that for intermediate values of β the situation is significantly different (and the same argument applies when r is not close to 0): an MPE in pure strategies may fail to exist.

EXAMPLE 12. Set β = 1/2, and suppose n = 5. Assume k = 2 and l = 1. That is, feasible governments consist of two players, and a transition requires the consent of a simple majority of individuals, which must include a member of the incumbent government. For simplicity, restrict the set of feasible governments G to four elements: {1, 2}, {1, 3}, {2, 4}, {3, 4}. Preferences of players over these governments are defined in the table below:

    G         1    2    3    4    5
    {1, 2}   90   90   25   20   50
    {1, 3}   70   10   40   15   40
    {2, 4}   10   70   15   80   10
    {3, 4}    1    1   35   25    1
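The discounted-utility comparisons that drive the nonexistence argument can be checked numerically. The sketch below uses β = 1/2, ignores the small transition cost r, and introduces a helper `path_value` (not part of the paper) for the value of following a given path of governments:

```python
# Discounted-utility comparisons behind Example 12, with beta = 1/2 and the
# small transition cost r ignored. Stage payoffs w_i(G) are from the table.
BETA = 0.5

W = {
    (1, 2): {1: 90, 2: 90, 3: 25, 4: 20, 5: 50},
    (1, 3): {1: 70, 2: 10, 3: 40, 4: 15, 5: 40},
    (2, 4): {1: 10, 2: 70, 3: 15, 4: 80, 5: 10},
    (3, 4): {1: 1, 2: 1, 3: 35, 4: 25, 5: 1},
}

def path_value(i, path):
    """Discounted utility of player i along `path` (one government per
    period), with the last government in the path kept forever."""
    v = sum(BETA ** t * W[g][i] for t, g in enumerate(path[:-1]))
    return v + BETA ** (len(path) - 1) * W[path[-1]][i] / (1 - BETA)

# Player 3 prefers staying in {3,4} forever to reaching {1,2} via {1,3}.
assert path_value(3, [(3, 4)]) > path_value(3, [(1, 3), (1, 2)])   # 70 > 65
# Player 4 prefers reaching {1,2} via {2,4} to staying in {3,4} forever.
assert path_value(4, [(2, 4), (1, 2)]) > path_value(4, [(3, 4)])   # 100 > 50
```

These are exactly the preferences of players 3 and 4 invoked in the argument that follows.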

In this example, 5 is a “dummy player” who always prefers more competent governments because he has no opportunity to become a member of the government. Players 1 and 2 have well-behaved preferences, so governments {1, 3} and {2, 4}
would lead to {1, 2} in one step. Both 1 and 2 value their own membership in the government much more than the quality of governments they are not members of, so transitions from {1, 3} and {2, 4} to these governments will not happen in equilibrium. Player 3 also prefers to stay in {3, 4} forever rather than moving to {1, 2} via {1, 3}. In contrast, player 4 would prefer to transit to {1, 2} through {2, 4} rather than staying in {3, 4} forever. The majority, however, prefers the latter transition to the former one, and these considerations together lead to nonexistence.

To obtain a contradiction, suppose that there exists an MPE in pure strategies that implements some transition rule φ. First, we prove that φ({1, 2}) = φ({1, 3}) = {1, 2}. Then we show that φ({3, 4}) cannot be any of the feasible governments.

It is straightforward that φ({1, 2}) = {1, 2}, as otherwise 1 and 2 would block the last transition, since they prefer to stay in {1, 2} for one more period. Now take government {1, 3}; we prove that φ({1, 3}) = {1, 2}. To show this, notice first that a transition to {2, 4} or {3, 4} would never be approved at the last voting, no matter what the equilibrium transition from {1, 3} is. Indeed, a transition to {2, 4} will ultimately be blocked by 1, 3, and 5, for each of whom a single period in {2, 4} is worse than staying in {1, 3} for one period, no matter what the subsequent path is; this follows from evaluating discounted utilities with β = 1/2. A transition to {3, 4} will be blocked by 1, 2, and 5 for similar reasons. But then it is easy to see that {1, 3} cannot be stable either. Indeed, if alternative {1, 2} is proposed for the primaries, then the ultimate choice that players make is between staying in {1, 3} and transiting to {1, 2}, as transiting to either of the other two governments is not an option: such a transition would be blocked at the last stage. But a transition to {1, 2} is preferred by all players except 3; hence, it will eventually be the outcome of the primaries and will defeat {1, 3} at the election stage. Anticipating this, if {1, 2} is not in the primaries in equilibrium, player 1 is better off proposing {1, 2}, and this is a profitable deviation. This proves that φ({1, 3}) = {1, 2}.

Our next step is to prove that φ({2, 4}) = {1, 2}. A transition to {3, 4} will be blocked at the last stage regardless of the equilibrium transition rule, because both 2 and 4 are worse off, and one incumbent's consent is needed. The same is true of a transition to {1, 3}. Hence, {2, 4} is either stable or must
transition to {1, 2}; an argument similar to the one before shows that {2, 4} cannot be stable, as then alternative {1, 2} would win the primaries and be implemented, and therefore some player will propose it for the primaries. This shows that φ({2, 4}) = {1, 2}.

Finally, consider state {3, 4}. It is easy to see that φ({3, 4}) cannot equal {1, 2}. Moreover, an immediate transition to {1, 2} will not be accepted at the last voting stage regardless of the equilibrium transition, because both 3 and 4 would prefer to stay in {3, 4} for an extra period (indeed, even if in the next period they expect to transition to the state they like least, such as {1, 3} for player 4, staying in {3, 4} is still preferred, as they know that the next transition would be to {1, 2} anyway). Consequently, there are three possibilities.

Consider first the case where {3, 4} is stable. This means that an offer to move to {1, 3} would be blocked by 3 and 4 (both get a lower expected utility from that than from staying in {3, 4}). Hence, the only alternative that may be accepted is {2, 4}, and it will actually be accepted if proposed in the primaries, as it gives a higher discounted utility to players 1, 2, 4, and 5 than staying in {3, 4}. The same reasoning as before shows that it will then be nominated in the primaries, which in turn means that {3, 4} cannot be stable.

Consider next the possibility that φ({3, 4}) = {2, 4}. If this is the case, then alternative {1, 3} would be accepted if it made its way to the final voting. Indeed, for players 1, 3, and 5 (and perhaps even player 2, depending on the value of r), transiting to {1, 3} is preferred to staying in {3, 4}, even if a transition to {2, 4} eventually happens. Hence, if some player nominates {1, 3}, then players will compare transiting to {1, 3}, transiting to {2, 4}, and staying in {3, 4}, and here {1, 3} is the Condorcet winner: it beats staying in {3, 4}, as we just proved, and it beats transiting to {2, 4} because 1, 3, and 5 prefer transiting to {1, 2} via {1, 3} rather than via {2, 4}. Consequently, {1, 3} will be implemented if nominated, and hence some player, say 1, will be better off nominating it. Consequently, φ({3, 4}) = {2, 4} is impossible.

The last possibility is that φ({3, 4}) = {1, 3}. But then, given β = 1/2, both 3 and 4 are better off blocking this transition to {1, 3} at the final voting in order to stay in {3, 4} for at least one more period (and perhaps more, depending on r). This
shows that a transition to {1, 3} cannot happen in equilibrium. This final contradiction shows that there is no MPE in pure strategies in this example.

MASSACHUSETTS INSTITUTE OF TECHNOLOGY AND CANADIAN INSTITUTE FOR ADVANCED RESEARCH
KELLOGG SCHOOL OF MANAGEMENT
NEW ECONOMIC SCHOOL


THE DEVELOPING WORLD IS POORER THAN WE THOUGHT, BUT NO LESS SUCCESSFUL IN THE FIGHT AGAINST POVERTY∗

SHAOHUA CHEN AND MARTIN RAVALLION

A new data set on national poverty lines is combined with new price data and almost 700 household surveys to estimate absolute poverty measures for the developing world. We find that 25% of the population lived in poverty in 2005, as judged by what "poverty" typically means in the world's poorest countries. This is higher than past estimates. Substantial overall progress is still indicated—the corresponding poverty rate was 52% in 1981—but progress was very uneven across regions. The trends over time and the regional profile are robust to various changes in methodology, though precise counts are more sensitive.

I. INTRODUCTION

When the extent of poverty in a given country is assessed, a common (real) poverty line is typically used for all citizens within that country, such that two people with the same standard of living—measured in terms of current purchasing power over commodities—are treated the same way, in that both are either poor or not poor. Similarly, for the purpose of measuring poverty in the world as a whole, a common standard is typically applied across countries. This assumes that a person's poverty status depends on his or her own command over commodities, and not on where he or she lives independently of that.1 In choosing a poverty line for a given country one naturally looks for a line that is considered appropriate for that country, while acknowledging that rich countries tend to have higher real

∗ A great many colleagues at the World Bank helped us in obtaining the necessary data for this paper and answered our many questions. An important acknowledgment goes to the staff of over 100 governmental statistics offices who collected the primary household and price survey data. Our thanks go to Prem Sangraula, Yan Bai, Xiaoyang Li, and Qinghua Zhao for their invaluable help in setting up the data sets we use here. The Bank's Development Data Group helped us with our many questions concerning the 2005 ICP and other data issues; we are particularly grateful to Yuri Dikhanov and Olivier Dupriez. We have also benefited from the comments of François Bourguignon, Gaurav Datt, Angus Deaton, Massoud Karshenas, Aart Kraay, Peter Lanjouw, Rinku Murgai, Ana Revenga, Luis Servén, Merrell Tuck, Dominique van de Walle, Kavita Watsa, the journal's editors, Robert Barro and Larry Katz, and anonymous referees. We are especially grateful to Angus Deaton, whose comments prompted us to provide a more complete explanation of why we obtain a higher global poverty count with the new data.
These are our views and should not be attributed to the World Bank or any affiliated organization. [email protected]; [email protected]

1. For further discussion of this assumption, see Ravallion (2008b) and Ravallion and Chen (2010).

C 2010 by the President and Fellows of Harvard College and the Massachusetts Institute of

Technology. The Quarterly Journal of Economics, November 2010


poverty lines than poor ones. (Goods that are luxuries in rural India, say, are considered absolute necessities in the United States.) There must, however, be some lower bound, because the cost of a nutritionally adequate diet (and even of social needs) cannot fall to zero. Focusing on that lower bound for the purpose of measuring poverty in the world as a whole gives the resulting poverty measure a salience in characterizing "extreme poverty," though higher lines are also needed to obtain a complete picture of the distribution of levels of living. This reasoning led Ravallion, Datt, and van de Walle (RDV) (1991)—in background research for the 1990 World Development Report (World Bank 1990)—to propose two international lines: the lower one was the predicted line for the poorest country, and the higher one was a more typical line amongst low-income countries. The latter became known as the "$1-a-day" line. In 2004, about one in five people in the developing world—close to one billion people—were poor by this standard (Chen and Ravallion 2007).

This paper reports on the most extensive revision yet of the World Bank's estimates of poverty measures for the developing world.2 In the light of a great deal of new data, the paper estimates the global poverty count for 2005 and updates all past estimates back to 1981. New data from three sources make the need for this revision compelling.

The first is the 2005 International Comparison Program (ICP). The price surveys done by the ICP have been the main data source for estimating PPPs, which serve the important role of locating the residents of each country in the "global" distribution. Prior to the present paper, our most recent global poverty measures had been anchored to the 1993 round of the ICP. A better-funded round of the ICP in 2005, managed by the World Bank, took considerable effort to improve the price surveys, including developing clearer product descriptions.
A concern about the 1993 and prior ICP rounds was a lack of clear standards in defining internationally comparable commodities. This is a serious concern in comparing the cost of living between poor countries and rich ones, given that there is likely to be an economic gradient in the quality of commodities consumed and (relatively homogeneous) "name brands" are less common in poor countries. Without strict standards in defining the products to be priced, there is a risk

2. By the "developing world" we mean all low- and middle-income countries—essentially the Part 2 member countries of the World Bank.


that one will underestimate the cost of living in poor countries by confusing quality differences with price differences. The new ICP data imply some dramatic revisions to past estimates, consistent with the view that the old ICP data had underestimated the cost of living in poor countries (World Bank 2008b).

The second data source is a new compilation of poverty lines. The original "$1-a-day" line was based on a compilation of national lines for only 22 developing countries, mostly from academic studies in the 1980s. Although this was the best that could be done at the time, the sample was hardly representative of developing countries even in the 1980s. Since then, national poverty lines have been developed for many other countries. Based on a new compilation of national lines for 75 developing countries provided by Ravallion, Chen, and Sangraula (2009), this paper implements updated international poverty lines, in the spirit of the aim of the original $1-a-day line, namely to measure global poverty by the standards of the poorest countries.

The third data source is the large number of new household surveys now available. We draw on 675 surveys, spanning 115 countries and 1979–2006. (In contrast, the original RDV estimates used 22 surveys, one per country; Chen and Ravallion [2004] used 450 surveys.) Each of our international poverty lines at PPP is converted to local currency in 2005 and then to the prices prevailing at the time of the relevant household survey, using the best available consumer price index (CPI). (Equivalently, the survey data on household consumption or income for the survey year are expressed in the prices of the ICP base year, and then converted to PPP dollars.) The poverty rate is then calculated from that survey. All intertemporal comparisons are real, as assessed using the country-specific CPI. We make estimates at three-year intervals over the years 1981–2005.
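The two-step currency conversion just described, together with the survey/NAS mixing rule noted below, can be sketched as follows. The function names and all numerical values here are hypothetical placeholders, not the paper's estimates:

```python
def poverty_line_lcu(line_ppp, ppp_2005, cpi_survey, cpi_2005):
    """Convert a poverty line in 2005 PPP dollars per day into local
    currency units (LCU) at survey-year prices."""
    line_lcu_2005 = line_ppp * ppp_2005            # step 1: 2005 local prices
    return line_lcu_2005 * cpi_survey / cpi_2005   # step 2: survey-year prices

def mixed_survey_mean(survey_mean, nas_prediction):
    """Geometric-mean posterior (under log-normality with common variance)
    mixing a survey mean with its prediction from national accounts data."""
    return (survey_mean * nas_prediction) ** 0.5

# Hypothetical illustration: a $1.25-a-day line, a PPP of 3.5 LCU per $,
# and a CPI of 80 in the survey year against 100 in the 2005 base year.
line_at_survey = poverty_line_lcu(1.25, 3.5, cpi_survey=80.0, cpi_2005=100.0)
assert abs(line_at_survey - 3.5) < 1e-12          # 1.25 * 3.5 * 0.8 = 3.5
assert mixed_survey_mean(100.0, 144.0) == 120.0   # sqrt(100 * 144)
```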
Interpolation/extrapolation methods are used to line up the survey-based estimates with these reference years, including 2005.

We also present a new method of mixing survey data with national accounts (NAS) data to try to reduce survey-comparability problems. For this purpose, we treat the national accounts data on consumption as the data for predicting a Bayesian prior for the survey mean, and the actual survey as the new information. Under log-normality with a common variance, the mixed posterior estimator is the geometric mean of the survey mean and its predicted value based on the NAS.

These new data call for an upward revision of our past estimates of the extent of poverty in the world, judged by the
standards of the world's poorest countries. The new PPPs imply that the cost of living in poor countries is higher than was thought, implying greater poverty at any given poverty line. Working against this effect, the new PPPs also imply a downward revision of the international value of the national poverty lines in the poorest countries. On top of this, we also find that an upward revision to the national poverty lines is called for, largely reflecting sample biases in the original data set used by RDV. The balance of these data revisions implies a higher count of global poverty by the standards of the world's poorest countries. However, we find that the poverty profile across regions and the overall rate of progress against absolute poverty are fairly robust to these changes, and to other variations on our methodology.

II. PURCHASING POWER PARITY EXCHANGE RATES

International economic comparisons have long recognized that market exchange rates are deceptive, given that some commodities are not traded internationally; these include services but also many goods, including some food staples. Furthermore, there is likely to be a systematic effect, stemming from the fact that low real wages in developing countries entail that nontraded goods tend to be relatively cheap. In the literature, this is known as the "Balassa–Samuelson effect" (Balassa 1964; Samuelson 1964), which is the most widely accepted theoretical explanation for an empirical finding known as the "Penn effect"—that richer countries tend to have higher price indices, as given by the ratios of their PPPs to the market exchange rate.3 Thus GDP comparisons based on market exchange rates tend to understate the real incomes of developing countries. Similarly, market exchange rates overstate the extent of poverty in the world when judged relative to a given US$ poverty line.
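The "Penn effect" price index can be illustrated with a toy calculation; the PPP and exchange-rate values below are hypothetical placeholders, not ICP estimates:

```python
def price_level_index(ppp, market_exchange_rate):
    """National price level with US = 100: the PPP conversion rate divided
    by the market exchange rate, in percent."""
    return 100.0 * ppp / market_exchange_rate

# A poorer country where nontradables are cheap: a PPP of 3.5 LCU per $
# against a market rate of 7.0 LCU per $ gives a price level of 50, so
# consumption converted at the market exchange rate looks half as large
# as it really is -- and measured poverty is correspondingly overstated.
assert price_level_index(3.5, 7.0) == 50.0
```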
Global economic measurement, including poverty measurement, has relied instead on PPPs, which give conversion rates for a given currency with the aim of ensuring parity in terms of purchasing power over commodities, both internationally traded and nontraded. Here we point only to some salient features of the new PPPs relevant to measuring poverty in the developing world.4 We focus on the PPP for

3. The term "Penn effect" stems from the Penn World Tables (Summers and Heston 1991).
4. Broader discussions of PPP methodology can be found in Ackland, Dowrick, and Freyens (2007), World Bank (2008b), Deaton and Heston (2010), and Ravallion (2010).
individual consumption, which we use later in constructing our global poverty measures.5 The 2005 ICP is the most complete and thorough assessment to date of how the cost of living varies across the world, with 146 countries participating.6 The world was divided into six regions (Africa, Asia–Pacific, Commonwealth of Independent States, South America, Western Asia, and Eurostat–OECD), with different product lists for each. The ICP collected primary data on the prices of 600–1,000 goods and services (depending on the region), grouped under 155 "basic headings" corresponding to the expenditure categories in the national accounts; 110 of these relate to household consumption. The price surveys covered a large sample of outlets in each country and were carried out by the government statistics offices in each country, under supervision from regional and World Bank authorities.

The price surveys for the 2005 ICP were done on a more scientific basis than prior rounds. Following the recommendations of the Ryten Report (United Nations 1998), stricter standards were used in defining internationally comparable qualities of the goods. Region-specific detailed product lists and descriptions were developed, involving extensive collaboration among the countries and the relevant regional ICP offices. Without such detailed product descriptions, it is likely that the 1993 ICP used lower qualities of goods in poor countries than would have been found in (say) the U.S. market.7 This is consistent with the findings of Ravallion, Chen, and Sangraula (RCS) (2009), suggesting that a sizable underestimation of the 1993 PPP is implied by the 2005 data. Furthermore, the extent of this underestimation tends to be greater for poorer countries.
The regional PPP estimates were linked through a common set of global prices collected in 18 countries spanning the regions, giving what the ICP calls “ring comparisons.” The design of these ring comparisons was also a marked improvement over past ICP rounds.8 5. This is the PPP for “individual consumption expenditure by households” in World Bank (2008b). It does not include imputed values of government services to households. 6. As compared to 117 in the 1993 ICP; the ICP started in 1968 with PPP estimates for just 10 countries, based on rather crude price surveys. 7. See Ahmad (2003) on the problems in the implementation of the 1993 ICP round. 8. The method of deriving the regional effects is described in Diewert (2008). Also see the discussion in Deaton and Heston (2010).


QUARTERLY JOURNAL OF ECONOMICS

The World Bank uses a multilateral extension of Fisher price indices, known as the EKS method, rather than the Geary–Khamis (GK) method used by the Penn World Tables. The GK method overstates real incomes in poor countries (given that the international prices are quantity-weighted), imparting a downward bias to global poverty measures, as shown by Ackland, Dowrick, and Freyens (2007).9 There were other differences with past ICP rounds, though they were less relevant to poverty measurement.10

Changes in data and methodology are known to confound PPP comparisons across benchmark years (Dalgaard and Sørensen 2002; World Bank 2008a). It can also be argued that poverty comparisons over time for a given country should respect domestic prices.11 We follow standard practice in doing the PPP conversion only once, in 2005, for a given country; all estimates are then revised back in time consistently with the CPI for that country. We acknowledge, however, that the national distributions formed this way may well lose purchasing power comparability as one goes further back in time from the ICP benchmark year.

Some dramatic revisions to past PPPs are implied by the 2005 ICP, not least for the two most populous developing countries, China and India—neither of which actually participated in the price surveys for the 1993 ICP.12 The 1993 consumption PPP used for China (estimated from non-ICP sources) was 1.42 yuan to the US$ in 1993, whereas the new estimate based on the 2005 ICP is 3.46 yuan (4.09 if one excludes government consumption). The corresponding price index level (US$ = 100) went from 25% in 1993 to 52% in 2005. So the Penn effect is still evident, but it has declined markedly relative to past estimates, with a new PPP at about half the market exchange rate rather than one-fourth. Adjusting solely for the differential inflation rates in the United States and China, one would have expected the 2005 PPP 9. Though this problem can be fixed; see Iklé (1972). 
In the 2005 ICP, the Africa region chose to use Iklé’s version of the GK method (African Development Bank 2007). 10. New methods for measuring government compensation and housing were used. Adjustments were also made for the lower average productivity of public sector workers in developing countries (lowering the imputed value of the services derived from public administration, education, and health). 11. Nuxoll (1994) argues that the real growth rates measured in domestic prices better reflect the trade-offs facing decision makers at country level, and thus have a firmer foundation in the economic theory of index numbers. 12. In India’s case, the 1993 PPP was an extrapolation from the 1985 PPP based on CPIs, whereas in China’s case the PPP was based on non-ICP sources and extrapolations using CPIs.


to be 1.80 yuan, not 3.46. Similarly, India’s 1993 consumption PPP was Rs 7.0, whereas the 2005 PPP is Rs 16, and the price level index went from 23% to 35%. If one updated the 1993 PPP for inflation one would have obtained a 2005 PPP of Rs 11 rather than Rs 16.

Although there were many improvements in the 2005 ICP, the new PPPs still have some problems. Four concerns stand out in the present context. First, making the commodity bundles more comparable across countries (within a given region) invariably entails that some of the reference commodities are not typically consumed in certain countries, and prices are then drawn from untypical outlets such as specialist stores, probably at high prices. However, the expenditure weights are only available for the 110 basic headings (corresponding to the national accounts). So the prices for uncommonly consumed goods within a given basic heading may end up getting undue weight. This problem could be avoided by only pricing representative country-specific bundles, but this would reintroduce the quality bias discussed above, which has plagued past ICP rounds. Using region-specific bundles helps get around the problem, though it also arises in the ring comparisons used to compare price levels in different regions.13 Second, there is a problem of “urban bias” in the ICP surveys for some countries; the next section describes our methods of addressing this problem. Third, as was argued in RDV, the weights attached to different commodities in the conventional PPP rate may not be appropriate for the poor; Section VII examines the sensitivity of our results to the use of alternative “PPPs for the poor” available for a subset of countries from Deaton and Dupriez (2009). Fourth, the PPP is a national average. Just as the cost of living tends to be lower in poorer countries, one expects it to be lower in poorer regions within one country, especially in rural areas. 
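The CPI-based extrapolations above (the “expected” 2005 PPP of 1.80 yuan for China, or Rs 11 for India) follow from simple relative-inflation arithmetic. A minimal sketch, in which the inflation factors are hypothetical placeholders chosen for illustration, not the actual U.S., Chinese, or Indian CPI series:

```python
def update_ppp_for_inflation(ppp_base, domestic_factor, us_factor):
    """Extrapolate a base-year consumption PPP (LCU per US$) to a later year
    using relative inflation alone: scale up by the growth in the domestic
    price level and divide by the growth in the U.S. price level."""
    return ppp_base * domestic_factor / us_factor

# Hypothetical illustration: a 1993 PPP of 1.42 LCU per US$, with domestic
# prices rising 61% and U.S. prices rising 27% between 1993 and 2005.
expected_2005 = update_ppp_for_inflation(1.42, 1.61, 1.27)  # ≈ 1.80
```

The gap between such an extrapolation and a PPP estimated directly from new price surveys (3.46 yuan in China's case) is what the text attributes to changes in ICP data and methodology across benchmark years.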
Ravallion, Chen, and Sangraula (2007) have allowed for urban–rural cost-of-living differences facing the poor, and provided an urban–rural breakdown of our prior global poverty measures using the 1993 PPP. We plan to update these estimates in future work.

What do these revisions to past PPPs imply for measures of global extreme poverty? Given that the bulk of the PPPs have risen for developing countries, the poverty count will tend to rise at any given poverty line in PPP dollars. However, the story is more 13. The OECD and Eurostat have used controls for “representativeness” (based on the price survey), following Cuthbert and Cuthbert (1988). This has not been done for developing countries.


complex, given that the same changes in the PPPs alter the (endogenous) international poverty line, which is anchored to the national poverty lines in the poorest countries in local currency units. Next we turn to the poverty lines, and then the household surveys, after which we will be able to put the various data together to see what they suggest about the extent of poverty in the world.

III. NATIONAL AND INTERNATIONAL POVERTY LINES

We use a range of international lines, representative of the national lines found in the world’s poorest countries. For this purpose, RCS compiled a new set of national poverty lines for developing countries drawn from the World Bank’s country-specific Poverty Assessments (PAs) and the Poverty Reduction Strategy Papers (PRSP) done by the governments of the countries concerned. These documents provide a rich source of data on poverty at the country level, and almost all include estimates of national poverty lines. The RCS data set was compiled from the most recent PAs and PRSPs over the years 1988–2005. In the source documents, each poverty line is given in the prices for a specific survey year (for which the subsequent poverty measures are calculated). In most cases, the poverty line was also calculated from the same survey (though there are some exceptions, for which preexisting national poverty lines, calibrated to a prior survey, were updated using the consumer price index). About 80% of these reports used a version of the “cost of basic needs” method in which the food component of the poverty line is the expenditure needed to purchase a food bundle specific to each country that yields a stipulated food energy requirement.14 To this is added an allowance for nonfood spending, which is typically anchored to the nonfood spending of people whose food spending, or sometimes total spending, is near the food poverty line.

There are some notable differences between the old (RDV) and new (RCS) data sets on national poverty lines. 
The RDV data were for the 1980s (with a mean year of 1984), whereas the new and larger compilation in RCS is post-1990 (mean of 1999); in no case do the proximate sources overlap. The RCS data cover 75 developing countries, whereas the earlier data included only 22. The 14. This method, and alternatives, are discussed in detail in Ravallion (1994, 2008c).


RDV data set used rural poverty lines when there was a choice, whereas the RCS data set estimated national average lines. And the RDV data set was unrepresentative of the poorest region, Sub-Saharan Africa (SSA), with only four countries from that region (Burundi, South Africa, Tanzania, and Zambia), whereas the RCS data set has a good spread across regions. The sample bias in the RDV data set was unavoidable at the time (1990), but it can now be corrected.

Although there are similarities across countries in how poverty lines are set, there is considerable scope for discretion. National poverty lines must be considered socially relevant in the specific country.15 If a proposed poverty line is widely seen as too frugal by the standards of a society, then it will surely be rejected. Nor will a line that is too generous be easily accepted. The stipulated food-energy requirements are similar across countries, but the food bundles that yield a given nutritional intake can vary enormously (as in the share of calories from coarse starchy staples rather than more processed food grains, and the share from meat and fish). The nonfood components also vary. The judgments made in setting the various parameters of a poverty line are likely to reflect prevailing notions of what poverty means in each country. There must be a lower bound to the cost of the nutritional requirements for any given level of activity (with the basal metabolic rate defining an absolute lower bound). The cost of the (food and nonfood) goods needed for social needs must also be bounded below (as argued by Ravallion and Chen [2010]).

The poverty lines found in many poor countries are certainly frugal. For example, the World Bank (1997) gives the average daily food bundle consumed by someone living in the neighborhood of India’s national poverty line in 1993. 
The daily food bundle comprised 400 g of coarse rice and wheat and 200 g of vegetables, pulses, and fruit, plus modest amounts of milk, eggs, edible oil, spices, and tea. After buying such a food bundle, one would have about $0.30 left (at 1993 PPP) for nonfood items. India’s official line is frugal by international standards, even among low-income countries (Ravallion 2008a). To give another example, the daily food bundle used by Bidani and Ravallion (1993) to construct Indonesia’s poverty line comprises 300 g of rice, 100 g of tubers, and amounts of vegetables, 15. This is no less true of the poverty lines constructed for World Bank Poverty Assessments, which emerge out of close collaboration between the technical team (often including local statistical staff and academics) and the government of the country concerned.


FIGURE I
National Poverty Lines Plotted against Mean Consumption at 2005 PPP
Bold symbols are fitted values from a nonparametric regression.

fruits, and spices similar to those in the India example but also includes fish and meat (about 140 g in all per day). Such poverty lines are clearly too low to be acceptable in rich countries, where much higher overall living standards mean that higher standards are also used for identifying the poor. For example, the U.S. official poverty line in 2005 for a family of four was $13 per person per day (http://aspe.hhs.gov/poverty/05poverty.shtml). Similarly, we can expect middle-income countries to have higher poverty lines than low-income countries. The expected pattern in how national poverty lines vary is confirmed by Figure I, which plots the poverty lines compiled by RCS in 2005 PPP dollars against log household consumption per capita, also in 2005 PPP dollars, for the 74 countries with complete data. The figure gives a nonparametric regression of the national poverty lines against log mean consumption. Above a certain point, the poverty line rises with mean consumption. The overall elasticity of the poverty line to mean consumption is about 0.7. However, the slope is essentially zero among the poorest 20 or so countries, where absolute poverty clearly dominates. The gradient evident in Figure I is driven more by the nonfood component of


FIGURE II
Comparison of New and Old National Poverty Lines at 1993 PPP
Bold symbols are fitted values from a nonparametric regression.

the poverty lines (which accounts for about 60% of the overall elasticity) than the food component, although there is still an appreciable share attributable to the gradient in food poverty lines (RCS).

To help see how this new compilation of national poverty lines compares to those used to set the original “$1-a-day” line, Figure II gives both the RCS and RDV lines evaluated at 1993 prices and converted to dollars using the 1993 PPPs; both sets of national poverty lines are plotted against consumption per capita at 1993 PPP. The relationship between the RCS national poverty lines and consumption per capita (at 1993 PPP) looks similar to Figure I, although the 1993 PPPs suggest a slightly steeper gradient amongst the poorest countries. But the more important observation from Figure II is that the RDV lines are lower at given mean consumption; the absolute gap diminishes as consumption falls, but still persists among the poorest countries. For the poorest fifteen countries ranked by consumption per capita at 1993 PPP, the mean poverty line in the RCS data set is $43.92 ($1.44 a day)16 versus $33.51 ($1.10 a day) using the old (RDV) series 16. Note that this is at 1993 PPP; $1.44 in 1993 prices represents $1.95 a day at 2005 U.S. prices.


for eight countries with consumption below the upper bound of consumption for those fifteen countries. The RCS sample is more recent, and possibly there has been some upward drift in national poverty lines over time, although that does not seem very likely given that few growing developing countries have seen an upward revision to their poverty lines, which can be politically difficult. (Upward revisions have a long cycle; for example, China and India are only now revising upward their official poverty lines, which stood for 30–40 years.) The other differences in the two samples noted above may well be more important in explaining the upward shift seen in Figure II in moving from the RDV to RCS samples. For example, there is some evidence that poverty lines for SSA tend to be higher than for countries at similar mean consumption levels (RCS), and (as noted above) SSA was underrepresented in the original RDV data set of national poverty lines.17 We use five international poverty lines at 2005 PPP: (i) $1.00 a day, which is very close to India’s national poverty line;18 (ii) $1.25, which is the mean poverty line for the poorest fifteen countries;19 (iii) $1.45, obtained by updating the 1993 $1.08 line used by Chen and Ravallion (2001, 2004, 2007) for inflation in the United States; (iv) $2.00, which is the median of the RCS sample of national poverty lines for developing and transition economies and is also approximately the line obtained by updating the $1.45 line at 1993 PPP for inflation in the United States; and (v) $2.50, twice the $1.25 line, which is also the median poverty line of all except the poorest fifteen countries in the RCS data set of national poverty lines. The range from $1.00 to $1.45 is roughly the 95% confidence 17. The residuals in Figure I are about $0.44 per day higher for SSA on average, with a standard error of $0.27. 18. India’s official poverty lines for 2004/2005 were Rs 17.71 and Rs 11.71 per day for urban and rural areas. 
Using our urban and rural PPPs for 2005, these represent $1.03 per day (Ravallion 2008a). An Expert Group constituted by the Planning Commission (2009) has recently recommended a higher rural poverty line, although retaining the prior official line for urban areas. The implied new national line is equivalent to $1.17 per day for 2005 when evaluated at our implicit urban and rural PPPs. Note that the Expert Group does not claim that the higher line is a “relative poverty” effect, but rather that it corrects for claimed biases in past price deflators. 19. The fifteen countries are mostly in SSA and comprise Malawi, Mali, Ethiopia, Sierra Leone, Niger, Uganda, Gambia, Rwanda, Guinea-Bissau, Tanzania, Tajikistan, Mozambique, Chad, Nepal, and Ghana. Their median poverty line is $1.27 per day. Note that this is a set of reference countries different from those used by RDV. Deaton (2010) questions this change in the set of reference countries. However, it would be hard to justify keeping the reference group fixed over time, given what we now know about the bias in the original RDV sample of national lines.


interval for the mean poverty line for the poorest fifteen countries (RCS). To test the robustness of qualitative comparisons, we also estimate the cumulative distribution functions (CDFs) up to a maximum poverty line, which we set at the U.S. line of $13 per day.20 Although we present results for multiple poverty lines, we consider the $1.25 line the closest in spirit to the original idea of the “$1-a-day” line.

The use of the poorest fifteen countries as the reference group has a strong rationale. The relationship between the national poverty lines and consumption per person can be modeled very well (in terms of goodness of fit) by a piecewise linear function that has zero slope up to some critical level of consumption, and rises above that point. The econometric tests reported in RCS imply that national poverty lines tend to rise with consumption per person when it exceeds about $2 per day, which is very near the upper bound of the consumption levels found among these fifteen countries.21 Of course, there is still variance in the national poverty lines at any given mean, including among the poorest countries; RCS estimate the robust standard error of the $1.25 line to be $0.10 per day.

We use the same PPPs to convert the international lines to local currency units (LCUs). Three countries were treated differently: China, India, and Indonesia. In all three we used separate urban and rural distributions. For China, the ICP survey was confined to 11 cities, and the evidence suggests that the cost of living is lower for the poor in rural areas (Chen and Ravallion 2010). We treat the ICP PPP as an urban PPP for China and use the ratio of urban to rural national poverty lines to derive the corresponding rural poverty line in local currency units. For India, the ICP included rural areas, but they were underrepresented. We derived urban and rural poverty lines consistent with both the urban–rural differential in the national poverty lines and the relevant 20. 
First-order dominance up to a poverty line of zmax implies that all standard (additively separable) poverty measures rank the distributions identically for all poverty lines up to zmax; see Atkinson (1987). (When CDFs intersect, unambiguous rankings may still be possible for a subset of poverty measures.) 21. RCS use a suitably constrained version of Hansen’s (2000) method for estimating a piecewise linear (“threshold”) model. (The constraint is that the slope of the lower linear segment must be zero and there is no potential discontinuity at the threshold.) This method gave an absolute poverty line of $1.23 (t = 6.36) and a threshold level of consumption (above which the poverty line rises linearly) very close to the $60 per month figure used to define the reference group. Ravallion and Chen (2010) use this piecewise linear function in measuring “weakly relative poverty” in developing countries.


features of the design of the ICP samples for India; further details can be found in Ravallion (2008a). For Indonesia, we converted the international poverty line to LCUs using the official consumption PPP from the 2005 ICP. We then unpack that poverty line to derive implicit urban and rural lines that are consistent with the ratio of the national urban-to-rural lines for Indonesia.

IV. HOUSEHOLD SURVEYS AND POVERTY MEASURES

We have estimated all poverty measures ourselves from the primary sample survey data, rather than relying on preexisting poverty or inequality measures of uncertain comparability. The primary data come in various forms, ranging from micro data (the most common) to specially designed grouped tabulations from the raw data, constructed following our guidelines.22 All our previous estimates have been updated to ensure internal consistency. We draw on 675 nationally representative surveys for 115 countries.23 Taking the most recent survey for each country, about 1.23 million households were interviewed in the surveys used for our 2005 estimate. The surveys were mostly done by governmental statistics offices as part of their routine operations. Not all available surveys were included; a survey was dropped if there were known to be serious problems of comparability with the rest of the data set.24

IV.A. Poverty Measures

Following past practice, poverty is assessed using household expenditure on consumption per capita or household income per capita as measured from the national sample surveys.25 Households are ranked by consumption (or income) per person. 22. In the latter case we use parametric Lorenz curves to fit the distributions. These provide a more flexible functional form than the log-normality assumption used by (inter alia) Bourguignon and Morrisson (2002) and Pinkovskiy and Sala-i-Martin (2009). 
Log-normality is a questionable approximation; the tests reported in Lopez and Servén (2006) reject log-normality of consumption, though it performs better for income. Note also that the past papers in the literature have applied log-normality to distributional data for developing countries that have already been generated by our own parametric Lorenz curves, as provided in the World Bank’s World Development Indicators. This overfitting makes the fit of the log-normal distribution to these secondary data look deceptively good. 23. A full listing is found in Chen and Ravallion (2009). 24. Also, we have not used surveys for 2006 or 2007 when we already have a survey for 2005—the latest year for which we provide estimates in this paper. 25. The use of a “per capita” normalization is standard in the literature on developing countries. This stems from the general presumption that there is rather little scope for economies of size in consumption for poor people. However, that assumption can be questioned; see Lanjouw and Ravallion (1995).


The distributions are weighted by household size and sample expansion factors. Thus our poverty counts give the number of people living in households with per capita consumption or income below the international poverty line. When there is a choice we use consumption rather than income, in the expectation that consumption is the better measure of current economic welfare.26 Although intertemporal credit and risk markets do not appear to work perfectly, even poor households have opportunities for saving and dissaving, which they can use to protect their living standards from income fluctuations, which can be particularly large in poor agrarian economies. A fall in income due to a crop failure in one year does not necessarily mean destitution. There is also the (long-standing) concern that measuring economic welfare by income entails double counting over time; saving (or investment) is counted initially in income and then again when one receives the returns from that saving. Consumption is also thought to be measured more accurately than income, especially in developing countries. Of the 675 surveys, 417 allow us to estimate the distribution of consumption; this is true of all the surveys used in the Middle East and North Africa (MENA), South Asia, and SSA, although income surveys are more common in Latin America. The measures of consumption (or income, when consumption is unavailable) in our survey data set are reasonably comprehensive, including both cash spending and imputed values for consumption from own production. But we acknowledge that even the best consumption data need not adequately reflect certain “nonmarket” dimensions of welfare, such as access to certain public services, or intrahousehold inequalities. 
Furthermore, with the expansion in government spending on basic education and health in developing countries, it can be argued that the omission of the imputed values for these services from survey-based consumption aggregates will understate the rate of poverty reduction. How much so is unclear, particularly in the light of mounting evidence from micro studies on absenteeism of public teachers and healthcare workers in a number of developing countries.27 However, 26. See Ravallion (1994), Slesnick (1998), and Deaton and Zaidi (2002). Consumption may also be a better measure of long-term welfare, though this is less obvious (Chaudhuri and Ravallion 1994). 27. See Chaudhury et al. (2006). Based on such evidence, Deaton and Heston (2010, p. 44) remark that “To count the salaries of AWOL government employees as ‘actual’ benefits to consumers adds statistical insult to original injury.”


there have clearly been some benefits to poor people from higher public spending on these services. Our sensitivity tests in Section VII, in which we mix survey means with NAS consumption aggregates (which, in principle, should include the value of government services to households), will help address this concern. These and other limitations of consumption as a welfare metric also suggest that our poverty measures need to be supplemented by other data, such as on education attainments and infant and child mortality, to obtain a complete picture of how living standards are evolving.

We use standard poverty measures for which the aggregate measure is the (population-weighted) sum of individual measures. In this paper we report three such poverty measures.28 The first measure is the headcount index given by the percentage of the population living in households with consumption or income per person below the poverty line. We also give estimates of the number of poor, as obtained by applying the estimated headcount index to the population of each region under the assumption that the countries without surveys are a random subsample of the region. Our third measure is the poverty gap index, which is the mean distance below the poverty line as a proportion of the line, where the mean is taken over the whole population, counting the nonpoor as having zero poverty gaps.

Having converted the international poverty line at PPP to local currency in 2005, we convert it to the prices prevailing at each survey date using the most appropriate available country-specific CPI.29 The weights in this index may or may not accord well with consumer budget shares at the poverty line. In periods of relative price shifts, this will bias our comparisons of the incidence of poverty over time, depending on the extent of (utility-compensated) substitution possibilities for people at the poverty line. 
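The headcount and poverty gap measures just described can be sketched as follows. The function and variable names are ours; the weights stand in for household size times the sample expansion factor, and the toy numbers are purely illustrative:

```python
def poverty_measures(consumption, weights, z):
    """Weighted headcount index (H) and poverty gap index (PG).

    consumption: per capita consumption (or income) by household
    weights: household size times the sample expansion factor
    z: poverty line in the same prices and currency as consumption

    As in the text, the nonpoor contribute zero poverty gaps."""
    total = sum(weights)
    # H: weighted share of people below the line.
    h = sum(w for c, w in zip(consumption, weights) if c < z) / total
    # PG: mean proportionate distance below the line over the whole population.
    pg = sum(w * (z - c) / z for c, w in zip(consumption, weights) if c < z) / total
    return h, pg

# Toy data: four households, poverty line z = 1.25.
h, pg = poverty_measures([0.75, 1.00, 2.00, 3.00], [5, 4, 3, 2], 1.25)
# h = 9/14 ≈ 0.643 (9 of 14 people are poor); pg = (5*0.4 + 4*0.2)/14 = 0.20
```

In practice the line z would first be converted from 2005 PPP dollars to local currency and then deflated or inflated to the survey date with the country-specific CPI, as the text describes.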
In the aggregate, 90% of the population of the developing world is represented by surveys within two years of 2005.30 Survey coverage by region varies from 74% of the population of the MENA 28. The website we have created to allow replication of these estimates, PovcalNet, provides a wider range of measures from the literature on poverty measurement. 29. Note that the same poverty line is generally used for urban and rural areas. There are three exceptions, China, India, and Indonesia, where we estimate poverty measures separately for urban and rural areas and use sector-specific CPIs. 30. Some countries have graduated from the set of developing countries; we apply the same definition over time to avoid selection bias. In this paper our definition is anchored to 2005.


to 98% of the population of South Asia. Some countries have more surveys than others; for the 115 countries, 14 have only one survey, 17 have two, and 14 have three, whereas 70 have four or more over the period, of which 23 have 10 or more surveys. Naturally, the further back we go, the smaller the number of surveys—reflecting the expansion in household survey data collection for developing countries since the 1980s. Because the PPP conversion is only done in 2005, estimates may well become less reliable earlier in time, depending on the quality of the national CPIs. Coverage also deteriorates in the last year or two of the series, given the lags in survey processing. We made the judgment that there were too few surveys to include years prior to 1981 or after 2005. The working paper version (Chen and Ravallion 2009) gives further details, including the number of surveys by year, the lags in survey availability, and the proportion of the population represented by surveys by year. Most regions are quite well covered from the latter half of the 1980s (East and South Asia being well covered from 1981 onward).31 Unsurprisingly, we have weak coverage in Eastern Europe and Central Asia (EECA) for the 1980s; many of these countries did not officially exist then, so we have to rely heavily on back projections. More worrying is the weak coverage for SSA in the 1980s; indeed, our estimates for the early 1980s rely heavily on projections based on distributions around 1990.

IV.B. Heterogeneity and Measurement Errors in Surveys

Survey instruments differ between countries, including how the questions are asked (such as recall periods), response rates, whether the surveys are used to measure consumption or income, and what gets included in the survey’s aggregate for consumption or income. These differences are known to matter to the statistics calculated from surveys, including poverty and inequality measures. 
It is questionable whether survey instruments should be identical across countries; some adaptation to local circumstances may well make the results more comparable even though the surveys differ. Nonetheless, the heterogeneity is a concern. The literature on measuring global poverty and inequality has dealt with this concern in two ways. The first makes an effort to iron out obvious comparability problems using the micro data, 31. China’s survey data for the early 1980s are probably less reliable than in later years, as discussed in Chen and Ravallion (2004), where we also describe our methods of adjusting for certain comparability problems in the China data, including changes in valuation methods.


either by reestimating the consumption/income aggregates or by the more radical step of dropping a survey. It is expected that aggregation across surveys will help reduce the problem. But beyond this, the problem is essentially ignored. This is the approach we have taken in the past, and for our benchmark estimates below. We call this the “survey-based method.” The second approach rescales the survey means to be consistent with the national accounts (NAS) but assumes that the surveys get the relative distribution (“inequality”) right. Thus all levels of consumption or income in the survey are multiplied by the ratio of the per capita NAS aggregate (consumption or GDP) to the survey mean.32 We can call this the “rescaling method.”

The choice depends in part on the data and application. The first method is far more data-intensive, as it requires the primary data, which rules it out for historical purposes (indeed, for estimates much before 1980). For example, Bourguignon and Morrisson (2002) had no choice but to use the rescaling method, given that they had to rely on secondary sources (notably prior inequality statistics) to estimate aggregate poverty and inequality measures back to 1820.

Arguments can also be made for and against each approach. It is claimed by proponents of the rescaling method that it corrects for survey mismeasurement. In this view, NAS consumption is more accurate because it captures things that are often missing from surveys, such as imputed rents for owner-occupied housing and government-provided services to households. Although this is true in principle, compliance with the UN Statistical Division’s System of National Accounts (SNA) is uneven across countries in practice. Most developing countries still have not fully implemented SNA guidelines, including those for estimating consumption, which is typically calculated residually at the commodity level. 
In this and other respects (including how output is measured), the NAS is of questionable reliability in many low-income countries.33 Given how consumption is estimated in practice in the NAS of most low-income countries, we would be loath to assume it is more accurate than a well-designed survey.

32. In one version of this method, Bhalla (2002) replaces the survey mean by consumption from the NAS. Bourguignon and Morrisson (2002), Sala-i-Martin (2006), and Pinkovskiy and Sala-i-Martin (2009) instead anchor their measures to GDP per capita rather than to consumption.

33. As Deaton and Heston (2010, p. 5) put it, “The national income accounts of many low-income countries remain very weak, with procedures that have sometimes not been updated for decades.”
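The rescaling step itself is mechanical. A minimal sketch of the “rescaling method” (the survey observations and NAS mean below are made up for illustration; this is not our estimation code):

```python
def rescale_to_nas(survey_values, nas_mean_per_capita):
    """Scale every survey observation by the ratio of the NAS per capita
    aggregate to the survey mean; the relative distribution is untouched."""
    survey_mean = sum(survey_values) / len(survey_values)
    ratio = nas_mean_per_capita / survey_mean
    return [y * ratio for y in survey_values]

# Hypothetical survey of daily consumption ($ per person); mean = 2.0
survey = [0.8, 1.0, 1.5, 2.5, 4.2]
rescaled = rescale_to_nas(survey, nas_mean_per_capita=2.5)
# The rescaled mean now equals the NAS aggregate (2.5), while any
# scale-invariant inequality measure (e.g., the Gini index) is unchanged.
```

By construction, this corrects the mean to match the national accounts while assuming the survey gets inequality right, which is precisely the assumption questioned below.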

POVERTY IN THE DEVELOPING WORLD

1595

Proponents of the survey-based method acknowledge that there are survey measurement errors but question the assumptions of the rescaling method that the gaps between the survey means and NAS aggregates are due solely to underestimation in the surveys and that the measurement errors are distribution-neutral, such that the surveys get inequality right. The discrepancy between the two data sources reflects many factors, including differences in what is included.34 Selective compliance with the randomized assignment in a survey and underreporting also play a role. Survey statisticians do not generally take the view that nonsampling errors affect only the mean and not inequality. More plausibly, underestimation of the mean by surveys due to selective compliance comes with underestimation of inequality.35 For instance, high-income households might be less likely to participate because of the high opportunity cost of their time or concerns about intrusion in their affairs.36 Naturally, evidence on this is scarce, but in one study of compliance with the “long form” of the U.S. Census, Groves and Couper (1998, Chapter 5) found that higher socioeconomic status tended to be associated with lower compliance. Estimates by Korinek, Mistiaen, and Ravallion (2007) of the micro compliance function (the individual probability of participating in a survey as a function of own income) for the Current Population Survey in the United States suggest a steep economic gradient, with very high compliance rates for the poor, falling to barely 50% for the rich. Korinek, Mistiaen, and Ravallion (2006) examine the implications of selective compliance for inequality and poverty measurement and find little bias in the poverty measures but sizable underestimation of inequality in the United States. In other words, their results suggest that the surveys underestimate both the mean and inequality but get poverty roughly right;

34. For example, NAS private consumption includes imputed rents for owner-occupied housing, imputed services from financial intermediaries, and the expenditures of nonprofit organizations; none of these are included in consumption aggregates from standard household surveys. Surveys, on the other hand, are probably better at picking up consumption from informal-sector activities. For further discussion, see Ravallion (2003) and Deaton (2005). In the specific case of India (with one of the largest gaps between the survey-based estimate of mean consumption and that from the NAS), see Central Statistical Organization (2008).

35. Although the qualitative implications for an inequality measure of even a monotonic income effect on compliance are theoretically ambiguous (Korinek, Mistiaen, and Ravallion 2006).

36. Groves and Couper (1998) provide a useful overview of the arguments and evidence on the factors influencing survey compliance.


replacing the survey mean with consumption from the NAS would underestimate poverty. This argument may be less compelling for some other sources of divergence between the survey mean and NAS consumption per person. Suppose, for example, that the surveys exclude imputed rent for owner-occupied housing (practices are uneven in how this is treated) and that this rent is a constant proportion of expenditure. Then the surveys get inequality right but the mean wrong. Similarly, the private consumption aggregate in the NAS should include government expenditures on services consumed by households, which are rarely valued in surveys. Of course, it is questionable whether these items can be treated as a constant proportion of expenditure. The implications of measurement errors also depend on how the poverty line is set. Here it is important to note that the underlying national poverty lines were largely calibrated to the surveys. Measurement errors will be passed on to the poverty lines in a way that attenuates the bias in the final measure of poverty. By the most common methods of setting poverty lines, underestimation of nonfood spending in the surveys will lead to underestimation of the poverty line, which is anchored to the spending of sampled households living near the food poverty line (or with food-energy intakes near the recommended norms). Correcting for underestimation of nonfood spending in surveys would then require higher poverty lines. Poverty measures based on these poverty lines will thus be more robust to survey measurement errors than would be the case if the line were set independently of the surveys.

IV.C. A Mixed Method

Arguably the more important concern here is the heterogeneity of surveys, given that the level of the poverty line is always somewhat arbitrary. In an interesting variation on the rescaling method, Karshenas (2003) replaces the survey mean by its predicted value from a regression on NAS consumption per capita.
So Karshenas uses a stable linear function of NAS consumption, with mean equal to the overall mean of the survey means. This assumes that national accounts consumption data are comparable and ignores the country-specific information on the levels in surveys. As noted above, that is a questionable assumption. However, unlike other examples of rescaling methods, Karshenas assumes that the surveys are correct on average and focuses instead on the


problem of survey comparability, for which purpose the poverty measures are anchored to the national accounts data. Where we depart from the Karshenas method is that we do not ignore the country-specific survey means. When one has two less-than-ideal measures of roughly the same thing, it is natural to combine them. For virtually all developing countries, surveys are far less frequent than NAS data. Because one is measuring poverty at the survey date, the survey can be thought of as providing the Bayesian posterior estimate, whereas NAS consumption provides the Bayesian prior. A result from Bayesian statistics then provides an interpretation of a mixing parameter under the assumption that consumption is log-normally distributed, with a common variance in the prior distribution as in the new survey data. That assumption is unlikely to hold; log-normality of consumption can be rejected statistically (López and Servén 2006), and (as noted) it is unlikely that the prior based on the NAS would have the same relative distribution as the survey. However, the assumption does at least offer a clear conceptual foundation for a sensitivity test, given the likely heterogeneity in surveys. In particular, it can then be shown readily that if the prior is the expected value of the survey mean, conditional on national accounts consumption, and consumption is log-normally distributed with a common variance, then the posterior estimate is the geometric mean of the survey mean and its expected value.37 Over time, the relevant growth rate is the (arithmetic) mean of the growth rates from the two data sources.
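Under these assumptions the mixed estimate is easy to compute. A sketch (the numbers are hypothetical, and the regression step producing the predicted mean is omitted):

```python
import math

def mixed_mean(survey_mean, predicted_mean):
    """Posterior estimate under the log-normal common-variance assumption:
    the geometric mean of the survey mean and its expected value
    (the latter predicted from NAS consumption)."""
    return math.sqrt(survey_mean * predicted_mean)

def mixed_growth(survey_growth, nas_growth):
    """Over time, the implied growth rate is the arithmetic mean of the
    growth rates from the two data sources."""
    return 0.5 * (survey_growth + nas_growth)

# Hypothetical country: survey mean $2.00/day; the regression on NAS
# consumption predicts $2.42/day for a country at that NAS level.
m = mixed_mean(2.00, 2.42)    # geometric mean, about 2.2
g = mixed_growth(0.03, 0.05)  # about 0.04
```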
V. BENCHMARK ESTIMATES

We report aggregate results for nine “benchmark years,” at three-yearly intervals over 1981–2005, for the regions of the developing world and (given their populations) China and India.38 Jointly with this paper, we have updated the PovcalNet website to provide public access to the underlying country-level data set, so that users can replicate these calculations and try different assumptions, including different poverty measures, poverty lines, and country groupings, as well as derive estimates for individual countries. The PovcalNet site will also provide updates as new data come in.

37. The working paper version (Chen and Ravallion 2009) provides a proof.

38. Chen and Ravallion (2004) describe our interpolation and projection methods for dealing with the fact that national survey years differ from our benchmark years.

TABLE I
HEADCOUNT INDICES OF POVERTY (% BELOW EACH LINE)

         1981   1984   1987   1990   1993   1996   1999   2002   2005
(a) Aggregate for developing world
$1.00    41.4   34.4   29.8   29.5   27.0   23.1   22.8   20.3   16.1
$1.25    51.8   46.6   41.8   41.6   39.1   34.4   33.7   30.6   25.2
$1.45    58.4   54.4   49.9   49.4   47.2   42.6   41.6   38.1   32.1
$2.00    69.2   67.4   64.2   63.2   61.5   58.2   57.1   53.3   47.0
$2.50    74.6   73.7   71.6   70.4   69.2   67.2   65.9   62.4   56.6
(b) Excluding China
$1.00    29.4   27.6   26.9   24.4   23.3   22.9   22.3   20.7   18.6
$1.25    39.8   38.3   37.5   35.0   34.1   33.8   33.1   31.3   28.2
$1.45    46.6   45.5   44.5   42.3   41.6   41.4   40.8   38.9   37.0
$2.00    58.6   58.1   57.2   55.6   55.6   55.9   55.6   54.0   50.3
$2.50    65.9   66.7   67.3   65.4   66.0   67.9   67.4   66.0   62.9

Note. The headcount index is the percentage of the relevant population living in households with consumption per person below the poverty line.

V.A. Aggregate Measures

Table I gives our new estimates for a range of lines from $1.00 to $2.50 in 2005 prices. Table II gives the corresponding counts of the number of poor. We calculate the global aggregates under the assumption that the countries without surveys have the poverty rates of their region. The following discussion will focus mainly on the $1.25 line, though we test the robustness of our qualitative poverty comparisons to that choice. We find that the percentage of the population of the developing world living below $1.25 per day was halved over the 25-year period, falling from 52% to 25% (Table I). (Expressed as a proportion of the population of the world, the decline is from 42% to 21%; this assumes that there is nobody living below $1.25 per day in the developed countries.39) The number of poor fell by slightly over 500 million, from 1.9 billion to 1.4 billion, over 1981–2005 (Table II). The trend rate of decline in the $1.25-a-day poverty rate over 1981–2005 was 1% per year; when the poverty rate is regressed on time, the estimated trend is −0.99% per year with a standard error of 0.06% (R2 = .97). This is slightly higher than the trend we had obtained using the 1993 PPPs, which was −0.83% per year (standard error = 0.11%). When this trend is simply projected

39. The population of the developing world in 2005 was 5,453 million, representing 84.4% of the world's total population; in 1981, it was 3,663 million, or 81.3% of the total.

TABLE II
NUMBERS OF POOR (MILLIONS)

           1981     1984     1987     1990     1993     1996     1999     2002     2005
(a) Aggregate for developing world
$1.00   1,515.0  1,334.7  1,227.2  1,286.7  1,237.9  1,111.9  1,145.6  1,066.6    876.0
$1.25   1,896.2  1,808.2  1,720.0  1,813.4  1,794.9  1,656.2  1,696.2  1,603.1  1,376.7
$1.45   2,137.7  2,111.5  2,051.7  2,153.5  2,165.0  2,048.1  2,095.7  1,997.9  1,751.7
$2.00   2,535.1  2,615.4  2,639.7  2,755.9  2,821.4  2,802.1  2,872.1  2,795.7  2,561.5
$2.50   2,731.6  2,858.7  2,944.6  3,071.0  3,176.7  3,231.4  3,316.6  3,270.6  3,084.7
(b) Excluding China
$1.00     784.5    786.2    814.9    787.6    793.4    823.2    843.2    821.9    769.9
$1.25   1,061.1  1,088.3  1,134.3  1,130.2  1,162.3  1,213.4  1,249.5  1,240.0  1,169.0
$1.45   1,244.0  1,293.2  1,348.9  1,365.3  1,418.9  1,488.1  1,541.7  1,543.5  1,535.2
$2.00   1,563.0  1,652.1  1,732.7  1,795.1  1,895.2  2,009.9  2,101.9  2,140.8  2,087.9
$2.50   1,759.5  1,895.4  2,037.6  2,110.2  2,250.4  2,439.2  2,546.4  2,615.6  2,611.0


forward to 2015, the estimated headcount index for that year is 16.6% (standard error of 1.5%). Given that the 1990 poverty rate was 41.6%, the new estimates indicate that the developing world as a whole is on track to achieving the Millennium Development Goal (MDG) of halving the 1990 poverty rate by 2015. The 1% per year rate of decline in the poverty rate also holds if one focuses on the period since 1990 (not just because this is the base year for the MDG but also recalling that the data for the 1980s are weaker). The $1.25 poverty rate fell 10 percentage points over the 1980s (from 52% to 42%) and a further 17 points in the 15 years from 1990 to 2005. It is notable that 2002–2005 shows a larger (absolute and proportionate) drop in the poverty rate than other periods. Given that lags in survey data availability mean that our 2005 estimate is more heavily dependent on nonsurvey data (notably the extrapolations based on NAS consumption growth rates), there is a concern that this sharper decline over 2002–2005 might be exaggerated. However, that does not seem likely. The bulk of the decline is in fact driven by countries for which survey data are available close to 2005. The region for which nonsurvey data have played the biggest role for 2005 is SSA. If instead we assume that there was in fact no decline in the poverty rate over 2002–2005 in SSA, then the total headcount index (for all developing countries) for the $1.25 line in 2005 is 26.2%—still a sizable decline relative to 2002. China's success against absolute poverty has clearly played a major role in this overall progress. The lower panels of Tables I and II repeat the calculations excluding China.
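The aggregate trend and projection reported above can be reproduced from the $1.25 headcount series in Table I by a simple OLS regression of the poverty rate on time (a sketch in plain Python; the standard errors quoted in the text come from the regression output and are not recomputed here):

```python
# $1.25-a-day headcount index (%) for the developing world (Table I)
years = [1981, 1984, 1987, 1990, 1993, 1996, 1999, 2002, 2005]
rates = [51.8, 46.6, 41.8, 41.6, 39.1, 34.4, 33.7, 30.6, 25.2]

n = len(years)
xbar = sum(years) / n
rbar = sum(rates) / n
sxy = sum((x - xbar) * (r - rbar) for x, r in zip(years, rates))
sxx = sum((x - xbar) ** 2 for x in years)
syy = sum((r - rbar) ** 2 for r in rates)

slope = sxy / sxx                         # trend, points per year (about -0.99)
r2 = sxy ** 2 / (sxx * syy)               # about .97
proj_2015 = rbar + slope * (2015 - xbar)  # projected 2015 headcount (about 16.6%)
```

The same regression applied to the excluding-China series in the lower panel of Table I gives the flatter trend discussed next.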
The $1.25 a day poverty rate falls from 40% to 28% over 1981–2005, with a rate of decline that is less than half the trend including China; the regression estimate of the trend falls to −0.43% per year (standard error of 0.03%; R2 = .96), which is almost identical to the rate of decline for the non-China developing world that we had obtained using the 1993 PPPs (which gave a trend of −0.44% per year, standard error = 0.01%). Based on our new estimates, the projected value for 2015 is 25.1% (standard error = 0.8%), which is well over half the 1990 value of 35%. So the developing world outside China is not on track to reach the MDG for poverty reduction. Our estimates suggest less progress (in absolute and proportionate terms) in getting above the $2 per day line than the $1.25 line. The poverty rate by this higher standard has fallen from 70%


FIGURE III Cumulative Distributions for the Developing World

in 1981 to 47% in 2005 (Table I). The trend is about 0.8% per year (a regression coefficient on time of −0.84%; standard error = 0.08%); excluding China, the trend is only 0.3% per year (a regression coefficient of −0.26%; standard error = 0.05%). This has not been sufficient to bring down the number of people living below $2 per day, which was about 2.5 billion in both 1981 and 2005 (Table II). Thus the number of people living between $1.25 and $2 a day has risen sharply over these 25 years, from about 600 million to 1.2 billion. This marked “bunching up” of people just above the $1.25 line suggests that the poverty rate according to that line could rise sharply with aggregate economic contraction (including real contraction due to higher prices). The qualitative conclusions that poverty measures have fallen over 1981–2005 and 1990–2005 are robust to the choice of poverty line over a wide range (and to the choice of poverty measure within a broad class of measures). Figure III gives the cumulative distribution functions up to $13 per day, which is the official poverty line per person for a family of four in the United States in 2005; first-order dominance is indicated. In 2005, 95.7% of the population of the developing world lived below the U.S. poverty line; 25 years earlier it was 96.7%.


V.B. Regional Differences

Table III gives the estimates over 1981–2005 for four lines: $1.00, $1.25, $2.00, and $2.50. There have been notable changes in regional poverty rankings over this period. Looking back to 1981, East Asia had the highest incidence of poverty, with 78% of the population living below $1.25 per day and 93% below the $2 line. South Asia had the next highest poverty rate, followed by SSA, LAC, MENA, and lastly EECA. By 2005, SSA had swapped places with East Asia, where the $1.25 headcount index had fallen to 17%, with South Asia staying in second place. EECA had overtaken MENA. The regional rankings are not robust to the choice of poverty line, and two changes are notable. First, at lower lines (under $2 per day) SSA has the highest incidence of poverty, but this switches to South Asia at higher lines. (Intuitively, this difference reflects the higher inequality found in Africa than in South Asia.) Second, MENA's poverty rate exceeds LAC's at $2 or higher, but the ranking reverses at lower lines. The composition of world poverty has changed noticeably over time. The number of poor has fallen sharply in East Asia but risen elsewhere. For East Asia, the MDG of halving the 1990 “$1-per-day” poverty rate by 2015 was already reached a little after 2002. Again, China's progress against absolute poverty was a key factor; looking back to 1981, China's incidence of poverty (measured by the percentage below $1.25 per day) was roughly twice that for the rest of the developing world; by the mid-1990s, the Chinese poverty rate had fallen well below average. There were over 600 million fewer people living under $1.25 per day in China in 2005 than 25 years earlier. Progress was uneven over time, with setbacks in some periods (the late 1980s) and more rapid progress in others (the early 1980s and mid-1990s). Ravallion and Chen (2007) identify a number of factors (including policies) that account for this uneven progress against poverty over time (and space) in China.
Over 1981–2005, the $1.25 poverty rate in South Asia fell from almost 60% to 40%, which was not sufficient to bring down the number of poor (Table IV). If the trend over this period in South Asia were to continue until 2015, the poverty rate would fall to 32.5% (standard error = 1.2%), which is more than half its 1990 value. So South Asia is not on track to attain the MDG without a higher trend rate of poverty reduction. Note, however, that this conclusion is not robust to the choice of the poverty line.

TABLE III
REGIONAL BREAKDOWN OF HEADCOUNT INDEX FOR INTERNATIONAL POVERTY LINES OF $1.00–$2.50 A DAY OVER 1981–2005

Region                            1981   1984   1987   1990   1993   1996   1999   2002   2005
(a) % living below $1.00 a day
East Asia and Pacific             66.8   49.9   38.9   39.1   35.4   23.4   23.5   17.8    9.3
  Of which China                  73.5   52.9   38.0   44.0   37.7   23.7   24.1   19.1    8.1
Eastern Europe and Central Asia    0.7    0.6    0.5    0.9    2.1    2.5    3.1    2.7    2.2
Latin America and Caribbean        7.7    9.2    8.9    6.6    6.0    7.3    7.4    7.7    5.6
Middle East and North Africa       3.3    2.4    2.3    1.7    1.5    1.6    1.7    1.4    1.6
South Asia                        41.9   38.0   36.6   34.0   29.3   29.1   26.9   26.5   23.7
  Of which India                  42.1   37.6   35.7   33.3   31.1   28.6   27.0   26.3   24.3
Sub-Saharan Africa                42.6   45.2   44.1   47.5   46.4   47.6   47.0   43.8   39.9
Total                             41.4   34.4   29.8   29.5   27.0   23.1   22.8   20.3   16.1
(b) % living below $1.25 a day
East Asia and Pacific             77.7   65.5   54.2   54.7   50.8   36.0   35.5   27.6   16.8
  Of which China                  84.0   69.4   54.0   60.2   53.7   36.4   35.6   28.4   15.9
Eastern Europe and Central Asia    1.7    1.3    1.1    2.0    4.3    4.6    5.1    4.6    3.7
Latin America and Caribbean       11.5   13.4   12.6    9.8    9.1   10.8   10.8   11.0    8.2
Middle East and North Africa       7.9    6.1    5.7    4.3    4.1    4.1    4.2    3.6    3.6
South Asia                        59.4   55.6   54.2   51.7   46.9   47.1   44.1   43.8   40.3
  Of which India                  59.8   55.5   53.6   51.3   49.4   46.6   44.8   43.9   41.6
Sub-Saharan Africa                53.7   56.2   54.8   57.9   57.1   58.7   58.2   55.1   50.9
Total                             51.8   46.6   41.8   41.6   39.1   34.4   33.7   30.6   25.2

TABLE III (CONTINUED)

Region                            1981   1984   1987   1990   1993   1996   1999   2002   2005
(c) % living below $2.00 a day
East Asia and Pacific             92.6   88.5   81.6   79.8   75.8   64.1   61.8   51.9   38.7
  Of which China                  97.8   92.9   83.7   84.6   78.6   65.1   61.4   51.2   36.3
Eastern Europe and Central Asia    8.3    6.5    5.6    6.9   10.3   11.9   14.3   12.0    8.9
Latin America and Caribbean       22.5   25.3   23.3   19.7   19.3   21.8   21.4   21.7   16.6
Middle East and North Africa      26.7   23.1   22.7   19.7   19.8   20.2   19.0   17.6   16.9
South Asia                        86.5   84.8   83.9   82.7   79.7   79.9   77.2   77.1   73.9
  Of which India                  86.6   84.8   83.8   82.6   81.7   79.8   78.4   77.5   75.6
Sub-Saharan Africa                74.0   75.7   74.2   76.2   76.0   77.9   77.6   75.6   73.0
Total                             69.2   67.4   64.2   63.2   61.5   58.2   57.1   53.3   47.0
(d) % living below $2.50 a day
East Asia and Pacific             95.4   93.5   89.7   87.3   83.7   74.9   71.7   62.6   50.7
  Of which China                  99.4   97.4   92.4   91.6   86.5   76.4   71.7   61.6   49.5
Eastern Europe and Central Asia   15.2   12.5   11.2   12.0   15.1   18.3   21.4   17.8   12.9
Latin America and Caribbean       29.2   32.4   29.6   26.0   25.9   28.8   28.0   28.4   22.1
Middle East and North Africa      39.0   34.8   34.6   31.2   31.4   32.5   30.8   29.5   28.4
South Asia                        92.6   91.5   90.8   90.3   88.6   88.5   86.7   86.5   84.4
  Of which India                  92.5   91.5   90.8   90.2   89.9   88.7   87.6   86.9   85.7
Sub-Saharan Africa                81.0   82.3   81.0   82.5   82.5   84.2   83.8   82.5   80.5
Total                             74.6   73.7   71.6   70.4   69.2   67.2   65.9   62.4   56.6

TABLE IV
REGIONAL BREAKDOWN OF NUMBER OF POOR (MILLIONS) FOR INTERNATIONAL POVERTY LINES OF $1.00–$2.50 A DAY OVER 1981–2005

Region                              1981     1984     1987     1990     1993     1996     1999     2002     2005
(a) Number living below $1.00 a day
East Asia and Pacific              921.7    721.8    590.2    623.4    588.7    404.9    420.8    326.8    175.6
  Of which China                   730.4    548.5    412.4    499.1    444.4    288.7    302.4    244.7    106.1
Eastern Europe and Central Asia      3.0      2.4      2.1      4.1     10.1     11.7     14.4     12.6     10.2
Latin America and Caribbean         28.0     35.8     36.9     29.0     27.6     35.6     37.8     40.7     30.7
Middle East and North Africa         5.6      4.6      4.7      3.8      3.7      4.1      4.7      3.9      4.7
South Asia                         387.3    374.3    384.4    381.2    348.8    368.0    359.5    372.5    350.5
  Of which India                   296.1    282.2    285.3    282.5    280.1    271.3    270.1    276.1    266.5
Sub-Saharan Africa                 169.4    195.9    209.0    245.2    259.0    287.6    308.4    310.1    304.2
Total                            1,515.0  1,334.7  1,227.2  1,286.7  1,237.9  1,111.9  1,145.6  1,066.6    876.0
(b) Number living below $1.25 a day
East Asia and Pacific            1,071.5    947.3    822.4    873.3    845.3    622.3    635.1    506.8    316.2
  Of which China                   835.1    719.9    585.7    683.2    632.7    442.8    446.7    363.2    207.7
Eastern Europe and Central Asia      7.1      5.7      4.8      9.1     20.1     21.8     24.3     21.7     17.3
Latin America and Caribbean         42.0     52.3     52.3     42.9     41.8     52.2     54.8     58.4     46.1
Middle East and North Africa        13.7     11.6     11.9      9.7      9.8     10.6     11.5     10.3     11.0
South Asia                         548.3    547.6    569.1    579.2    559.4    594.4    588.9    615.9    595.6
  Of which India                   420.5    416.0    428.0    435.5    444.3    441.8    447.2    460.5    455.8
Sub-Saharan Africa                 213.7    243.8    259.6    299.1    318.5    355.0    381.6    390.0    390.6
Total                            1,896.2  1,808.2  1,720.0  1,813.4  1,794.9  1,656.2  1,696.2  1,603.1  1,376.7

TABLE IV (CONTINUED)

Region                              1981     1984     1987     1990     1993     1996     1999     2002     2005
(c) Number living below $2.00 a day
East Asia and Pacific            1,277.7  1,280.2  1,238.5  1,273.7  1,262.1  1,108.1  1,104.9    954.1    728.7
  Of which China                   972.1    963.3    907.1    960.8    926.3    792.2    770.2    654.9    473.7
Eastern Europe and Central Asia     35.0     28.4     25.1     31.9     48.6     56.2     67.6     56.8     41.9
Latin America and Caribbean         82.3     98.8     96.3     86.3     88.9    105.7    108.5    114.6     91.3
Middle East and North Africa        46.3     43.9     47.1     44.4     48.0     52.2     51.9     50.9     51.5
South Asia                         799.5    835.9    881.5    926.0    950.0  1,008.8  1,030.8  1,083.7  1,091.5
  Of which India                   608.9    635.6    669.0    701.6    735.0    757.1    782.8    813.1    827.7
Sub-Saharan Africa                 294.2    328.3    351.3    393.6    423.8    471.1    508.5    535.6    556.7
Total                            2,535.1  2,615.4  2,639.7  2,755.9  2,821.4  2,802.1  2,872.1  2,795.7  2,561.5
(d) Number living below $2.50 a day
East Asia and Pacific            1,315.8  1,352.8  1,361.9  1,393.7  1,393.7  1,293.9  1,282.8  1,150.5    955.2
  Of which China                   987.5  1,009.8  1,001.7  1,040.4  1,019.0    930.2    899.2    788.8    645.6
Eastern Europe and Central Asia     64.3     54.4     50.2     55.7     71.0     86.4    101.2     84.0     61.0
Latin America and Caribbean        106.9    126.3    122.6    113.9    119.5    139.5    142.1    150.5    121.8
Middle East and North Africa        67.6     66.1     71.8     70.3     75.9     83.8     84.2     85.2     86.7
South Asia                         855.0    902.1    954.6  1,011.0  1,056.1  1,118.5  1,156.8  1,216.3  1,246.2
  Of which India                   650.3    686.1    725.0    766.5    808.8    841.1    875.2    911.4    938.0
Sub-Saharan Africa                 322.0    356.9    383.5    426.4    460.6    509.4    549.5    584.0    613.7
Total                            2,731.6  2,858.7  2,944.6  3,071.0  3,176.7  3,231.4  3,316.6  3,270.6  3,084.7


If instead we use a lower line of $1.00 per day at 2005 prices, then the poverty rate would fall to 15.7% (standard error = 1.3%) by 2015, which is less than half the 1990 value of 34.0%. Not surprisingly (given its population weight), the same observations hold for India, which is not on track to attain the MDG using the $1.25 line but is on track using the $1.00 line, which is also closer to the national poverty line in India.40 The extent of the “bunching up” that has occurred between $1.25 and $2 per day is particularly striking in both East and South Asia, where we find a total of about 900 million people living between these two lines, roughly equally split between the two sides of Asia. Although this points again to the vulnerability of the poor, by the same token it also suggests that substantial further impacts on poverty can be expected from economic growth, provided that growth does not come with substantially higher inequality. We find a declining trend in LAC's poverty rate, but not enough to reduce the count of the number of poor over the 1981–2005 period as a whole, though with more encouraging signs of progress since 1999. MENA has experienced a fairly steady decline in the poverty rate, though (again) not sufficient to avoid a rising count of poor in that region. We find generally rising poverty in EECA using the lower lines ($1.00 and $1.25 a day), though there are very few people who are poor by this standard in EECA. The $2.50-a-day line is more representative of the poverty lines found in the relatively poorer countries of EECA. By this standard, the poverty rate in EECA has shown little clear trend over time in either direction, though there are encouraging signs of a decline in poverty since the late 1990s. The paucity of survey data for EECA in the 1980s should also be recalled; our estimates for that period are heavily based on extrapolations, which do not allow for any changes in distribution.
One would expect that distribution was better from the point of view of the poor in EECA in the 1980s, in which case poverty would have been even lower than we estimate—and the increase over time even larger. The incidence of poverty in SSA fell only slightly over the period as a whole, from 54% of the population living under $1.25 a day in 1981 to 51% in 2005. The number of poor by our new $1.25-a-day standard has almost doubled in SSA over 1981–2005, from

40. The corresponding poverty rates for the $1.00 line in India are 42.1% (1981), 37.6% (1984), 35.7% (1987), 33.3% (1990), 31.1% (1993), 28.6% (1996), 27.0% (1999), 26.3% (2002), and 24.3% (2005).


214 million to over 390 million. The share of the world's poor by this measure living in Africa has risen from 11% in 1981 to 28% in 2005. The trend increase in SSA's share of poverty is 0.67 percentage points per year (standard error = 0.04 points), implying that one-third of the world's poor will live in this region by 2015 (more precisely, the projected share for that year is 33.7%, with a standard error of 0.8%). However, there are signs of progress against poverty in SSA since the mid-1990s. The $1.25-a-day poverty rate for SSA peaked at 59% in 1996 and fell steadily thereafter, though not by enough to bring down the count of poor given population growth. The decline is proportionately larger the lower the poverty line; for the $1-a-day line, the poverty rate in 2005 is 16% lower than its 1996 value.

V.C. Poverty Gaps

Table V gives the PG indices for $1.25 and $2.00 a day. The aggregate PG for 2005 is 7.6% for the $1.25 line and 18.6% for the $2 line. The GDP per capita of the developing world was $11.30 per day in 2005 (at 2005 PPP). The aggregate poverty gap for the $1.25 line is thus 0.84% of GDP, whereas it is 3.29% for the $2 line. World (including the OECD countries) GDP per capita was $24.58 per day, implying that the global aggregate PG was 0.33% of global GDP using the $1.25 line and 1.28% using the $2 line.41 Comparing Tables III and V, it can be seen that the regional rankings in terms of the poverty gap index are similar to those for the headcount index, and the changes over time follow similar patterns. The PG measures magnify the interregional differences seen in the headcount indices. The most striking feature of the results in Table V is the depth of poverty in Africa, with a $1.25-per-day poverty gap index of almost 21%—roughly twice that for the next poorest region by this measure (South Asia).
For the $1.25 line, Africa's aggregate poverty gap represents 3.2% of the region's GDP; for the $2 line, it is 9.0%.42 Table VI gives the mean consumption of the poor.43 For 2005, those living below the $1.25-a-day line had a mean consumption

41. This assumes that nobody lives below our international poverty line in the OECD countries. Under this assumption, the aggregate poverty gap as a percentage of global GDP is PG · (z/ȳ) · (N/NW), where PG is the poverty gap index (in %), z is the poverty line, ȳ is global GDP per capita, N is the population of the developing world, and NW is world population.

42. The GDP per capita of SSA in 2005, at 2005 PPP, was $8.13 per day.

43. The mean consumption of the poor is (1 − PG/H)z, where PG is the poverty gap index, H is the headcount index, and z is the poverty line.
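The identities in footnotes 41 and 43 are straightforward to apply to the 2005 aggregates (a sketch; the function names are ours):

```python
def gap_share_of_gdp(pg, z, gdp_per_capita, pop_share=1.0):
    # Aggregate poverty gap as a % of GDP: PG * (z / ybar) * (N / NW),
    # the identity in footnote 41; pop_share = N/NW (1.0 when expressing
    # the gap relative to the developing world's own GDP).
    return pg * (z / gdp_per_capita) * pop_share

def mean_consumption_of_poor(pg, headcount, z):
    # Mean consumption of the poor: (1 - PG/H) * z, as in footnote 43.
    return (1 - pg / headcount) * z

# 2005 aggregates: PG = 7.6 and H = 25.2 for the $1.25 line (Tables I, V);
# GDP per capita $11.30/day (developing world) and $24.58/day (world);
# the developing world is 84.4% of world population (footnote 39).
dev_share = gap_share_of_gdp(7.6, 1.25, 11.30)           # about 0.84 (% of GDP)
world_share = gap_share_of_gdp(7.6, 1.25, 24.58, 0.844)  # about 0.33
mean_poor = mean_consumption_of_poor(7.6, 25.2, 1.25)    # about $0.87 per day
```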

TABLE V
POVERTY GAP INDEX (×100) BY REGION OVER 1981–2005

Region                            1981   1984   1987   1990   1993   1996   1999   2002   2005
(a) $1.25
East Asia and Pacific             35.5   24.2   18.8   18.2   16.4   10.5   10.7    8.0    4.0
  Of which China                  39.3   25.6   18.5   20.7   17.6   10.7   11.1    8.7    4.0
Eastern Europe and Central Asia    0.4    0.3    0.3    0.6    1.6    1.7    1.6    1.3    1.1
Latin America and Caribbean        4.0    4.7    4.7    3.6    3.3    3.9    4.2    4.2    3.2
Middle East and North Africa       1.6    1.3    1.2    0.9    0.8    0.8    0.8    0.7    0.8
South Asia                        19.6   17.5   16.4   15.2   12.9   12.6   11.7   11.5   10.3
  Of which India                  19.6   17.2   15.8   14.6   13.6   12.4   11.7   11.4   10.5
Sub-Saharan Africa                22.9   24.6   24.3   26.6   25.6   25.9   25.7   23.5   21.1
Total                             21.3   16.8   14.5   14.2   12.9   11.0   10.9    9.6    7.6
(b) $2.00
East Asia and Pacific             54.7   44.9   38.0   37.4   34.8   25.9   25.5   20.2   13.0
  Of which China                  59.3   47.3   38.2   40.9   36.6   26.3   25.6   20.6   12.2
Eastern Europe and Central Asia    1.9    1.5    1.3    2.0    3.7    4.1    4.5    3.8    3.0
Latin America and Caribbean        8.9   10.2    9.7    7.8    7.4    8.6    8.6    8.7    6.7
Middle East and North Africa       7.4    6.1    5.9    4.8    4.8    4.8    4.6    4.1    4.0
South Asia                        40.7   38.4   37.2   35.7   32.8   32.7   31.0   30.8   28.7
  Of which India                  40.8   38.2   36.7   35.3   34.1   32.4   31.3   30.8   29.5
Sub-Saharan Africa                38.8   40.6   39.8   42.2   41.4   42.3   42.1   39.7   37.0
Total                             36.5   32.5   29.5   29.1   27.5   24.7   24.3   22.1   18.6

Note. The poverty gap index is the mean distance below the poverty line as a proportion of the line, where the mean is taken over the whole population, counting the nonpoor as having zero poverty gaps.

TABLE VI
MEAN CONSUMPTION OF THE POOR ($ PER DAY) BY REGION OVER 1981–2005

Region                            1981   1984   1987   1990   1993   1996   1999   2002   2005
(a) $1.25
East Asia and Pacific             0.68   0.79   0.81   0.83   0.85   0.88   0.87   0.89   0.95
  Of which China                  0.67   0.79   0.82   0.82   0.84   0.88   0.86   0.87   0.94
Eastern Europe and Central Asia   0.97   0.95   0.95   0.84   0.79   0.78   0.86   0.91   0.89
Latin America and Caribbean       0.82   0.82   0.78   0.79   0.80   0.79   0.77   0.78   0.77
Middle East and North Africa      0.99   0.99   0.99   0.99   1.01   1.01   1.00   1.01   0.98
South Asia                        0.84   0.86   0.87   0.88   0.91   0.91   0.92   0.92   0.93
  Of which India                  0.84   0.86   0.88   0.89   0.91   0.92   0.92   0.93   0.93
Sub-Saharan Africa                0.72   0.70   0.70   0.68   0.69   0.70   0.70   0.72   0.73
Total                             0.74   0.80   0.82   0.82   0.84   0.85   0.85   0.86   0.87
(b) $2.00
East Asia and Pacific             0.80   0.97   1.06   1.05   1.07   1.18   1.17   1.20   1.31
  Of which China                  0.79   0.98   1.09   1.03   1.07   1.19   1.17   1.19   1.33
Eastern Europe and Central Asia   1.55   1.56   1.57   1.51   1.38   1.37   1.31   1.29   1.25
Latin America and Caribbean       1.22   1.22   1.20   1.23   1.21   1.21   1.21   1.24   1.26
Middle East and North Africa      1.44   1.47   1.46   1.48   1.49   1.50   1.48   1.50   1.50
South Asia                        1.05   1.10   1.11   1.14   1.19   1.18   1.20   1.20   1.22
  Of which India                  1.06   1.10   1.12   1.15   1.17   1.19   1.20   1.21   1.22
Sub-Saharan Africa                0.98   0.94   0.94   0.91   0.92   0.89   0.92   0.95   0.99
Total                             0.94   1.03   1.08   1.08   1.11   1.14   1.15   1.16   1.21


FIGURE IV Poverty Rates Using Old and New Poverty Lines

of $0.87 (about 3.5% of global GDP per capita). The overall mean consumption of the poor tended to rise over time, from $0.74 per day in 1981 to $0.87 in 2005 by the $1.25 line, and from $0.94 to $1.21 for the $2 line. Over time, poverty has become shallower in the world as a whole. The mean consumption of Africa's poor not only is lower than that for other regions, but also has shown very little increase over time (Table VI). The same persistence in the depth of poverty is evident in MENA and LAC, though the poor have slightly higher average levels of living in both regions. The mean consumption of EECA's poor has actually fallen since the 1990s, even though the overall poverty rate was falling.

VI. COMPARISONS WITH PAST ESTIMATES

Both the $1.25 and $1.45 lines indicate a substantially higher poverty count in 2005 than obtained using our old $1.08 line in 1993 prices; Figure IV compares the poverty rates estimated using the latter line with those obtained using either the $1.00- or $1.25-a-day lines at 2005 PPP. Focusing on the $1.25 line, we find that 25% of the developing world's population in 2005 was poor, versus 17% using the old line at 1993 PPP, representing an extra 400 million people living in poverty. (As can be seen in Figure IV, the series for $1.00 a day at 2005 PPP tracks closely that for $1.08 at 1993 PPP.)
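The $0.87 figure can be recovered from the headcount and poverty gap indices: since the gap index equals H(z − μp)/z, where μp is the mean consumption of the poor, it follows that μp = z(1 − PG/H). With the 2005 figures at the $1.25 line (H = 25.2, PG = 7.6):

```python
def mean_consumption_of_poor(z, headcount, poverty_gap):
    """Invert PG = H * (z - mu_p) / z to recover the mean consumption
    of those below the line: mu_p = z * (1 - PG / H)."""
    return z * (1.0 - poverty_gap / headcount)

# 2005, $1.25 line: headcount 25.2 (% of population), poverty gap 7.6 (x100)
print(round(mean_consumption_of_poor(1.25, 25.2, 7.6), 2))  # → 0.87
```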


It is notable that the conclusion that the global poverty count has risen is also confirmed if one does not update the nominal value of the 1993 poverty line (ignoring inflation in the United States). Using the $1.08 line for 2005, one obtains an aggregate poverty rate of 19% (1,026 million people) for 2005. The 2005 line that gives the same headcount index for 2005 as the $1.08 line at 1993 PPP turns out to be lower, at $1.03 a day. Although the adjustment for U.S. inflation clearly gives a poverty line for 2005 that is "too high," the 2005 line must presumably exceed the 1993 nominal line to have comparable purchasing power. So, as long as it is agreed that $1 in 1993 international prices is worth more than $1 at 2005 prices, the qualitative result that the new ICP round implies a higher global poverty count is robust.44

44. Deaton (2010) questions this claim by comparing an international line of $0.92 a day in 2005 prices with the old $1.08 line in 1993 prices. Yet the former line must have a lower real value.

To help understand why we get a higher poverty count for a given year, it is instructive to decompose the total change into its various components. Recall that there are three ways in which the data have been revised: new PPPs, new national poverty lines, and new surveys. The last effect turns out to be small. When we use the new survey database for 2005 to estimate the poverty rate based on the 1993 PPPs and the old $1.08 line we get a headcount index of 17.6% (957.4 million people) instead of 17.2%. So we will focus on the effect of the other two aspects of the data, by evaluating everything using the new survey database.

Let $z_t^n$ denote the new ("n") vector of national poverty lines (from RCS), evaluated at the PPPs for the ICP of round $t$, and let $z_t^o$ be the corresponding vector of old ("o") poverty lines for the 1980s (from RDV). The international lines are $f(z_{05}^n) = \$1.25$ a day in 2005 prices and $f(z_{93}^o) = \$1.08$ a day in 1993 prices, or $1.45 in 2005 prices adjusting for U.S. inflation. Next let $y_t$ be a vector giving the distribution of consumption for the developing world in 2005 evaluated using ICP round $t$. Let $P(z_t^k, y_t)$ ($k = o, n$; $t = 93, 05$) be the poverty measure for 2005 (subsuming the function $f$). So $P(z_{93}^o, y_{93}) = 18\%$ and $P(z_{05}^n, y_{05}) = 25\%$. Now consider the following exact decomposition:

(1)  $P(z_{05}^n, y_{05}) - P(z_{93}^o, y_{93}) = A + B + C$,

where $A \equiv P(z_{05}^n, y_{05}) - P(z_{93}^n, y_{05})$ is the partial effect of the PPP change via the international poverty line, holding the distribution and national poverty lines constant at their new values; $B \equiv P(z_{93}^n, y_{05}) - P(z_{93}^n, y_{93})$ is the partial effect of the change in distribution due to the change in PPPs; and $C \equiv P(z_{93}^n, y_{93}) - P(z_{93}^o, y_{93})$ is the partial effect of updating the data set on national poverty lines. Note that $A + B$ is the total effect (on both the poverty lines and the distribution) of the PPP revisions, holding the data on national poverty lines constant at their new values.

There are two counterfactual terms in the decomposition in (1), namely $P(z_{93}^n, y_{05})$ and $P(z_{93}^n, y_{93})$. To evaluate these terms we need to use the $1.44-a-day line at 1993 PPP (Section III), rather than the $1.08 line, which was based on the old RDV compilation of poverty lines. In applying this line to the 2005 distribution we need to update for U.S. inflation, giving $z_{93}^n = \$1.95$ in 2005 prices. We then obtain $P(z_{93}^n, y_{05}) = 46\%$ and $P(z_{93}^n, y_{93}) = 29\%$. Comparing these it can be seen that, holding the 1993 international poverty line constant (in real terms in the United States), the change in the PPPs added 17% to the poverty rate; this results from the higher cost of living in developing countries implied by the 2005 ICP results. (If instead one makes the comparison using the RDV data set on national poverty lines, one obtains $P(z_{93}^o, y_{05}) - P(z_{93}^o, y_{93}) = 14\%$.)

We find that the partial effect of the PPP revisions via the international poverty line is to bring the headcount index down substantially, from 46% to 25% ($A = -21\%$). But there is a large and almost offsetting upward effect of the change in distribution ($B = 17\%$). On balance, the net effect of the change in the PPPs is to bring the poverty rate down from 29% to 25% ($A + B = -4\%$).
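The telescoping in decomposition (1) can be checked directly with the rounded rates quoted in the text:

```python
# 2005 poverty rates (%) under the four line/distribution combinations
P_n05_y05 = 25.0  # new lines and distribution at 2005 PPP
P_n93_y05 = 46.0  # new lines at 1993 PPP, 2005-PPP distribution (counterfactual)
P_n93_y93 = 29.0  # new lines and distribution at 1993 PPP (counterfactual)
P_o93_y93 = 18.0  # old ($1.08) lines and distribution at 1993 PPP

A = P_n05_y05 - P_n93_y05  # PPP change via the international line
B = P_n93_y05 - P_n93_y93  # PPP change via the distribution
C = P_n93_y93 - P_o93_y93  # updated national poverty lines

print(A, B, C)  # -21.0 17.0 11.0
assert A + B + C == P_n05_y05 - P_o93_y93  # total change: +7 points
```

The middle terms cancel in pairs, so the decomposition is exact by construction; only the two counterfactual rates (46% and 29%) require additional estimation.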
The fact that the PPP revisions on their own bring down the overall poverty count relative to a fixed set of national lines is not surprising, given that the poverty line is set at the mean of lines for the poorest countries and that the proportionate revisions to the PPPs tend to be greater for poorer countries. It can be shown that if the international poverty line is that of the poorest country, which also has the largest upward revision to its PPP, then the aggregate poverty rate will automatically fall, given that the national poverty lines are fixed in local currency units. The working paper version provides a proof of this claim (Chen and Ravallion 2009). Working against this downward effect of the new PPPs, there is an upward adjustment to the poverty count coming from the new data on national poverty lines, which (as we have seen in Figure II) tend to be higher for the poorest countries than those used by RDV for the 1980s. The updating of the data on


national poverty lines moved the global poverty rate from 18% to 29% (C = 11%).

VII. SENSITIVITY TO OTHER METHODOLOGICAL CHOICES

We have already seen how much impact the choice of poverty line has, though we have also noted that the qualitative comparisons over time are robust to the choice of line. In this section we consider sensitivity to two further aspects of our methodology: the first is our use of the PPP for aggregate household consumption and the second is our reliance on surveys for measuring average living standards.

VII.A. Alternative PPPs

The benchmark analysis has relied solely on the individual consumption PPPs ("P3s") from the ICP. One deficiency of these PPPs is that they are designed for national accounting purposes, not poverty measurement. Deaton and Dupriez (DD) (2009) have estimated "PPPs for the poor" (P4s) for a subset of countries with the required data.45 Constructing P4s requires reweighting the prices to accord with the consumption patterns of those living near the poverty line. Notice that there is a simultaneity in this problem, in that one cannot do the reweighting until one knows the poverty line, which requires the reweighted PPPs. Deaton and Dupriez (2009) implement an iterative solution to derive internally consistent P4s.46 They do this for three price index methods, namely the country product dummy (CPD) method and both Fisher and Törnqvist versions of the EKS method used by the ICP.

45. The Asian Development Bank (2008) has taken the further step of implementing special price surveys for Asian countries to collect prices on qualities of selected items explicitly lower than those identified in the standard ICP. Using lower-quality goods essentially entails lowering the poverty line. In terms of the impact on the poverty counts for Asia in 2005, the ADB's method is equivalent to using a poverty line of about $1.20 a day by our methods. (This calculation is based on a log-linear interpolation between the relevant poverty lines.)

46. In general there is no guarantee that there is a unique solution for this method, although DD provide a seemingly plausible restriction on the Engel curves that ensures uniqueness. They also use an exact, one-step solution for the Törnqvist index under a specific parametric Engel curve.

FIGURE V
Aggregate Poverty Rates over Time for Alternative PPPs

The Deaton–Dupriez P4s cannot be calculated for all countries and they cannot cover the same consumption space as the P3s from the ICP. The limitation on country coverage stems from the fact that P4s require suitable household surveys, namely micro data from consumption expenditure surveys that can be mapped into the ICP "basic heading" categories for prices; the DD P4s are available for sixty countries, which is about half of our sample. The sixty-country sample is clearly not representative of the developing world as a whole, and coverage is especially weak in some regions, notably EECA, where the population share covered by surveys in the sixty-country sample is only 8%, whereas overall coverage is 79%. As we will see, the sixty-country sample is poorer, in terms of the aggregate (population-weighted) poverty count.

Also, some of the 110 basic headings for consumption in the ICP were dropped by DD in calculating their P4s. These included expenditures made on behalf of households by governments and nongovernmental organizations (such as on education and health care). Given that such expenditures are not typically included in household surveys, they cannot be included in DD's P4s. DD also preferred to exclude housing rentals from their calculations on the grounds that they were hard to measure and that different practices for imputing rentals for owner-occupied housing had been used by the official ICP in different countries. There are other (seemingly more minor) differences in how DD calculated their P4s and the methods used by the ICP. Using the P4s at the country level kindly provided by Deaton and Dupriez, we have recalculated our global poverty measures.
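The simultaneity noted above (the weights require the line, and the line requires the reweighted PPP) suggests a fixed-point iteration. The sketch below is purely schematic, not the Deaton–Dupriez estimator: the two-good prices, the national lines, and the `shares` Engel-curve callback are all invented for illustration.

```python
def solve_p4_line(national_lines_lcu, prices, budget_shares_near, tol=1e-6):
    """Iterate: reweight prices by the spending pattern of households near
    the candidate line, recompute the PPP and hence the line, repeat."""
    z = 1.0  # initial guess for the international line
    for _ in range(200):
        weights = budget_shares_near(z)
        ppp = sum(w * p for w, p in zip(weights, prices))  # reweighted price index
        z_new = sum(l / ppp for l in national_lines_lcu) / len(national_lines_lcu)
        if abs(z_new - z) < tol:
            return z_new
        z = z_new
    return z

# toy inputs: two goods, hypothetical national lines in local currency units,
# and a food share that rises as the candidate line falls
prices = [2.0, 1.0]
lines_lcu = [30.0, 36.0]
def shares(z):
    food = min(0.9, 0.5 + 1.0 / (1.0 + z))
    return (food, 1.0 - food)

z_star = solve_p4_line(lines_lcu, prices, shares)
```

At the fixed point the line is internally consistent: deflating the mean national line by the price index implied by the spending pattern of the near-poor reproduces the line itself. As footnote 46 notes, uniqueness of such a fixed point is not guaranteed in general.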

TABLE VII
AGGREGATE POVERTY RATE AND REGIONAL PROFILE FOR 2005 UNDER ALTERNATIVE PPPS

                                     (1)      (2)      (3)      (4)      (5)        (6)         (7)
PPP                                  P3       P3       P3       P3       P4: CPD    P4: Fisher  P4: Törnqvist
No. countries for poverty measures   115      115      60       60       60         60          60
No. countries for poverty line       15       15       14       14       14         14          14
Poverty line (per month)             $38.00   $33.34   $37.41   $32.88   Rs.576.86  Rs.557.00   Rs.547.83
No. basic headings                   110      102      110      102      102        102         102

Headcount index (% of population)
East Asia and Pacific                16.8     13.4     16.1     12.9     13.2       12.7        12.5
Eastern Europe and Central Asia       3.7     3.75      7.1      8.1      4.8        4.1         4.6
Latin America and Caribbean           8.2      7.1      8.4      7.6      8.8        8.5         7.9
Middle East and North Africa          3.6      2.2      8.7      4.9      4.2        4.2         4.2
South Asia                           40.3     35.1     39.1     34.0     37.9       35.6        34.0
Sub-Saharan Africa                   50.9     50.4     49.9     49.6     50.4       50.9        49.8
Total (for sampled countries)        25.2     22.4     28.3     25.1     26.7       25.7        24.9

Note. The Deaton–Dupriez P4 calculations are only possible for about half the countries in the full sample, given that consumption expenditure surveys are required. Also, one country drops out of the reference group for calculating the poverty line. The poverty lines for the column (5)–(7) P4s are in Deaton–Dupriez "world rupees."


In all cases we recalculate the international poverty line under the new PPPs, as well as (of course) the poverty measures. Table VII gives the results by region for 2005, whereas Figure V plots the estimates by year. In both cases we give our benchmark estimates for the official ICP PPP for individual consumption using all 110 basic headings for consumption and results for the 102 basic headings comprising those that can be matched to surveys less the extra few categories that DD chose not to include. Because the sixty countries used by DD did not include one of the fifteen countries in our reference group, the poverty line is recalculated for fourteen countries, giving a line of $1.23 a day ($37.41 per month). With the help of the World Bank's ICP team, we also recalculated the official P3s for consumption using the set of basic headings chosen by DD. Column (1) reproduces the estimates from Table III, whereas column (2) gives the corresponding estimates for the full sample of countries using P3s calibrated to the 102 basic headings used by DD. Columns (3) and (4) give the results corresponding to columns (1) and (2) using the sixty-country subsample used by DD. Columns (5)–(7) give our estimates of the poverty measures using the P4s from DD, for each of their three methods. We give (population-weighted) aggregate results for the sample countries.47

It can be seen from Table VII that the switch from 110 to 102 basic headings reduces the aggregate poverty measures by about three percentage points, whereas switching from the 115-country sample to the 60-country sample has the opposite effect, adding three points. The pure effect of switching from P3 to P4 is indicated by comparing column (4) with columns (5)–(7). This change has only a small impact using the EKS method (for either the Fisher or Törnqvist indices), though it has a slightly larger effect using the CPD method.
On balance, the aggregate poverty count turns out to be quite similar between the P4s and our main estimates using standard P3s on the full sample. If one assumes that the countries without household surveys have the regional average poverty rates, then the Fisher P4 gives a count of 1,402 million for the number of poor, whereas the CPD and Törnqvist P4s give counts of 1,454 and 1,359 million, respectively, as compared to 1,377 million using standard P3s. The regional profile is also fairly robust, the main difference being lower poverty rates in EECA using P4s, although the poor representation of EECA countries in the sixty-country sample used by DD is clearly playing a role here. The reduction in coverage of consumption items makes a bigger difference, with a higher poverty count in the aggregate (28% for these sixty countries using the standard PPP, versus 25% using the PPP excluding housing), due mainly to higher poverty rates in East and South Asia when all 110 basic headings for consumption are included. The trends are also similar (Figure V). This is not surprising given that, when the usual practice of doing the PPP conversion at only the benchmark year and then using national data sources over time is followed, the real growth rates and distributions at country level are unaffected.

47. Note that this is a slightly different aggregation method from our earlier results, which assumed that the sample was representative at the regional level. That is clearly not plausible for the sixty-country sample used by DD. We have recalculated the aggregates for the 115-country sample on the same basis as for the 60-country sample.

VII.B. Mixing National Accounts and Surveys

Next we test sensitivity to using instead the geometric mean of the survey mean and its expected value given NAS consumption; as noted in Section IV, this can be given a Bayesian interpretation under certain assumptions. Table VIII gives the estimates implied by the geometric mean; in all other respects we follow the benchmark methodology. The expected value was formed by a separate regression at each reference year; a very good fit was obtained using a log-log specification (adding squared and cubed values of the log of NAS consumption per capita did little or nothing to increase the adjusted R²). In the aggregate for most years, and most regions, the level of poverty is lower using the mixed method than the survey-means-only method. In the aggregate, the 2005 poverty rate is 18.6% (1,017 million people) using the geometric mean versus 25.2% (1,374 million) using unadjusted survey means. Nonetheless, the mixed method still gives a higher poverty rate for 2005 than implied by the 1993 PPPs. Using the $2.00 line, the 2005 poverty rate falls from 47.0% to 41.0%.
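The mixing rule just described can be sketched as follows; the regression coefficients and country figures below are purely hypothetical, standing in for the log-log regression fitted at each reference year:

```python
import math

def mixed_mean(survey_mean, nas_consumption, a, b):
    """Geometric mean of the survey mean and its predicted value from a
    log-log regression of survey means on NAS consumption per capita:
    ln(mu_hat) = a + b * ln(NAS). The coefficients a, b are hypothetical."""
    predicted = math.exp(a + b * math.log(nas_consumption))
    return math.sqrt(survey_mean * predicted)

# hypothetical country: survey mean $50/month, NAS consumption $90/month
print(round(mixed_mean(50.0, 90.0, 0.2, 0.9), 2))
```

Because the geometric mean works multiplicatively, the adjustment shrinks proportionate (not absolute) gaps between the survey mean and its NAS-based prediction, which is consistent with the log-normality assumption behind the Bayesian interpretation.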
Figure VI compares the aggregate headcount indices for $1.25 a day between the benchmark and mixed method. The trend rate of poverty reduction is almost identical between the two, at about 1% per year. (Using the mixed method, the OLS trend is −0.98% per year, with a standard error of 0.04%, versus −0.99% with a standard error of 0.06% using only the survey means.) The linear projection to 2015 implies a poverty rate of 9.95% (s.e. = 1.02%), less than one-third of its 1990 value.
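The trend and projection can be reproduced approximately from the mixed-method aggregates in Table VIII: a simple OLS fit on the nine $1.25-a-day rates gives the −0.98 points-per-year slope reported above, and a 2015 projection close to (though not exactly) the 9.95% in the text, which was estimated with standard errors from the full series:

```python
years = [1981, 1984, 1987, 1990, 1993, 1996, 1999, 2002, 2005]
rates = [43.6, 39.6, 36.6, 35.3, 31.7, 27.2, 26.5, 23.7, 18.6]  # Table VIII totals

n = len(years)
xbar = sum(years) / n
ybar = sum(rates) / n
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(years, rates))
         / sum((x - xbar) ** 2 for x in years))
proj_2015 = ybar + slope * (2015 - xbar)  # linear projection to 2015
print(round(slope, 2), round(proj_2015, 1))  # → -0.98 9.9
```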

TABLE VIII
HEADCOUNT INDEX USING MIXED METHOD (%)

Region                             1981   1984   1987   1990   1993   1996   1999   2002   2005
(a) $1.25 a day
East Asia and Pacific              67.1   57.4   49.4   48.5   40.6   28.4   26.5   20.3   12.1
  Of which China                   73.0   62.3   51.9   55.5   45.0   30.6   29.0   22.4   12.1
Europe and Central Asia             1.9    1.7    1.7    2.6    4.8    6.1    5.7    3.8    3.1
Latin America and Caribbean        13.9   16.3   16.7   18.0   15.0   15.8   14.0   15.3    9.8
Middle East and North Africa        7.6    6.5    6.4    5.0    5.0    5.4    5.4    4.4    4.4
South Asia                         42.7   39.3   39.0   33.6   30.4   28.1   28.1   26.2   21.6
  Of which India                   42.3   38.7   38.0   32.2   30.4   26.4   26.4   25.1   20.3
Sub-Saharan Africa                 51.9   54.0   53.7   55.6   55.9   56.5   56.9   55.3   51.0
Total                              43.6   39.6   36.6   35.3   31.7   27.2   26.5   23.7   18.6
(b) $2.00 a day
East Asia and Pacific              89.8   86.0   81.4   78.6   73.0   59.6   56.2   46.6   34.0
  Of which China                   95.4   91.6   85.7   85.4   78.4   63.0   58.8   48.8   33.9
Europe and Central Asia             6.9    6.4    5.8    7.4   11.8   14.7   14.8   10.8    8.2
Latin America and Caribbean        26.5   30.3   29.2   31.8   28.2   29.3   26.4   28.7   19.6
Middle East and North Africa       26.7   24.4   24.0   20.7   20.5   20.7   19.8   17.5   15.8
South Asia                         77.4   75.0   74.7   70.1   67.9   65.4   64.5   62.6   56.8
  Of which India                   77.0   74.6   74.2   69.3   68.4   64.2   63.9   62.4   57.0
Sub-Saharan Africa                 73.1   74.8   74.2   75.2   75.2   76.6   76.9   76.0   73.4
Total                              66.0   64.4   62.5   60.7   58.4   53.7   52.2   48.2   41.0
The mixed method gives a higher poverty rate for LAC and MENA and makes negligible difference for SSA. Other regions see a lower poverty rate. The $1.25-a-day rate for East Asia in 2005 falls from 17% to 12%. The largest change is for South Asia, where by 2005 the poverty rate for India falls to about 20% using the mixed method versus 42% using the unadjusted survey means; the proportionate gap was considerably lower in 1981 (42% using the mixed method versus 60% using the survey mean alone). India accounts for a large share of the discrepancies in the levels of poverty between the benchmark and the mixed method, reflecting both the country's population weight and the large gap that has emerged in recent times between the survey-based and national accounts consumption aggregates for India.

FIGURE VI
Aggregate Poverty Rates over Time for Benchmark and Mixed Method

Figure VI also gives the complete series for $1.25 a day excluding India; it can be seen that the gap between the two methods narrows over time. If we focus on the poverty rates for the developing world excluding India, then the difference between the mixed method and the benchmark narrows considerably, from 21.1% (918 million people) to 18.2% (794 million) in 2005. (The $2.00 poverty rates are 39.8% and 36.7%, respectively.) In 2005, about two-thirds of the drop in the count of the number of people living under $1.25 a day in moving from the benchmark to the mixed method is due to India.
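The "about two-thirds" attribution to India follows directly from the counts reported above:

```python
# $1.25-a-day counts of the poor in 2005 (millions)
benchmark_all, mixed_all = 1374, 1017  # whole developing world
benchmark_exi, mixed_exi = 918, 794    # excluding India

drop_all = benchmark_all - mixed_all   # 357 million fewer poor under mixing
drop_exi = benchmark_exi - mixed_exi   # 124 million of the drop excludes India
india_share = (drop_all - drop_exi) / drop_all
print(round(india_share, 2))  # → 0.65, i.e., about two-thirds
```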

VIII. CONCLUSIONS

Global poverty measurement combines data from virtually all branches of the statistical system. The measures reported here bring together national poverty lines, household surveys, census data, national accounts, and both national and international price data. Inevitably there are comparability and consistency problems when combining data from such diverse sources. Price indices for cross-country comparisons do not always accord well with those used for intertemporal comparisons within countries. In some countries, the surveys give a different picture of average living standards from the national accounts, and the methods used in both surveys and national accounts differ across countries.


However, thanks to the efforts and support of governmental statistics offices and international agencies, and improved technologies, the available data on the three key ingredients in international poverty measurement—national poverty lines, representative samples of household consumption expenditures (or incomes) and data on prices—have improved greatly since global poverty monitoring began. The expansion of country-level poverty assessments since the early 1990s has greatly increased the data available on national poverty lines. Side-by-side with this, the country coverage of credible household survey data, suitable for measuring poverty, has improved markedly, the frequency of data has increased, public access to these data has improved, and the lags in data availability have been reduced appreciably. And with the substantial global effort that went into the 2005 International Comparison Program we are also in a better position to assure that the poverty lines used in different countries have similar purchasing power, so that two people living in different countries but with the same real standard of living are treated the same way. The results of the 2005 ICP imply a higher cost of living in developing countries than past ICP data have indicated; the “Penn effect” is still evident, but it has been overstated. We have combined the new data on prices from the 2005 ICP and household surveys with a new compilation of national poverty lines, which updates (by fifteen years on average) the old national lines used to set the original $1-a-day line. Importantly, the new compilation of national lines is more representative of developing countries, given that the sample size is larger and it corrects the sample biases in the old data set. The pure effect of the PPP revisions is to bring the poverty count down but this is outweighed by the higher level of the national poverty lines in the poorest countries, as used to determine the international line. 
Our new calculations using the 2005 ICP and the new international poverty line of $1.25 a day imply that 25% of the population of the developing world, 1.4 billion people, were poor in 2005, which is 400 million more for that year than implied by our old international poverty line, based on national lines for the 1980s and the 1993 ICP. In China alone, which had not previously participated officially in the ICP, the new PPP implies that an extra 10% of the population is living below our international poverty line. But the impact is not confined to China; there are upward revisions to our past estimates for all regions. The higher global count is in no small measure the result of correcting the sample bias in the original compilation of national poverty lines used to set the old "$1-a-day" line.

Although there are a number of data and methodological issues that caution against comparisons across different sets of PPPs, it is notable that our poverty count for 2005 is quite robust to using alternative PPPs anchored to the consumption patterns of those living near the poverty line. Of course, different methods of determining the international poverty line give different poverty counts. If we use a line of $1.00 a day at 2005 PPP (almost exactly India's official poverty line) then we get a poverty rate of 16%—slightly under 900 million people—whereas if we use the median poverty line for all developing countries in our poverty-line sample, namely $2.00 a day, then the poverty rate rises to 50%, slightly more than two billion people.

As a further sensitivity test we have proposed a simple Bayesian method of mixing the consumption data from the national accounts with that from survey means, whereby the survey mean is replaced by the geometric mean of the survey mean and its predicted value based on prior national accounts data. This is justified only under potentially strong assumptions, notably that consumption is identically log-normally distributed between the (national-accounts-based) prior and the surveys. These assumptions can be questioned, but they do at least provide a clear basis for an alternative hybrid estimator. This gives a lower poverty count for 2005, namely 19% living below $1.25 a day rather than 25%. A large share of this gap—two-thirds of the drop in the count of the number of poor in switching to the mixed method—is due to India's (unusually large) discrepancy between consumption measured in the national accounts and that measured by surveys.

Although the new data suggest that the developing world is poorer than we thought, it has been no less successful in reducing the incidence of absolute poverty since the early 1980s.
Indeed, the overall rate of progress against poverty is fairly similar to past estimates and robust to our various changes in methodology. The trend rate of global poverty reduction of 1% per year turns out to be slightly higher than we had estimated previously, due mainly to the higher weight on China's remarkable pace of poverty reduction. The trend is even higher if we use our Bayesian mixed method. The developing world as a whole is clearly still on track to attaining the first Millennium Development Goal of halving the 1990 "extreme poverty" rate by 2015. China attained the MDG early in the millennium, almost 15 years ahead of the target date. However, the developing world outside China will not attain the MDG without a higher rate of poverty reduction than we have seen over 1981–2005. The persistently high incidence and depth of poverty in SSA are particularly notable. There are encouraging signs of progress in this region since the late 1990s, although lags in survey data availability and problems of comparability and coverage leave us unsure about how robust this will prove to be.

DEVELOPMENT RESEARCH GROUP, WORLD BANK
DEVELOPMENT RESEARCH GROUP, WORLD BANK

REFERENCES

Ackland, Robert, Steve Dowrick, and Benoit Freyens, "Measuring Global Poverty: Why PPP Methods Matter," Australian National University, Canberra, Mimeo, 2007.
African Development Bank, Comparative Consumption and Price Levels in African Countries (Tunis, Tunisia: African Development Bank, 2007).
Ahmad, Sultan, "Purchasing Power Parity for International Comparison of Poverty: Sources and Methods," Development Data Group, World Bank, 2003.
Asian Development Bank, Comparing Poverty Across Countries: The Role of Purchasing Power Parities (Manila, the Philippines: Asian Development Bank, 2008).
Atkinson, Anthony B., "On the Measurement of Poverty," Econometrica, 55 (1987), 749–764.
Balassa, Bela, "The Purchasing Power Parity Doctrine: A Reappraisal," Journal of Political Economy, 72 (1964), 584–596.
Bhalla, Surjit, Imagine There's No Country: Poverty, Inequality and Growth in the Era of Globalization (Washington, DC: Institute for International Economics, 2002).
Bidani, Benu, and Martin Ravallion, "A New Regional Poverty Profile for Indonesia," Bulletin of Indonesian Economic Studies, 29 (1993), 37–68.
Bourguignon, François, and Christian Morrisson, "Inequality among World Citizens: 1820–1992," American Economic Review, 92 (2002), 727–744.
Central Statistical Organization, Report of the Group for Examining Discrepancy in PFCE Estimates from NSSO Consumer Expenditure Data and Estimates Compiled by National Accounts Division (New Delhi: Central Statistical Organization, Ministry of Statistics, Government of India, 2008).
Chaudhuri, Shubham, and Martin Ravallion, "How Well Do Static Indicators Identify the Chronically Poor?" Journal of Public Economics, 53 (1994), 367–394.
Chaudhury, Nazmul, Jeffrey Hammer, Michael Kremer, Karthik Muralidharan, and F. Halsey Rogers, "Missing in Action: Teacher and Health Worker Absence in Developing Countries," Journal of Economic Perspectives, 20 (2006), 91–116.
Chen, Shaohua, and Martin Ravallion, "How Did the World's Poor Fare in the 1990s?" Review of Income and Wealth, 47 (2001), 283–300.
——, "How Have the World's Poorest Fared since the Early 1980s?" World Bank Research Observer, 19 (2004), 141–170.
——, "Absolute Poverty Measures for the Developing World, 1981–2004," Proceedings of the National Academy of Sciences, 104 (2007), 16,757–16,762.
——, "The Developing World Is Poorer Than We Thought, But No Less Successful in the Fight against Poverty," World Bank Policy Research Working Paper 4703, 2009.


——, "China Is Poorer Than We Thought, But No Less Successful in the Fight against Poverty," in Debates on the Measurement of Poverty, Sudhir Anand, Paul Segal, and Joseph Stiglitz, eds. (Oxford, UK: Oxford University Press, 2010).
Cuthbert, James R., and Margaret Cuthbert, "On Aggregation Methods of Purchasing Power Parities," Working Paper 56, Department of Economics and Statistics, OECD, Paris, 1988.
Dalgaard, Esben, and Henrik Sørensen, "Consistency between PPP Benchmarks and National Price and Volume Indices," Paper Presented at the 27th General Conference of the International Association for Research on Income and Wealth, Sweden, 2002.
Deaton, Angus, "Measuring Poverty in a Growing World (or Measuring Growth in a Poor World)," Review of Economics and Statistics, 87 (2005), 353–378.
——, "Price Indexes, Inequality, and the Measurement of World Poverty," Presidential Address to the American Economic Association, Atlanta, GA, 2010.
Deaton, Angus, and Olivier Dupriez, "Global Poverty and Global Price Indices," Development Data Group, World Bank, Mimeo, 2009.
Deaton, Angus, and Alan Heston, "Understanding PPPs and PPP-Based National Accounts," American Economic Journal: Macroeconomics, forthcoming, 2010.
Deaton, Angus, and Salman Zaidi, "Guidelines for Constructing Consumption Aggregates for Welfare Analysis," World Bank Living Standards Measurement Study Working Paper 135, 2002.
Diewert, Erwin, "New Methodology for Linking Regional PPPs," International Comparison Program Bulletin, 5 (2008), 10–14.
Groves, Robert E., and Mick P. Couper, Nonresponse in Household Interview Surveys (New York: Wiley, 1998).
Hansen, Bruce E., "Sample Splitting and Threshold Estimation," Econometrica, 68 (2000), 575–603.
Iklé, Doris, "A New Approach to the Index Number Problem," Quarterly Journal of Economics, 86 (1972), 188–211.
Karshenas, Massoud, "Global Poverty: National Accounts Based versus Survey Based Estimates," Development and Change, 34 (2003), 683–712.
Korinek, Anton, Johan Mistiaen, and Martin Ravallion, "Survey Nonresponse and the Distribution of Income," Journal of Economic Inequality, 4 (2006), 33–55.
——, "An Econometric Method of Correcting for Unit Nonresponse Bias in Surveys," Journal of Econometrics, 136 (2007), 213–235.
Lanjouw, Peter, and Martin Ravallion, "Poverty and Household Size," Economic Journal, 105 (1995), 1415–1435.
Lopez, Humberto, and Luis Servén, "A Normal Relationship? Poverty, Growth and Inequality," World Bank Policy Research Working Paper 3814, 2006.
Nuxoll, Daniel A., "Differences in Relative Prices and International Differences in Growth Rates," American Economic Review, 84 (1994), 1423–1436.
Pinkovskiy, Maxim, and Xavier Sala-i-Martin, "Parametric Estimations of the World Distribution of Income," NBER Working Paper 15433, 2009.
Planning Commission, Report of the Expert Group to Review the Methodology for Estimation of Poverty (New Delhi: Planning Commission, Government of India, 2009).
Ravallion, Martin, Poverty Comparisons (Chur, Switzerland: Harwood Academic Press, 1994).
——, "Measuring Aggregate Economic Welfare in Developing Countries: How Well Do National Accounts and Surveys Agree?" Review of Economics and Statistics, 85 (2003), 645–652.
——, "A Global Perspective on Poverty in India," Economic and Political Weekly, 43 (2008a), 31–37.
——, "On the Welfarist Rationale for Relative Poverty Lines," in Social Welfare, Moral Philosophy and Development: Essays in Honour of Amartya Sen's Seventy Fifth Birthday, Kaushik Basu and Ravi Kanbur, eds. (Oxford: Oxford University Press, 2008b).
——, "Poverty Lines," in The New Palgrave Dictionary of Economics, Larry Blume and Steven Durlauf, eds. (London: Palgrave Macmillan, 2008c).
——, "Understanding PPPs and PPP-Based National Accounts: A Comment," American Economic Journal: Macroeconomics, forthcoming, 2010.

POVERTY IN THE DEVELOPING WORLD

1625

Ravallion, Martin, and Shaohua Chen, “China’s (Uneven) Progress against Poverty,” Journal of Development Economics, 82 (2007), 1–42. ——, “Weakly Relative Poverty,” Review of Economics and Statistics, in press, 2010. Ravallion, Martin, Shaohua Chen, and Prem Sangraula, “New Evidence on the Urbanization of Global Poverty,” Population and Development Review, 33 (2007), 667–702. ——, “Dollar a Day Revisited,” World Bank Economic Review, 23 (2009), 163–184. Ravallion, Martin, Gaurav Datt, and Dominique van de Walle, “Quantifying Absolute Poverty in the Developing World,” Review of Income and Wealth, 37 (1991), 345–361. Sala-i-Martin, Xavier, “The World Distribution of Income: Falling Poverty and . . . Convergence. Period,” Quarterly Journal of Economics, 121 (2006), 351–397. Samuelson, Paul, “Theoretical Notes on Trade Problems,” Review of Economics and Statistics, 46 (1964), 145–154. Slesnick, Daniel, “Empirical Approaches to Measuring Welfare,” Journal of Economic Literature, 36 (1998), 2108–2165. Summers, Robert, and Alan Heston, “The Penn World Table (Mark 5): An Extended Set of International Comparisons, 1950–1988,” Quarterly Journal of Economics, 106 (1991), 327–368. United Nations, Evaluation of the International Comparison Programme (New York: UN Statistics Division, 1998). World Bank, World Development Report: Poverty (New York: Oxford University Press, 1990). ——, India: Achievements and Challenges in Reducing Poverty, Report No. 16483IN, 1997. ——, Comparisons of New 2005 PPPs with Previous Estimates (Revised Appendix G to World Bank [2008b]) (www.worldbank.org/data/icp, 2008a). ——, Global Purchasing Power Parities and Real Expenditures. 2005 International Comparison Program (www.worldbank.org/data/icp, 2008b).

POST-1500 POPULATION FLOWS AND THE LONG-RUN DETERMINANTS OF ECONOMIC GROWTH AND INEQUALITY∗

LOUIS PUTTERMAN AND DAVID N. WEIL

We construct a matrix showing the share of the year 2000 population in every country that is descended from people in different source countries in the year 1500. Using the matrix to adjust indicators of early development so that they reflect the history of a population's ancestors rather than the history of the place they live today greatly improves the ability of those indicators to predict current GDP. The variance of the early development history of a country's inhabitants is a good predictor for current inequality, with ethnic groups originating in regions having longer histories of organized states tending to be at the upper end of a country's income distribution.

I. INTRODUCTION

Economists studying income differences among countries have been increasingly drawn to examine the influence of long-term historical factors. Although the theories underlying these analyses vary, the general finding is that things that were happening 500 or more years ago matter for economic outcomes today. Hibbs and Olsson (2004) and Olsson and Hibbs (2005), for example, find that geographic factors that predict the timing of the Neolithic revolution in a region also predict income and the quality of institutions in 1997. Comin, Easterly, and Gong (2006, 2010) show that the state of technology in a country 500, 2,000, or even 3,000 years ago has predictive power for the level of output today. Bockstette, Chanda, and Putterman (2002) find that an index of the presence of state-level political institutions from year 1 to 1950 has positive correlations, significant at the 1% level, with both 1995 income and 1960–1995 income growth. And Galor and Moav (2007) provide empirical evidence for a link from the timing of the transition to agriculture to current variations in life expectancy.

∗ We thank Charles Jones, Oded Galor, and seminar participants at Ben Gurion University, Brown University, the University of Haifa, Hebrew University of Jerusalem, the NBER Summer Institute, the Stockholm School of Economics, the CEGE annual conference at the University of California at Davis, Tel Aviv University, and University College London for helpful comments. We also thank Federico Droller, Bryce Millett, Momotazur Rahman, Isabel Tecu, Ishani Tewari, Yaheng Wang, and Joshua Wilde for valuable research assistance. Louis [email protected]; David [email protected]

C 2010 by the President and Fellows of Harvard College and the Massachusetts Institute of Technology. The Quarterly Journal of Economics, November 2010

Examining this sort of historical data immediately raises a problem, however: the further back into the past one looks, the more the economic history of a given place tends to diverge from the economic history of the people who currently live there. For example, the territory that is now the United States was inhabited in 1500 largely by hunting, fishing, and horticultural communities with pre-iron technology, organized into relatively small, pre-state political units.1 In contrast, a large fraction of the current U.S. population is descended from people who in 1500 lived in settled agricultural societies with advanced metallurgy, organized into large states. The example of the United States also makes it clear that, because of migration, the long-historical background of the people living in a given country can be quite heterogeneous. This observation, combined with the finding that the long history of a country’s residents affects the average level of income, naturally raises the question of whether heterogeneity in the background of a country’s residents is a determinant of income inequality within the country. Previous attempts to deal with the impact of migration in modifying the influence of long-term historical factors have been somewhat ad hoc. Hibbs and Olsson, for example, acknowledge the need to account for the movement of peoples and their technologies, but do so only by treating four non-European countries (Australia, Canada, New Zealand, and the United States) as if they were in Europe. Comin, Easterly, and Gong (2006) similarly add dummy variables to their regression model for countries with “major” European migration (the four mentioned above) and “minor” European migration (mostly in Latin America).2 In other cases, variables meant to measure other things may in fact be proxying for migration. For example, the measure of the origin of a country’s legal systems examined by La Porta et al. (1998) may be proxying for the origins of countries’ people. 
This is also true of Hall and Jones’s (1999) proportion speaking European languages measure. The apparent effect of institutions that were either brought along by European settlers or imposed by nonsettling colonial powers, as found in Acemoglu, Johnson, and Robinson 1. Anthropologists subscribing to cultural evolutionary models speak of political institutions evolving from the band to the tribe to the chiefdom and finally the state (see, for instance, Johnson and Earle [1987]). There were no pre-Columbian states north of the Rio Grande, according to such schema. 2. Comin, Easterly, and Gong use this technique in their 2006 working paper. In the 2010 version of the paper, they adjust for migration using Version 1.0 of our migration matrix.


(2001, 2002), may be proxying for population shifts themselves, despite their attempt (discussed below) to control for the European-descended population share. In this paper we pursue the issue of migration's role in shaping the current economic landscape in a much more systematic fashion than previous literature. (Throughout the paper, we use the term “migration” to refer to any movement of population across current national borders, although we are cognizant that these movements included transport of slaves and forced relocation as well as voluntary migration.) We construct a matrix detailing the year-1500 origins of the current population of almost every country in the world. In addition to the quantity and timing of migration, the matrix also reflects differential population growth rates among native and immigrant population groups. The matrix can be used as a tool to adjust historical data to reflect the status in the year 1500 of the ancestors of a country's current population. That is, we can convert any measure applicable to countries into a measure applicable to the ancestors of the people who now live in each country. We use this technique to examine how early development impacted current income and inequality. The most thorough previous work along these lines is in the papers by Acemoglu, Johnson, and Robinson (AJR) mentioned above, where they calculate the share of the population that is of European descent for 1900 and 1975. There are a number of conceptual and operational differences between our approach and theirs. Our estimates break down ancestor populations much more finely than “European” and “non-European.” This distinction is important both in the Americas, where there is great variation in the fraction of the population descended from Amerindians vs.
Africans, and also in other regions, where important nonnative populations are not descended from Europeans (consider the large Chinese-descended populations in Singapore and Malaysia, or Indian-descended populations in South Africa, Malaysia, and Fiji). Even when we use our matrix to construct a measure of the European population fraction, there are considerable differences between our data and AJR’s. They use as their measure of the European population the fraction of people who are “white,” whereas we also include an estimate of the fraction of European ancestors among mestizo populations. In Mexico, for example, AJR estimate the European population in 1975 to be 15%, even though (in their data) there is an additional 55% of the


population that is mestizo. Our estimate of the European share of ancestors for today’s Mexicans is 30%. The AJR estimates are primarily based on data in McEvedy and Jones (1978), which sometimes apply to whole regions, and occasionally involve extrapolation from as far in the past as 1800. Our data are based on a broader selection of more recent sources, including genetic analyses, encyclopedias, government reports, and compilations by religious groups, which are summarized in Appendix I and Online Appendix B.3 The correlation between our measure of the European fraction and the AJR measure is 0.89.4 The rest of this paper is structured as follows. Section II describes the construction of our migration matrix and then uses the matrix to lay out some of the important facts regarding the population movements that have reshaped genetic and cultural landscapes in the world since 1500. We find that a significant minority of the world’s countries have populations mainly descended from the people of other continents and that these countries themselves are quite heterogeneous. In Section III, we apply our migration matrix to analyze the determinants of current income. Using several measures of early development, we show that adjusting the data to reflect where people’s ancestors came from improves the ability of measures of early social and technological development to predict current levels of income. The positive effect of ancestry-adjusted early development on current income is robust to the inclusion of a variety of controls for geography, climate, and current language. We also examine the effect on current income of heterogeneity in early development. We find that, holding constant the average level of early development, heterogeneity in early development raises current income, a finding that might indicate spillovers of growth-promoting traits among national origin groups. In Section IV, we turn to the issue of inequality. 
We show that heterogeneity in the early development of a country’s ancestors predicts current income inequality and that this effect 3. Appendix I briefly describes our sources and methods. Online Appendix B provides further details, including summaries of the factors behind the estimate for each row. The entire matrix and all Appendices can be downloaded at http://www.econ.brown.edu/fac/Louis Putterman/. 4. The largest differences occur in the Americas. For example, for the five Central American countries of El Salvador, Nicaragua, Panama, Costa Rica, and Honduras, AJR use a uniform value of 20% European; our estimates range from 45% in Panama to 60% in Costa Rica. The largest outlier in the other direction is Trinidad and Tobago, which they list as 40% European and which is only 7% in our measure. Here they seem to have erroneously counted all non-Africans as European, despite the presence of a large Asian population.


is robust to the inclusion of several other measures of the heterogeneity of the current population. We also show that ethnic groups originating in regions with higher levels of early development tend to be placed higher in a recipient country’s income distribution. Section V concludes.

II. LARGE-SCALE POPULATION MOVEMENTS SINCE 1500

We use the year 1500 as a rough starting point for the era of European colonization of the other continents. It is well known that most contemporary residents of countries such as Australia and the United States are not descendants of their territory's inhabitants circa 1500 but of people who arrived subsequently from Europe, Africa, and other regions. But exactly what proportions of the ancestors of today's inhabitants of each country derive from what regions and from the territories of which present-day countries has not been systematically studied. Accordingly, we examined a wide array of secondary compilations to form the best available estimates of where the ancestors of the long-term residents of today's countries were living in 1500. Generally, these estimates have to work back from information presented in terms of ethnic groupings in modern populations. For example, sources roughly agree on the proportion of Mexico's population considered to be mestizo, that is, to have both Spanish and indigenous ancestors, on the proportion having exclusively Spanish ancestors, on the proportion exclusively indigenous, and on the proportion descended from migrants from other countries. There is similar agreement about the proportion of Haitians descended from African slaves, the proportion of people of (East) Indian origin in Guyana, the proportion of “mixed” and “Asian” people in South Africa, and so on. A crucial and challenging piece of our methodology is the attribution, with proper weights, of mixed populations such as mestizos and mulattoes to their original source countries. Saying, for example, that Mexican mestizos are descended from Spanish immigrants and native Mexicans gives no information about the shares of these different groups in their ancestry. Socially constructed descriptions of race and ethnicity may differ from the mathematical contributions to individuals' ancestry in which we are interested.
Contributions from particular groups may be suppressed, exaggerated, or simply forgotten.


For these reasons, whenever possible we have used genetic evidence as the basis for dividing the ancestry of modern mixed groups that account for large fractions of their country’s population.5 The starting point for this analysis is differences in the frequencies with which different alleles (alternative DNA sequences at a fixed position on a chromosome) appear in ancestor populations from different parts of the world. Comparing the allele frequency in a modern population with the frequency in source populations, one can derive an estimate of the percentage contribution for each source. Early studies in this literature used blood group frequencies in modern populations to estimate ancestry. More recent studies use allele frequencies for multiple genes. In selecting among studies, we favored those based on larger samples with well-identified source populations as well as those done in more recent years using modern techniques.6 The genetic studies we consulted were sometimes of specific groups (such as mestizos) and sometimes of the population as a whole, unconditional on race or ethnicity. In the former case, we applied the genetic evidence to divide up ancestry in the particular mixed group, and multiplied by that group’s representation in the overall population.7 Examination of this genetic evidence produced a number of surprises regarding the ancestry of New World populations. For example, the usual historical narrative is that many native populations in the Caribbean, such as the Arawak who occupied the island of Hispaniola (present-day Haiti and the Dominican Republic), died out during the early decades of colonial rule due to disease and the effects of enslavement. However, genetic evidence 5. By “large,” we mean 30% or greater. In addition, we incorporated findings from genetic studies on U.S. 
African-Americans and on Puerto Ricans and Costa Ricans of primarily Spanish descent, for whom modern genetic studies indicate appreciable admixture (with Europeans and Amerindians, respectively) since 1500. 6. We focus on autosomal DNA, which is not sex-linked, in preference to information on either the Y chromosome, which indicates descent along the male line, or mitochondrial DNA, which indicates descent along the female line. However, evidence from sex-linked genes can provide a useful check on our historical understanding. For example, among many mixed populations in the Caribbean, Native American characteristics are far more common in mitochondrial DNA than on Y chromosomes, indicating that native men were largely unable to breed, whereas native women produced children with European and African men. 7. We used genetic evidence in our analyses of Belize, Bolivia, Brazil, Cape Verde, Chile, Colombia, Costa Rica, Cuba, the Dominican Republic, Ecuador, Guatemala, Mexico, Nicaragua, Paraguay, Peru, Puerto Rico, the United States, and Venezuela. We also searched for genetic data for other countries for which our conventional sources list large mixed-ancestry populations, but were unsuccessful in finding anything in the cases of El Salvador, Honduras, and Panama. See Section II.4 of Main Appendix 1.1 of Online Appendix B as well as the individual country entries in the regional Appendices for details.
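The admixture arithmetic described above can be illustrated with a toy single-weight calculation. This is our own sketch, not the estimators used in the studies cited in the text (which use many loci and likelihood-based methods), and the allele frequencies below are invented for illustration:

```python
# Toy admixture estimation from allele frequencies. If a modern
# population mixes sources A and B with weight w on A, then at each
# locus: p_modern = w * p_A + (1 - w) * p_B. With several loci, w can
# be estimated by least squares. Frequencies here are invented.
p_A = [0.10, 0.60, 0.30]   # allele frequencies in source population A
p_B = [0.50, 0.20, 0.90]   # allele frequencies in source population B
p_m = [0.30, 0.40, 0.60]   # frequencies observed in the modern mix

# Closed-form least squares for the single unknown weight w:
# minimize sum_i (p_m[i] - p_B[i] - w * (p_A[i] - p_B[i]))**2
num = sum((pm - pb) * (pa - pb) for pa, pb, pm in zip(p_A, p_B, p_m))
den = sum((pa - pb) ** 2 for pa, pb in zip(p_A, p_B))
w = num / den   # estimated ancestry share contributed by source A
```

With the invented frequencies above, the modern population is an exact 50/50 mix, so the estimator recovers w = 0.5.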


suggests that of the ancestors of current residents of the Dominican Republic alive in 1500, 3.6% were local Amerindians. In the case of Costa Rica, 86.5% of residents describe themselves as being of Spanish origin, but genetic evidence (unconditional on ethnicity or race) shows Costa Ricans' ancestry (apart from a small Chinese minority) to be 61% Spanish, 30% Amerindian, and 9% African. A final example: the genetic data we examined show a significant contribution of Africans (10%) to the ancestry of the mestizos who make up 60% of Mexico's population. In cases where genetic evidence on the ancestry of mixed groups was not available, we relied on textual accounts and/or generalizations from countries with similar histories for which genetic data were available. Genetic information can distinguish only between broad ancestry groups, such as Africans, Native Americans, and Europeans. Beyond this genetic information, other sources were brought to bear to help in the decomposition of mixed categories. For example, we use an archive on the slave trade to estimate the proportion of slaves in a given region who originated from parts of Africa identifiable with certain present-day countries. We apply estimates of where the world's Ashkenazi Jews and Gypsies lived in 1500 to map people with these ethnic identifications to specific countries of today. Similarly, in some countries such as the United States and Canada, national censuses contain information on the breakdown by specific country of ancestry. Using these methods, we constructed a matrix of migration since 1500. The matrix has 165 rows, each for a present-day country, and 172 columns (the same 165 countries plus seven other source countries with current populations of less than one half million). Its entries are the proportion of long-term residents' ancestors estimated to have lived in each source country in 1500. Each row sums to one.
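These row conventions, and the ancestry adjustment they support, can be sketched in a few lines of code. The shares are those of the Malaysia row reported below in the text; the `early_dev` scores are invented placeholders for illustration, not the paper's data:

```python
# A migration-matrix row: shares of Malaysia's current population's
# ancestry attributable to each source country in 1500 (from the text).
malaysia_row = {
    "Malaysia": 0.60, "China": 0.26, "India": 0.075,
    "Indonesia": 0.04, "Philippines": 0.025,
}
assert abs(sum(malaysia_row.values()) - 1.0) < 1e-9  # each row sums to one

def ancestry_adjust(row, measure):
    """Convert a country-level year-1500 measure into an ancestor-level
    one: the row-weighted average over source countries."""
    return sum(share * measure[src] for src, share in row.items())

# Invented placeholder early-development scores, for illustration only.
early_dev = {"Malaysia": 0.4, "China": 0.9, "India": 0.8,
             "Indonesia": 0.5, "Philippines": 0.3}
adjusted = ancestry_adjust(malaysia_row, early_dev)  # ≈ 0.5615
```

The same weighted average is what converts "history of the place" into "history of the people," which is the adjustment applied to the early-development measures in Section III.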
To give an example, the row for Malaysia has five nonzero entries, corresponding to the five source countries for the current Malaysian population: Malaysia (0.60), China (0.26), India (0.075), Indonesia (0.04) and the Philippines (0.025). Throughout our analysis, we take a “fractional” view of ancestry and descent. Thus matrix entries measure the fraction of a country’s ancestry attributable to different source countries, without distinguishing between whether descendants from those source countries have mixed together or remained ethnically pure (although we did use this information in constructing the matrix). Similarly, when we calculate the number of descendants from a


FIGURE I (a) Distribution of Countries by Proportion of Ancestors from Own or Immediate Neighboring Country; (b) Distribution of World Population by Proportion of Ancestors from Own or Immediate Neighboring Country

source country we add up people based on the fraction of their ancestry attributable to the source country. The principal diagonal of the matrix provides a quick indication of differences in the degree to which countries are now populated by the ancestors of their historical populations. The diagonal entries for China and Ethiopia (with shares below one-half percent being ignored) are 1.0, whereas the corresponding entries for Jamaica, Haiti, and Mauritius are 0.0 and that of Fiji is close to 0.5. In some cases, the diagonal entry may give a misleading impression without further analysis; for example, the diagonal entry for Botswana is 0.31 because only 31% of Botswanans’ ancestors are estimated to have lived in present-day Botswana in 1500, but another 67% were Africans who migrated to Botswana from what is now neighboring South Africa in the seventeenth and eighteenth centuries. Figures Ia and Ib are histograms of the proportions of countries and people, respectively, falling into decile bands with respect to the proportion of the current people’s ancestors residing


in the same or an immediate neighboring country in 1500.8 The figures show bimodal distributions, with 9.7% of countries having 0% to 10% indigenous or near-indigenous ancestry and 70.3% of countries having 90% to 100% such ancestry. Altogether, 80.9% of the world's people (excluding those in the smallest countries, which are not covered) live in countries that are more than 90% indigenous in population, whereas 10.0% live in countries that are less than 30% indigenous, with the rest (dominated by Central America, the Andes, and Malaysia) falling in between. The compositions of nonindigenous populations are also of interest. The populations of Australia, New Zealand, and Canada are overwhelmingly of European origin, whereas Central American and Andean countries have both large Amerindian and substantial European-descended populations, and Caribbean countries and Brazil have substantial African-descended populations. Guyana, Fiji, Malaysia, and Singapore are among the countries with substantial minorities descended from South Asians, whereas Malaysia and Singapore also have large Chinese-descended populations.9 We illustrate differences both in the proportions of people of nonlocal descent and in the composition of those people by means of Figure II. Country shading indicates the proportion of the population not descended from residents of the same or immediate neighboring countries. Pie charts, drawn for thirteen macro-regions, show the average proportions descended from European migrants, from migrants (or slaves) from Africa, and from migrants from other regions, as well as the proportion descended from people of the same region.10 In terms 8. We define an immediate neighbor as sharing a land boundary or being separated by less than 24 miles of water. Data are from the Correlates of War Project (2000). 9.
The populations of Hong Kong and Taiwan are also overwhelmingly descended from Chinese who came to their territories after 1500, giving those entities 97.1% and 98% ancestry from what is now China, according to the matrix. 10. Regions were defined with the aim of keeping their number small enough for purposes of display and grouping countries with similar population profiles. The Caribbean includes Cuba, the Dominican Republic, Haiti, Jamaica, Puerto Rico, and Trinidad and Tobago. Europe is inclusive of the Russian Republic. North Africa and West and Central Asia includes all African and Asian countries bordering the Mediterranean, including Turkey, the traditional Middle East, Afghanistan, and former Soviet republics in the Caucasus and Central Asia. South Asia includes Pakistan, India, Bangladesh, Sri Lanka, Nepal, and Bhutan. East Asia includes Mongolia, China, Hong Kong, North and South Korea, Japan, and Taiwan. Southeast Asia includes the remainder of Asia plus New Guinea and Fiji. Note that for calculation of the pie chart shares, ancestors are assumed to be from “the same region” if they are from countries in the regions thus indicated. This assumption means that Europeans are left out of the “European migrant” category of the pie charts if they live in Europe, even if they have migrated within the continent, and likewise for sub-Saharan Africans in SSA.


FIGURE II Regional Ethnic Origins



of territory, about half the world’s land mass (excluding Greenland and Antarctica), comprising almost all of Africa, Europe, and Asia, is in countries with almost entirely indigenous populations (shown in black), whereas about one-third has less than 20% indigenous inhabitants, and the remainder, dominated by Central America, the Andes, and Malaysia, falls somewhere in between. The heterogeneity of regions in the Americas and Australia/New Zealand is highlighted by the pie charts, showing strong European dominance in Australia/New Zealand, the United States, Canada, and eastern South America, stronger indigenous presence in the Andes, and strong African representation in the Caribbean. We consider the effects of this heterogeneity in Section IV. Although we are mostly interested in using the migration matrix to better understand the determinants of long-run economic performance in countries as presently populated, the versatility of the data can be illustrated by using them to calculate the number of descendants of populations that lived five centuries ago and to see how they have fared. Given data on country populations in 2000, the matrix will tell the total number of people today who are descended from each 1500 source country, and where on the globe they are to be found. For instance, using 2000 population figures from the Penn World Tables 6.2, we find that there were 32.9 million descendants of 1500’s Irish alive at the turn of the millennium, of whom 11.3% lived in Ireland itself, 77.2% in the United States, 5.0% in Australia, and 4.1% in Canada. Combining the information in the matrix with population data for the years 1500 and 2000 yields a number of interesting insights. Because population data for 1500 are very noisy, particularly at the country level, we confine our analysis to looking at 11 large regions.11 The first two columns of Table I list the estimated population of each region in 1500 and 2000. 
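The descendant count just described is a column operation on the matrix: weight each country's year-2000 population by its ancestry share from the source country and sum. A minimal sketch with a made-up two-country world (the numbers are invented for illustration, not the paper's estimates):

```python
# Counting descendants of a 1500 source country's population: weight
# each modern country's population by its ancestry share from that
# source and sum. The tiny two-country "world" here is invented.
pop_2000 = {"A": 100.0, "B": 50.0}           # millions
ancestry = {                                  # each row sums to one
    "A": {"A": 0.8, "B": 0.2},
    "B": {"A": 0.1, "B": 0.9},
}

def descendants_of(source):
    """Total year-2000 population descended from the 1500 source country."""
    return sum(pop_2000[c] * ancestry[c][source] for c in pop_2000)

# Descendants of B's 1500 population: 100*0.2 + 50*0.9 = 65 million,
# of whom 45/65 live in B itself. Because rows sum to one, the counts
# over all sources add up to total world population (150 million).
```

This fractional bookkeeping is why the per-source descendant counts always exhaust the current world population, as noted in footnote 13.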
The third column shows the increase in total population over the 500-year period. The primary determinant of this increase in density is the level of economic development in 1500. Europe, East Asia, and South Asia, which were highly developed, had the smallest increases in density. The United States and Canada, Australia and New Zealand, and the Caribbean, which were relatively lightly populated, lacked urban centers and were still home to many

11. Data are from McEvedy and Jones (1978). The regions are the same as those in Figure II, except that the three parts of South America are collapsed into a single region.

TABLE I
CURRENT POPULATION AND DESCENDANTS, BY REGION

Columns: (1) population in 1500 (millions); (2) population in 2000 (millions); (3) population growth factor; (4) descendants per person of 1500; (5) fraction of current population descended from region's 1500 ancestors; (6) fraction of descendants of 1500 population that live in same region; (7) number of descendants living outside the region (millions).

Region                                  (1)     (2)    (3)    (4)    (5)    (6)    (7)
U.S. and Canada                        1.12     315    281   9.14  .0325   1.00   0.00
Mexico and Central America             5.80     137   23.6   16.8   .602   .846   15.0
The Caribbean                         0.186    34.4    185   17.8  .0367   .381   2.05
South America                          7.65     349   45.6   10.5   .227   .988  0.927
Europe                                 77.7     680   8.76   16.0   .975   .535    578
North Africa/West and Central Asia     35.5     530   14.9   14.6   .939   .958   22.0
South Asia                              103   1,320   12.8   12.9   .999   .990   13.2
East Asia                               132   1,490   11.3   11.6   1.00   .976   36.7
Southeast Asia                         18.7     555   29.7   28.5   .946   .988   6.50
Australia and New Zealand             0.200    22.9    114   3.68  .0322   1.00   0.00
Sub-Saharan Africa                     38.3     656   17.1   19.5   .981   .862    103


preagricultural societies in 1500, had the largest increases.12 The next four columns of the table use the matrix to track the relationship between ancestor and descendant populations. In column (4), we calculate the number of descendants per capita for each region in 1500, which can be thought of as a kind of “genetic success” quotient. The lowest values of this measure are in the United States and Canada and in Australia and New Zealand, where native populations were largely displaced by European colonizers. Among the regions that were relatively developed in 1500, Europe, not surprisingly, has the largest number of descendants per capita. The two regions with the highest genetic success are sub-Saharan Africa and Southeast Asia, which were both relatively poor (and thus less densely populated) in 1500 but in which the native population was hardly at all displaced by migrants. Column (5) calculates the fraction of the current regional population that is descended from the region's own 1500 ancestors. This ranges from 0.03 for the United States and Canada and in Australia and New Zealand to almost one for South Asia and East Asia. Column (6) shows the fraction of descendants of the 1500 population that still live in the same region. This is lowest in the Caribbean (38%), Europe (54%), Mexico and Central America (85%), and sub-Saharan Africa (86%). The last column of the table calculates the total number of people descended from a region's 1500 population who live outside it. There were a total of 777 million such people in 2000, amounting to 12.8% of world population. Here Europe is by far the dominant contributor, with 578 million descendants living outside the region, followed by sub-Saharan Africa with 103 million and East Asia with 37 million.13

12. Estimates of pre-Columbian population in the Americas are highly controversial because of considerable uncertainty about the death rates in the epidemics that followed European contact. Because McEvedy and Jones's estimates fall toward the low end of some more recent appraisals, the resulting estimates of the increase in population density since 1500 could be overstated.

13. It is worth reminding the reader that we calculate “descendants” by adding up fractions of individuals' ancestry. Thus two individuals who each have half their ancestry from Europe add up to one descendant in our usage.

III. REASSESSING THE EFFECTS OF EARLY ECONOMIC DEVELOPMENT

III.A. Measures of Early Development

In the Introduction, we noted that studies including Hibbs and Olsson (2004), Chanda and Putterman (2005), Olsson and Hibbs (2005), and Comin, Easterly, and Gong (2006) find strong


QUARTERLY JOURNAL OF ECONOMICS

correlations between measures of early agricultural, technological, or political development and current levels of economic development, but that these studies make relatively ad hoc adjustments, if any, to account for the large population movements on which this paper focuses. The new migration matrix puts us in a position to remedy these shortcomings and thereby subject the theory that very early development persists in its effects on economic outcomes to a more stringent test.

We use two measures of early development. The first is an index of state history called statehist. The index takes into account whether what is now a country had a supratribal government, the geographic scope of that government, and whether that government was indigenous or imposed by an outside power. The version we use, following Chanda and Putterman (2005, 2007), considers state history for the fifteen centuries to 1500 and discounts the past, reducing the weight on each half century before 1451–1500 by an additional 5%. Let s_{it} be the state history score in country i for fifty-year period t. By definition, s_{it} ranges between 0 and 50: it is 0 if there was no supratribal state, 50 if there was a home-based supratribal state covering most of the present-day country's territory, and 25 if there was supratribal rule over that territory by a foreign power; it takes values from 15 (7.5) to 37.5 (18.75) for home- (foreign-) based states covering between 10% and 50% of the present-day territory or for several small states coexisting on that territory. statehist is the discounted sum of these scores over the thirty half-centuries, normalized to lie between 0 and 1 by dividing by the maximum achievable value (that of a country with s_{it} = 50 in every period). In a formula,

    statehist = ( Σ_{t=0}^{29} (1.05)^{−t} s_{i,t} ) / ( Σ_{t=0}^{29} 50 × (1.05)^{−t} ).

For illustration, Ethiopia has the maximum value of 1, China's statehist value is 0.906 (due to periods of political disunity), Egypt's value is 0.760, Spain's 0.562, Mexico's 0.533, Senegal's 0.398, and Canada, the United States, Australia, and New Guinea have statehist values of 0.14

14. Bockstette, Chanda, and Putterman (2002) and Chanda and Putterman (2005) also use versions of statehist that include data for the years between 1501 and 1950. The variable that we call statehist in this paper is the same as what Chanda and Putterman (2005, 2007) call statehist1500. Details on the construction of the state history index, and the data themselves, can be found in Putterman (2004). Note that by beginning with 1 CE, statehist ignores some differences in the onset of state-level society, that is, those between the most ancient states such as Mesopotamia and Egypt (third millennium BCE), and more recent ones such as Rome and pre-Columbian Mesoamerica (first millennium BCE).
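The statehist computation can be sketched in a few lines; the score sequences below are illustrative, not actual country data.

```python
# Minimal sketch of the statehist index: a discounted sum of fifty-year
# state-presence scores s_it (each between 0 and 50), divided by the
# maximum achievable discounted sum. Scores here are illustrative only.

def statehist(scores, discount=1.05):
    """scores[t] is s_it for half-century t, with t = 0 the most recent
    period (1451-1500); each earlier period is discounted a further 5%."""
    num = sum(s * discount ** (-t) for t, s in enumerate(scores))
    den = sum(50 * discount ** (-t) for t in range(len(scores)))
    return num / den

# a full-coverage home-based state in all thirty periods gives the
# maximum value of 1; foreign rule throughout gives 0.5; no supratribal
# state in any period gives 0
full_history = statehist([50] * 30)
foreign_rule = statehist([25] * 30)
no_state = statehist([0] * 30)
```

Because both sums run over the same discount factors, the index is invariant to the number of periods used, which is what makes values comparable across the thirty half-centuries.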

POST-1500 POPULATION FLOWS AND LONG-RUN GROWTH


Our second measure of early development, agyears, is the number of millennia since a country transitioned from hunting and gathering to agriculture. Unlike a similar measure used by Hibbs and Olsson, which had values for eight macro regions, these data are based on individual country information augmented by extrapolation to fill gaps within regions. The data were assembled by Putterman with Trainor (2006) by consulting region- and country-specific as well as wider-ranging studies on the transition to agriculture, such as MacNeish (1991) and Smith (1995). The variable agyears is simply the number of years prior to 2000, in thousands, since a significant number of people in an area within the country's present borders are believed to have met most of their food needs from cultivated foods. The highest value, 10.5, occurs for four Fertile Crescent countries (Israel, Jordan, Lebanon, and Syria), followed closely by Iraq and Turkey (10), Iran (9.5), China (9), and India (8.5). Near the middle of the pack are countries such as Belarus (4.5), Ecuador (4), Côte d'Ivoire (3.5), and Congo (3).
At the bottom are countries such as Haiti and Jamaica (1), which received crop-growing immigrants from the American mainland only a few hundred years before Columbus; New Zealand (0.8), which obtained agriculture late in the Austronesian expansion; and Cape Verde (0.5), Australia (0.4), and others in which agriculture first arrived with European colonists.15 It is worth noting that, whereas statehist measures a stock of experience with state-level organization that takes into account, for example, setbacks such as the disappearance, breakup, or annexation of an existing state by a neighboring empire, agyears simply measures the time elapsed since agriculture's advent in the country, with no attempt to gauge temporal changes in the kind, intensity, or prevalence of farming within the country's territory.16 We examine each of these variables both in its original form and adjusted to account for migration. Assuming the "early developmental advantages" proxied by statehist and agyears to be

15. For further description, see Putterman with Trainor (2006).
16. The difference is primarily due to data availability. Accounts of the histories of kingdoms, dynasties, and empires are considerably easier to come by than are detailed agricultural histories.


FIGURE III Adjusted vs. Unadjusted statehist

something that migrants bring with them to their new country, the adjusted variables measure the average level of such advantages in a present-day country as the weighted average of statehist or agyears in the countries of ancestry, with weights equal to population shares. For instance, ancestry-adjusted statehist for Botswana is simply 0.312 times the statehist value for Botswana plus 0.673 times statehist for South Africa (referring to the people in South Africa in 1500, not those there presently) plus weights of 0.005 each times the statehist values of France, Germany, and the Netherlands (the ancestral homes of Botswana's small Afrikaner population). Algebraically, the "matrix adjusted" form of any variable is Xv, where X is the migration matrix and v is the variable in its unadjusted form.

Figures III and IV show the effect of this adjustment on the variables statehist and agyears, respectively. The horizontal axis shows the variable in its unadjusted form and the vertical axis shows the variable in its adjusted form. In the case of statehist the data form a sort of check mark: there are a large number of countries along the 45° line, where adjusted and unadjusted statehist are the same because there has been little or no in-migration. These range from China and Ethiopia, with very high levels of statehist, down to eleven countries at or very near the origin, where there was no history of organized states before 1500 and there has been insignificant migration of people from countries that did have organized states in 1500. There are also a large


FIGURE IV Adjusted vs. Unadjusted agyears

number of countries along the vertical axis, where a population that had zero statehist has been replaced by migrants with positive values. There is a great deal of dispersion in the adjusted values of statehist in this group, however, reflecting different mixes of immigrants (primarily European vs. African) and different degrees to which the native population was displaced. Only a handful of countries do not fall into one of these two categories.

In the case of agyears, as shown in Figure IV, there are still many countries along the 45° line where there has been no in-migration. However, because almost all countries had a history of agriculture prior to the spread of European colonization after 1500, there is not the strong vertical element seen in Figure III. In this sense, agyears is clearly picking up a different and prior aspect of early development than statehist.17

III.B. The Effect of Early Development on Current National Income

Table II shows the results of regressing the log of year 2000 per capita income on our early development measures. Each

17. Agriculture began in places such as the Fertile Crescent, China, and Mesoamerica millennia before states arose there, and there are numerous present-day countries, for example, in the Americas and Africa, on the territories of which agriculture had arisen but states had not as of 1500.
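The ancestry adjustment just described, the matrix product Xv, amounts to a population-weighted average of source-country values. A minimal sketch, using the Botswana-style weights from above but hypothetical statehist values:

```python
# Sketch of the ancestry adjustment: the adjusted value for a country is
# its row of the migration matrix X times the vector v of unadjusted
# values. The weights follow the Botswana example in the text; the
# statehist values themselves are hypothetical.

countries = ["Botswana", "South Africa", "France", "Germany", "Netherlands"]
statehist_unadj = [0.34, 0.22, 0.95, 0.90, 0.88]   # hypothetical v

# ancestry shares of Botswana's current population (one row of X)
botswana_row = [0.312, 0.673, 0.005, 0.005, 0.005]
assert abs(sum(botswana_row) - 1.0) < 1e-9  # rows of X sum to 1

# the matrix-adjusted value is the weighted average (dot product)
adjusted_botswana = sum(w * v for w, v in zip(botswana_row, statehist_unadj))
```

Applying the same dot product to every row of X yields the full vector of adjusted values; countries with no in-migration have a single unit entry in their row, so their adjusted and unadjusted values coincide, which is the 45° line in Figures III and IV.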


TABLE II
HISTORICAL DETERMINANTS OF CURRENT INCOME

Dependent var.: ln(GDP per capita 2000)

                                  (1)        (2)        (3)        (4)        (5)        (6)
statehist                       0.892***              −1.43***
                               (0.330)               (0.32)
Ancestry-adjusted statehist                2.01***     3.37***
                                          (0.38)      (0.41)
agyears                                                           0.134***             −0.198***
                                                                 (0.035)              (0.044)
Ancestry-adjusted agyears                                                   0.269***    0.461***
                                                                           (0.040)     (0.054)
Constant                        8.17***    7.61***     7.51***    7.87***    7.05***    6.96***
                               (0.14)     (0.17)      (0.16)     (0.21)     (0.23)     (0.22)
No. obs.                        136        136         136        147        147        147
R²                              .060       .219        .271       .080       .240       .293

Note. Robust standard errors in parentheses. ***p < .01, **p < .05, *p < .1.

regression includes the unadjusted form of one early development measure, the adjusted form, or both. Not surprisingly, given previous work, the tests suggest significant predictive power for the unadjusted variables. However, for both measures of early development, adjusting for migration produces a very large increase in explanatory power. In the case of statehist, R2 goes from .06 to .22, whereas in the case of agyears it goes from .08 to .24. The coefficients on the measures of early development are also much larger using the adjusted than the unadjusted values. In the third and sixth columns of the table we run “horse race” regressions including both the adjusted and unadjusted measures of early development. We find that the coefficients on the adjusted measures retain their significance and become larger, whereas the coefficients on the unadjusted measures become negative and significant. Before proceeding further, we test the robustness of our finding to different indicators of population flows, the addition of controls for geography, and alternative measures of early development. In Table III, we start by constructing measures of statehist and agyears that are adjusted in the spirit of Hibbs and Olsson (2004) and Olsson and Hibbs (2005) by simply assigning to four “neo-European” countries (the United States, Canada, New Zealand, and Australia) the statehist and agyears values of the

TABLE III
ROBUSTNESS TO ALTERNATIVE MEASURES OF MIGRATION, DESCENT, AND LANGUAGE

Dependent var.: ln(GDP per capita 2000)

                                    (1)       (2)        (3)        (4)        (5)        (6)
Ancestry-adjusted statehist                   2.76***                          2.09***
                                             (0.46)                           (0.38)
Ancestry-adjusted agyears                                           0.400***              0.270***
                                                                  (0.050)                (0.041)
"Neo-Europes" adjusted statehist   1.27***   −0.741**
                                  (0.32)     (0.355)
"Neo-Europes" adjusted agyears                           0.173*** −0.133***
                                                        (0.034)   (0.040)
Native                                                                        −0.867*** −0.744***
                                                                             (0.265)    (0.270)
Retained                                                                      −0.800**  −0.583
                                                                             (0.361)    (0.358)
Constant                           8.02***    7.55***    7.66***   7.00***    8.87***    8.07***
                                  (0.14)     (0.17)     (0.19)    (0.22)     (0.44)     (0.43)
No. obs.                           136        136        147       147        129        139
R²                                 .122       .230       .127      .259       .286       .281

                                    (7)       (8)        (9)        (10)       (11)       (12)
Ancestry-adjusted statehist                   1.48***                          2.11***
                                             (0.32)                           (0.38)
Ancestry-adjusted agyears                                0.152***                         0.256***
                                                        (0.035)                          (0.043)
Fraction European descent          1.82***    1.63***    1.58***
                                  (0.16)     (0.16)     (0.17)
Fraction European languages                                         1.31***    1.04***    1.06***
                                                                  (0.21)     (0.18)     (0.19)
Constant                           7.83***    7.27***    7.11***    8.10***    7.26***    6.86***
                                  (0.10)     (0.13)     (0.19)    (0.14)     (0.17)     (0.23)
No. obs.                           138        138        138        113        113        113
R²                                 .458       .572       .526       .195       .418       .393

Note. Robust standard errors in parentheses. ***p < .01, **p < .05, *p < .1.


United Kingdom.18 As the table shows, these adjusted versions perform better than the unadjusted ones, but not nearly as well as the versions we construct using the migration matrix. When we run "horse race" regressions including statehist and agyears adjusted using both our matrix and the "neo-Europes" method (columns (2) and (4)), the coefficients on the matrix-adjusted measures rise in size and significance, whereas the coefficients on the "neo-Europes" adjusted measures become negative and significant.

We then construct a series of other measures from our matrix. The first is the fraction of the population made up of "natives" (that is, people whose ancestors lived there in 1500). We include this alongside our measures of adjusted statehist and agyears in order to check that we are not just picking up a correlation between the share of a population's ancestors who lived elsewhere and the types of countries they lived in. In a similar spirit, we construct a measure of the fraction of the descendants of each country's people in 1500 who live in that country today, which we call "retained population." For example, only 40.2% of those descended from the 1500 population of what is now the United Kingdom live there today, whereas 97.4% of those of Indian descent still live in India.19 Neither of these measures eliminates the statistical significance of our adjusted history measures. native is negative and significant, showing that immigrant-populated countries are better off on average. Retained population enters our regressions with a negative sign and is marginally significant, suggesting either that the venting of surplus population may have aided growth or that characteristics that allowed countries to implant their populations abroad also led them to be richer today.
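The regressions in Tables II and III are ordinary least squares with heteroskedasticity-robust standard errors. A self-contained sketch of one such univariate regression, on synthetic data rather than the paper's dataset, with the HC1 robust standard error computed by hand:

```python
# Sketch of a Table II-style regression: OLS of log income on an
# early-development measure, with an HC1 (heteroskedasticity-robust)
# standard error for the slope. All data here are synthetic.
import math
import random

random.seed(0)
n = 136
x = [random.random() for _ in range(n)]                   # stand-in for statehist
y = [8.0 + 2.0 * xi + random.gauss(0, 0.5) for xi in x]   # stand-in for ln GDP pc

# OLS slope and intercept
mx, my = sum(x) / n, sum(y) / n
sxx = sum((xi - mx) ** 2 for xi in x)
beta = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
alpha = my - beta * mx

# HC1 robust variance of the slope: weight squared residuals by squared
# demeaned regressor values, with the n/(n - k) small-sample correction
resid = [yi - alpha - beta * xi for xi, yi in zip(x, y)]
var_hc1 = (n / (n - 2)) * sum(((xi - mx) ** 2) * (ri ** 2)
                              for xi, ri in zip(x, resid)) / sxx ** 2
se_beta = math.sqrt(var_hc1)
```

With the true slope set to 2.0 and modest noise, the estimate lands near 2 with a robust standard error well below the coefficient, mimicking the strongly significant coefficients in the tables.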
Our third set of robustness checks examines whether our adjusted measures of statehist and agyears are simply proxying for a large European population or for speaking a European language. In columns (7)–(9) we include the fraction of the population

18. Hibbs and Olsson actually assign these countries the values for the region treated as inheriting the Mesopotamian agrarian tradition, which includes all of North Africa, the Middle East, and Europe.
19. Note that the migration matrix is a rather blunt tool to use for this sort of exercise, because (even with the added population data) it does not tell us how many people left the country in question, but only how many descendants they have today and where those descendants live. A small number of émigrés may have produced a large number of descendants (for example, the French Canadians), or a large number of émigrés may have produced relatively few (for example, African slaves shipped to the Caribbean).


descended from 1500 inhabitants of European countries, a variable that we create using the matrix. Not surprisingly, given that most of the world’s highest-income countries are either in Europe or mainly populated by persons of European descent, the European descent variable comes in very significantly. By itself, it explains 46% of the variance in the log of GDP per capita. However, even controlling for this variable, our adjusted measures of state history and agriculture are quite significant. It is also worth pointing out that in controlling for European descent rather than, say, Chinese or Indian descent, we are implicitly taking advantage of ex post knowledge about which of the regions that were well developed in 1500 would have the wealthiest descendants today. In columns (10)–(12), we include the fraction of the population speaking one of five European languages (English, French, German, Spanish, or Italian), which is used by Hall and Jones (1999) as an instrument for “social infrastructure.” This variable explains only 20% of the variation in log of income per capita by itself and has a negligible effect on the magnitude and significance of our measures of early development. In Table IV, we consider the effect of a series of measures of geography on the statistical significance of our adjusted statehist and agyears variables, in order to make sure that our measures of early development are not somehow proxying for physical characteristics of the countries to which people moved. Specifically, we control for a country’s absolute latitude, a dummy for being landlocked, a dummy for being in Eurasia (defined as Europe, Asia, and North Africa), and a measure of the suitability of a country for agriculture. This last variable, constructed by Hibbs and Olsson (2004), takes discrete values between 0 (tropical dry) and 3 (Mediterranean). Taken one at a time, each of these controls has a significant effect on log income, with the predictable sign. 
However, none of them individually, or even all four taken together, eliminates the statistical significance of matrix-adjusted statehist or agyears. Our final check for robustness is to see whether our matrixadjustment procedure works similarly well on measures or predictors of early development other than statehist and agyears. We consider four other indicators of early development. The first two come from Olsson and Hibbs (2005) and are meant to capture the conditions that favored the early transition of a region to agriculture, as proposed by Diamond (1997). geo conditions is the first principal component of climate (as measured above), latitude, the


TABLE IV
HISTORICAL AND GEOGRAPHICAL DETERMINANTS OF CURRENT INCOME

Dependent var.: ln(GDP per capita 2000)

Panel A
                              (1)        (2)          (3)        (4)        (5)        (6)
Ancestry-adjusted statehist   2.38***    1.32***      2.21***    1.75***    1.31***    1.24***
                             (0.40)     (0.43)       (0.41)     (0.55)     (0.42)     (0.42)
Absolute latitude                        0.0386***                                     0.0337***
                                        (0.0062)                                      (0.0084)
Landlocked                                           −0.628**                         −0.558***
                                                    (0.272)                          (0.172)
Eurasia                                                          0.594**              −0.327
                                                                (0.286)              (0.247)
Climate                                                                     0.609***   0.235*
                                                                           (0.096)    (0.121)
Constant                      7.44***    6.94***      7.65***    7.44***    6.92***    6.99***
                             (0.17)     (0.15)       (0.21)     (0.16)     (0.17)     (0.20)
No. obs.                      111        111          111        111        111        111
R²                            .294       .527         .339       .334       .494       .593

Panel B
                              (1)        (2)          (3)        (4)        (5)        (6)
Ancestry-adjusted agyears     0.313***   0.172***     0.289***   0.219***   0.178***   0.153***
                             (0.048)    (0.053)      (0.051)    (0.062)    (0.060)    (0.054)
Absolute latitude                        0.0393***                                     0.0404***
                                        (0.0058)                                      (0.0087)
Landlocked                                           −0.500**                         −0.577***
                                                    (0.236)                          (0.160)
Eurasia                                                          0.631**              −0.172
                                                                (0.250)              (0.237)
Climate                                                                     0.516***   0.053
                                                                           (0.101)    (0.133)
Constant                      6.85***    6.61***      7.07***    7.04***    6.74***    6.80***
                             (0.25)     (0.21)       (0.28)     (0.26)     (0.25)     (0.25)
No. obs.                      116        116          116        116        116        116
R²                            .293       .523         .320       .334       .426       .563

Note. Robust standard errors in parentheses. ***p < .01, **p < .05, *p < .1.

size of the landmass on which a country is located, and a measure of a landmass’s East–West orientation. bio conditions is the first principal component of the number of heavy-seeded wild grasses and the number of large domesticable animals known to have


existed in a macro region in prehistory. The other two measures come from Comin, Easterly, and Gong (2010) and measure the degree of technological sophistication in the years 1 and 1500 CE in the regions that correspond to modern countries. In Table V we show univariate regressions in which the dependent variable is the log of GDP per capita in 2000 and each measure of early development appears either in its original form or adjusted using the migration matrix. The most notable finding of the table is that, as expected, adjusting for migration substantially improves the predictive power of every alternative measure of early development that we consider. In the cases of the two Hibbs–Olsson measures as well as the technology index for 1 CE, the R² of the regression rises by roughly fifteen percentage points. In the case of the technology index for 1500, the R² rises by 34 percentage points.20

A second finding of Table V is that the migration-adjusted versions of three of the variables we look at—bio conditions, geo conditions, and the technology index for 1500—do a better job of predicting income today than the matrix-adjusted versions of statehist and agyears. (This finding is confirmed in Appendix II, which presents a complete set of horse race regressions using all combinations of two of the six ancestry-adjusted measures of early development.) In the case of technology in 1500, this is not particularly surprising. statehist and agyears are meant to measure political and economic development in the millennia before the great shuffling of population that is captured in the migration matrix (for example, the average value of agyears is 4.7 millennia). The technology measure, by contrast, captures development immediately prior to that shuffling, and so focuses on information that is more likely to be predictive of current outcomes.
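The geo conditions and bio conditions variables are first principal components of small sets of indicators. A minimal sketch of extracting a first principal component by power iteration on the correlation matrix, using illustrative data only:

```python
# Sketch of first-principal-component extraction: standardize the
# columns, form their correlation matrix, and power-iterate to find the
# leading eigenvector (the component loadings). Data are illustrative.
import math

def first_pc(data, iters=200):
    """data: list of rows. Returns (loadings, scores) for the first
    principal component of the standardized columns."""
    n, k = len(data), len(data[0])
    cols = list(zip(*data))
    means = [sum(c) / n for c in cols]
    sds = [math.sqrt(sum((v - m) ** 2 for v in c) / n)
           for c, m in zip(cols, means)]
    z = [[(row[j] - means[j]) / sds[j] for j in range(k)] for row in data]
    # correlation matrix of the standardized columns
    corr = [[sum(z[i][a] * z[i][b] for i in range(n)) / n
             for b in range(k)] for a in range(k)]
    # power iteration for the leading eigenvector
    v = [1.0] * k
    for _ in range(iters):
        w = [sum(corr[a][b] * v[b] for b in range(k)) for a in range(k)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    scores = [sum(zi[j] * v[j] for j in range(k)) for zi in z]
    return v, scores

# two perfectly correlated indicators: the first PC carries all the
# variance, with equal loadings on both columns
loadings, scores = first_pc([[1, 2], [2, 4], [3, 6], [4, 8]])
```

In the perfectly correlated example the loadings are equal (1/√2 each) and the scores are just rescaled copies of either standardized column, which is the degenerate limit of what a composite index like geo conditions does with strongly correlated inputs.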
By contrast, the fact that the matrix-adjusted versions of geo conditions and bio conditions outperform the similarly adjusted versions of agyears in predicting income today is more mysterious. The Hibbs and Olsson variables are designed to be a measure of the suitability of local conditions to the emergence of agriculture. Hibbs and Olsson think that these variables should predict the timing of the Neolithic revolution, and through that channel predict income today. One would thus expect that a measure of when agriculture 20. Comin, Easterly, and Gong (2010) perform a similar exercise using Version 1.0 of our matrix.

TABLE V
ALTERNATIVE MEASURES/PREDICTORS OF EARLY HISTORICAL DEVELOPMENT

Dependent var.: ln(GDP per capita 2000)

                                             (1)       (2)       (3)       (4)       (5)       (6)       (7)       (8)
geo conditions                             0.752***
                                          (0.075)
Ancestry-adjusted geo conditions                     0.952***
                                                    (0.069)
bio conditions                                               0.746***
                                                            (0.081)
Ancestry-adjusted bio conditions                                       0.947***
                                                                      (0.074)
Technology index 1 CE                                                            0.0924
                                                                                (0.3758)
Ancestry-adjusted technology index 1 CE                                                    2.51***
                                                                                          (0.59)
Technology index 1500 CE                                                                             1.55***
                                                                                                    (0.30)
Ancestry-adjusted technology index 1500 CE                                                                     3.26***
                                                                                                              (0.30)
Constant                                   8.42***   8.19***   8.43***   8.21***   8.42***   6.41***   7.77***   6.54***
                                          (0.09)    (0.08)    (0.09)    (0.07)    (0.28)    (0.46)    (0.20)    (0.21)
No. obs.                                   105       105       105       105       125       125       114       114
R²                                         .415      .574      .417      .581      .000      .133      .183      .525

Note. Robust standard errors in parentheses. ***p < .01, **p < .05, *p < .1.


actually did emerge, agyears, would have superior predictive power.21 Overall, the results in Tables II–V show that adjusting for migration improves the predictive power of measures of early development, and that once migration is taken into account, the ability of these historical measures to predict income today is surprisingly high. This finding is consistent with the hypothesis that Europeans especially, and to some extent East and South Asians, carried something with them—human capital, culture, institutions, or something else—that raised the level of income in the Americas, Australia, Malaysia, and elsewhere. The findings are also consistent with the possibility that a corresponding disadvantage of Africans and Amerindians with respect to one or more of these characteristics has played out in their countries of origin and, for Africans, in those to which they were transported as slaves. Any such preexisting disadvantages were almost certainly made worse by the nature of European contact. In the case of Africans, for example, the obvious candidate is the impact of slavery, both on the descendants of slaves themselves and on the culture and institutions of the regions from which slaves were taken (Nunn 2008). By contrast, the findings of Tables II–V cast doubt on the idea that the same favorable climatic conditions that led some regions to develop early are also responsible for those regions enjoying an economic advantage today. This can be seen both in

21. Note that for most countries, bio conditions refers not to conditions in the country itself but to those in the region from which it is judged to have inherited its package of agricultural technology (there are only eight of these "macro regions"). The superior predictive power of bio conditions results, most importantly, from the grouping of countries stretching from Ireland to Pakistan into a single macro region that is assigned the sample's highest bio conditions, those of the Fertile Crescent.
By contrast, our variable agyears has a value of 5.5 millennia for the United Kingdom and 10 millennia on average for the Fertile Crescent countries. By assigning to Europe, the region that was the source of most rich-country populations today, the high value of the Fertile Crescent, the Hibbs–Olsson measure mechanically makes bio conditions an excellent predictor of income today when adjusted by the migration matrix. Another way to see this problem is to note that although China had a lower value of bio conditions than does a Europe treated as part of a "greater Fertile Crescent" macro region (0.153 vs. 1.46, on a scale with mean zero and standard deviation one), China developed agriculture some three thousand years earlier than Europe. Thus the prediction of the Hibbs–Olsson story—that biogeographic conditions should predict the timing of the Neolithic revolution, which in turn predicts income today—is contradicted by more location-specific timing of the Neolithic revolution. Similarly, the mapping from geo conditions to the development of agriculture does not work nearly as well as the mapping from geo conditions (in its matrix-adjusted form) to current income. For example, within Europe, the region with the highest values of geo conditions, the correlation between geo conditions and agyears is slightly negative, driven by the strongly negative correlation between latitude and agyears.


the superiority of migration-adjusted measures of early development to the unadjusted versions of these measures and in the robustness of migration-adjusted early development measures to inclusion of measures of countries’ geographic characteristics. As implied above, our preferred interpretation of these results is that they reflect a causal link from migration of people from countries with higher levels of early development to subsequent economic growth. It is nonetheless worth considering whether the results might instead simply reflect, or at least be biased by, the endogeneity of migration. Suppose that people from countries with earlier development ended up migrating to places that were better in some respect (climate, institutions, etc.) and that it was this aspect of quality rather than the presence of migrants from areas of early development that ended up making these places wealthy. Some reassurance that this is not all that is going on is provided by Tables III and IV. Controlling for aspects of the quality of physical environment in destination countries, such as climate and latitude, does not make the effect of early development go away. Similarly, if relative emptiness of some countries both attracted a lot of migrants from early developing areas and also made those countries wealthy, then this effect would be picked up by the variable native in Table III. Further, Engerman and Sokoloff (2002) show that European migrants, when they were able to choose where in the New World to migrate, were not attracted to those regions that in the long run would achieve the highest levels of economic success. In future work we hope to further address the issue of causality by looking more closely at the timing of migration and changes in institutions and income. 
For now, however, we continue to examine the link between the early development of a population and the subsequent income of their descendants, wherever they may live, with the strong suspicion that this link is causal.

III.C. Mechanism

Under the assumption that early development of a country's population is causally linked to current income, one would want to know the specific channel through which this effect flows. For the most part, we consider this an issue for future research. However, we cannot resist taking an initial look at two possibilities. Recent literature has stressed the roles of institutions and culture as fundamental determinants of national income. One could well imagine that whatever it was that immigrants with long histories of state development took with them that led to higher income


manifests itself in better institutions or in a culture more favorable to economic success. In the first part of Table VI, we look at the relationship between our statehist measure and three indicators of institutional quality: executive constraints, expropriation risk, and government effectiveness (all from Glaeser et al. [2004]). In each case, the dependent variable is normalized to have a standard deviation of one. Not surprisingly, using the matrix to adjust statehist to reflect the experience of a country's population greatly improves the ability of this variable to predict the quality of institutions. Once matrix-adjusted, statehist is also statistically significant in all three cases. Similarly, the estimated coefficient on statehist rises in each case moving from the unadjusted to the adjusted measure. The coefficient in column (6), for example, implies that moving from a statehist of zero (for example, Rwanda) to a statehist of one (Ethiopia) raises government effectiveness by 1.32 standard deviations—roughly the difference between Bhutan or Bahrain, on the one hand, and the United States, on the other.

The rest of Table VI considers some indicators of culture taken from the World Values Survey (WVS). Tabellini (2005) focuses on three measures—trust, control, and respect—that he finds positively correlated, and one measure—obedience—that he finds negatively correlated, with per capita income in 69 regions of Belgium, France, Germany, Italy, the Netherlands, Portugal, Spain, and the United Kingdom.22 Trust is the generalized trust measure used in Knack and Keefer (1997) and other studies; control is the response to a question about the degree to which respondents feel that what they do affects what happens to them; and respect and obedience indicate respondents' answers regarding how important it is to teach children "tolerance and respect for others" and "obedience," respectively.
We also examine the variable thrift, a cultural factor that Guiso, Sapienza, and Zingales (2006) find to significantly predict savings, and that is derived from the same WVS question, where one of the qualities the respondent may list as important to encourage children to learn at home is “thrift, saving money and things.” In columns (7)–(16) we show regressions of the same form as for the previous variables, where each of the five measures is the dependent variable in one pair of regressions. As above, the 22. Tabellini also reports a similar correlation for the first principal component of the four measures for a cross section of 46 countries.

TABLE VI
INFLUENCE OF EARLY DEVELOPMENT ON CURRENT INSTITUTIONS AND CULTURE

                              Executive constraints   Expropriation risk    Government effectiveness   Trust
                                (1)        (2)          (3)        (4)        (5)        (6)            (7)        (8)
statehist                      0.158                   0.658**                0.445                    0.872**
                              (0.274)                 (0.287)                (0.271)                  (0.350)
Ancestry-adjusted statehist               0.670**                 1.33***                1.32***                  1.40***
                                         (0.309)                 (0.33)                 (0.30)                   (0.39)
Constant                       1.95***    1.71***      3.89***    3.51***    −0.180     −0.604***     −0.349*    −0.702***
                              (0.13)     (0.15)       (0.13)     (0.15)     (0.114)    (0.118)       (0.186)    (0.222)
No. obs.                       141        141          111        111        144        144           81         81
R²                             .002       .033         .047       .134       .019       .123          .069       .114

                              Control                  Respect               Obedience                Thrift
                                (9)        (10)         (11)       (12)       (13)       (14)           (15)       (16)
statehist                     −1.31***                −0.931**               −1.12***                  0.911**
                              (0.32)                  (0.432)                (0.35)                   (0.382)
Ancestry-adjusted statehist              −0.063                  −0.403                 −1.64***                  0.822*
                                         (0.434)                 (0.561)                (0.444)                  (0.478)
Constant                       0.522***   0.031        0.373**    0.203       0.449**    0.824***     −0.365**   −0.413*
                              (0.167)    (0.245)      (0.158)    (0.254)     (0.180)    (0.258)      (0.159)    (0.241)
No. obs.                       80         80           81         81          81         81           81         81
R²                             .156       .000         .078       .010        .113       .157         .075       .040

Note. All dependent variables are normalized to have a standard deviation of one. Robust standard errors in parentheses. ***p < .01, **p < .05, *p < .1.

1654 QUARTERLY JOURNAL OF ECONOMICS

POST-1500 POPULATION FLOWS AND LONG-RUN GROWTH

1655

dependent variables are normalized to have standard deviations of one. In the regressions using statehist, that variable is significantly positively associated with trust and thrift, and significantly negatively associated with control, respect, and obedience. Replacing statehist with its matrix-adjusted version, we find that the correlations with trust and obedience become stronger, whereas those with respect, control, and thrift become nonsignificant (although in the case of thrift the change from unadjusted to adjusted is small). The correlations that remain significant are consistent with the possibility that one of the ways in which early development raises income is by promoting cultural traits that favor good economic outcomes. Overall, the results for measures of institutions are somewhat more internally consistent than those for the cultural variables, although for the cultural variables that work best in terms of being linked to adjusted state history, trust and obedience, the coefficient size and fit of the regression are quite comparable to those of the best-working measures of institutions, expropriation risk and government effectiveness. Of course, this exercise does not say anything conclusive about the causal path from early development to high income, good institutions, and growth-promoting culture. Early development could cause good institutions and/or good culture, which in turn cause high income; or early development could cause high income through some other channel and affect institutions and culture only through income.

III.D. Source Region and Current Region Regressions

Although our interest in most of this paper is in how the migration matrix can be used to map data on place-specific early development into a measure of early development appropriate to a country’s current population, the matrix can also be used to infer characteristics of the source countries based only on current data.
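This back-out can be illustrated with a small ordinary least squares sketch: regress log income on the ancestry shares originating in each source region, omitting one reference region. All data below are synthetic and the three regions are hypothetical; the paper’s actual regression uses the 11 regions and the migration matrix described in the text.

```python
import numpy as np

# Synthetic sketch of "source region coefficients": regress log GDP per capita
# on the shares of each country's population originating in each source region.
# Three hypothetical regions; region 3 is the omitted (reference) category.
rng = np.random.default_rng(0)

n = 150                                          # hypothetical countries
shares = rng.dirichlet([1.0, 1.0, 1.0], size=n)  # ancestry shares, rows sum to 1
true_coefs = np.array([2.0, 1.0, 0.0])           # assumed "true" region effects

log_gdp = 7.0 + shares @ true_coefs + 0.1 * rng.standard_normal(n)

# Drop the reference region's share; its effect is absorbed into the constant.
X = np.column_stack([np.ones(n), shares[:, :2]])
beta, *_ = np.linalg.lstsq(X, log_gdp, rcond=None)
print(beta)  # approximately [7.0, 2.0, 1.0]
```

Because the omitted region’s share is absorbed into the constant, the estimated coefficients are read relative to the reference region, just as the source region coefficients in Table VII are read relative to sub-Saharan Africa.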
More specifically, if we assume that emigrants from a particular region share some characteristics that affect the income of countries to which they have migrated, then we can back out these characteristics by looking at data on current outcomes and migration patterns. To pursue this idea we regress log GDP per capita in 2000 on the fraction of the current population that comes from each of the 11 regions defined previously for the exercises of Table I. We call the coefficients from this regression, shown in column (1) of Table VII, “source region coefficients.” Loosely speaking,


TABLE VII
SOURCE REGIONS AND CURRENT REGIONS AS DETERMINANTS OF CURRENT INCOME
Dependent variable: ln(GDP per capita 2000)

                                          (1)                (2)                       (3)
                                        Source             Current           Source          Current
                                        regions            regions           regions         regions
U.S. and Canada                     33.7*** (5.6)      3.03*** (0.16)   −2,273*** (384)   74.8*** (12.4)
Mexico and Central America          0.380 (0.495)      1.10*** (0.24)    1.90 (1.25)     −0.870 (0.710)
The Caribbean                       3.67*** (0.81)     1.33*** (0.30)    0.834 (1.884)    0.268 (0.221)
South America                       0.498** (0.229)    1.35*** (0.20)    1.11** (0.51)   −0.415 (0.419)
Europe                              2.35*** (0.16)     2.23*** (0.18)    2.66*** (0.47)  −0.265 (0.476)
North Africa/West and Central Asia  1.29*** (0.21)     1.28*** (0.21)    0.654 (1.349)    0.613 (1.248)
South Asia                          0.872*** (0.265)   0.388** (0.175)   3.05*** (0.39)  −2.53*** (0.39)
East Asia                           2.15*** (0.54)     1.81*** (0.56)    4.77*** (0.57)  −2.81*** (0.87)
Southeast Asia                      0.805*** (0.242)   1.07*** (0.32)    1.59** (0.63)   −0.913* (0.500)
Australia and New Zealand           8.09*** (2.10)     2.72*** (0.17)   −1.59* (0.87)     0.436 (0.444)
Constant                            7.27*** (0.11)     7.34*** (0.13)           7.22*** (0.12)
No. obs.                            152                152                      152
R2                                  .631               .584                     .681

Note. In regression (1), the independent variables are the shares of the population in each country originating in each region. In regression (2), the independent variables are dummies for a country being located in a particular region. In regression (3), the independent variables are both of the above. Robust standard errors in parentheses. ***p < .01, **p < .05, *p < .1.

they measure how having a country’s population composed of people from a particular region can be expected to affect GDP per capita. For example, the source region coefficient for Europe is 2.35, whereas that for sub-Saharan Africa is zero, because this is the omitted category. Thus these coefficients say that moving 10% of a country’s population from European to African origin would be expected to lower ln(GDP) by .235 points.23

The second column of Table VII shows a more conventional regression of the log of GDP per capita in the year 2000 on dummies for the region in which the country is located (as in the first column, sub-Saharan Africa is the omitted region). We call these “current region coefficients.” The R2 of the regression with current region dummies is about .05 lower than the R2 of the regression with source region shares. It is also interesting to compare the coefficients on the source and current regions. There is a strong tendency for regions that are rich to also have large values for their source region coefficients. For example, among the six source regions that account for 97% of the world’s population (in size order: East Asia, South Asia, Europe, sub-Saharan Africa, Southeast Asia, and North Africa/West and Central Asia), the magnitudes of the coefficients are very similar, with the single exception of South Asia. This similarity of coefficients in the two regressions is not much of a surprise, given the fact, discussed above, that most countries are populated primarily by people whose ancestors lived in that same country 500 years ago.

In column (3) of Table VII, we regress log income in 2000 on both the source region and current region measures. The R2 is somewhat higher than in the first two columns, indicating that source regions are not simply proxying for current regions, or vice versa. F-tests easily reject (p-values of .000) the null hypotheses that either the coefficients on source region or the coefficients on current region are zero. Interestingly, the source region coefficients on Europe and East Asia remain positive, whereas the current region coefficients become negative, suggesting that having population from these regions, rather than being located in them, is what tends to make countries rich.

23. There are three surprisingly high coefficients in this column: the United States and Canada, the Caribbean, and Australia and New Zealand. In all three cases the explanation is that the source populations in question contributed a small share of the population to only a few current countries. For example, descendants of people living in the United States and Canada as of 1500 contribute only 3.1% and 3.3% of the populations of those two countries and are found nowhere else in the world. Thus, because the United States and Canada are wealthy, this source population gets assigned a high coefficient in the regression. For this reason, we focus our attention on source region coefficients for populations that account for larger population shares in more countries.

III.E. Population Heterogeneity and Income Levels

The exercises in Section III.B show that a higher average level of early development in a country is robustly correlated with higher current income. The most likely explanation for this finding is that people whose ancestors were living in countries that


developed earlier (in the sense of implementing agriculture or creating organized states) brought with them some advantage— such as human capital, knowledge, culture, or institutions—that raises the level of income today. Depending on what exact advantage is conferred by earlier development, there might also be implications for how the variance of early development among a country’s contributing populations would affect output. For example, if early development conferred some cultural attribute that was good for growth, then in a population containing some people with a long history of development and some with a short history, this growth-promoting cultural trait might simply be transferred from the long-history group to the short-history group. Similarly, growth-promoting institutions brought along by people with a long history of development could be extended to benefit people with short histories of development. An obvious model for such transfer is language: in many parts of the world, descendants of people with shorter histories of states and agriculture speak languages that come from Europe, which has a longer history in those respects. If growth-promoting characteristics transfer in this fashion, then a country with half its population coming from areas with high statehist and half from areas with low statehist might be richer than a country with the same average statehist but no heterogeneity. The above logic would tend to predict that, holding average history of early development constant, a higher variance of early development would raise a country’s level of income. However, there are channels that work in the opposite direction. As will be shown below, higher variance of early development predicts higher inequality. 
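As a concrete sketch of the dispersion measures used in what follows, the ancestry-weighted mean and within-country standard deviation of statehist can be computed from a country’s row of the migration matrix; the weights and source values below are hypothetical:

```python
import math

# Ancestry-weighted mean and within-country standard deviation of statehist.
# w[i] is the (hypothetical) fraction of a country's ancestors drawn from
# source country i; s[i] is that source country's statehist value.
def weighted_mean_sd(w, s):
    assert abs(sum(w) - 1.0) < 1e-9, "ancestry weights must sum to one"
    mean = sum(wi * si for wi, si in zip(w, s))
    var = sum(wi * (si - mean) ** 2 for wi, si in zip(w, s))
    return mean, math.sqrt(var)

# Hypothetical country: half its ancestors from a statehist = 0.4 source,
# half from a statehist = 0.6 source.
mean, sd = weighted_mean_sd([0.5, 0.5], [0.4, 0.6])
print(mean, round(sd, 10))  # 0.5 0.1
```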
Inequality is often found to impact growth negatively (see, for example, Easterly [2007]), and one could easily imagine that the inequality generated by heterogeneity in early development history would lead to inefficient struggles over income redistribution or the creation of growth-impeding institutions. This is certainly the flavor of the story told by Sokoloff and Engerman (2000). Similarly, the ethnic diversity that comes along with a population that is heterogeneous in its early development history could hinder the creation of growth-promoting institutions. To assess the effect of heterogeneity in early development, we create measures of the weighted within-country standard deviations of statehist, agyears, and source region coefficients, where the weights are the fractions of that source country’s descendants in the current population. The mean within-country standard deviation of statehist is 0.097, and the standard deviation across countries is 0.089. For agyears the mean standard deviation is 0.764, and the standard deviation across countries is 0.718. For the source region coefficients, the values are 0.347 and 0.686, respectively. In all cases, the distribution of the heterogeneity measures is skewed to the right, with a significant number of countries (those that experienced no immigration) having values of zero.

In Table VIII we present regressions of the log of current income per capita on the standard deviation of each of our three measures of early development (statehist, agyears, and source region coefficients), with and without controls for the mean of each of the variables. Once the mean level of statehist is controlled for, the standard deviation of statehist has a positive and significant effect on current income. The same is true for agyears. Interestingly, the coefficient on the standard deviation of the source region coefficients is not significant at all once the mean of the source region coefficients is included. Including these measures of standard deviation has little effect on the size or significance of the coefficients on the means of statehist or agyears, as seen in Table II.24

24. We also considered the possibility that the effect of heterogeneity in early development on current income is nonlinear. Ashraf and Galor (2008) argue that this is the case for genetic diversity: people with different genetic backgrounds are complements in production of knowledge, but genetic diversity also reduces social cohesion and hinders the transmission of human capital within and across generations. As a result, there should be a hump-shaped relationship between genetic diversity and income. Ashraf and Galor find evidence for this in cross-country data. Proxying for genetic diversity with migratory distance from East Africa, they find that the optimal level of genetic diversity occurs in East Asia. About three-fourths of the countries in the world have genetic diversity that is higher than optimal. We tested for a similar effect by including the square of the standard deviation of the relevant early development measure in columns (2), (4), and (6) of Table VIII. In the cases of statehist and agyears, this new term entered insignificantly. In the case of the source region coefficients, the coefficient on the square of the standard deviation was negative and significant, implying a hump-shaped relationship. However, the peak of the hump was when the standard deviation of the source region coefficients was equal to 2.95. Only two countries, the United States and Canada, had values that exceeded this level.

In columns (3), (6), and (9) of the table, we experiment with including the share of the population that is of European descent, to make sure that our heterogeneity measure is not simply reflecting the presence of Europeans in some countries and not others. As can be seen, the coefficients on the standard deviations of our early development measures are not greatly affected. The positive coefficients on the standard deviations of statehist and agyears imply, as discussed above, that a heterogeneous population will be better off than a homogeneous population with

TABLE VIII
THE EFFECT OF HETEROGENEITY IN EARLY DEVELOPMENT ON CURRENT INCOME
Dependent variable: ln(GDP per capita 2000)

                                       (1)             (2)              (3)
Standard deviation of statehist    1.40 (0.91)     2.02** (0.78)    1.70*** (0.65)
Ancestry-adjusted statehist                        2.08*** (0.37)   1.55*** (0.32)
Fraction European descent                                           1.60*** (0.16)
Constant                           8.33*** (0.15)  7.38*** (0.18)   7.08*** (0.14)
No. obs.                           136             136              136
R2                                 .013            .245             .586

                                       (4)              (5)              (6)
Standard deviation of agyears      0.377*** (0.108) 0.312*** (0.094) 0.278*** (0.086)
Ancestry-adjusted agyears                           0.260*** (0.04)  0.178*** (0.03)
Fraction European descent                                            1.49*** (0.16)
Constant                           8.21*** (0.14)   6.86*** (0.23)   6.83*** (0.19)
No. obs.                           147              147              147
R2                                 .056             .278             .546

                                                      (7)             (8)              (9)
Standard deviation of source region coefficients  0.414*** (0.06)  0.0844 (0.0636)  0.0846 (0.0647)
Mean source region coefficient                                     0.982*** (0.067) 0.978*** (0.141)
Fraction European descent                                                           0.0116 (0.2967)
Constant                                          8.33*** (0.11)   7.26*** (0.09)   7.26*** (0.10)
No. obs.                                          152              152              152
R2                                                .065             .634             .634

Note. Robust standard errors in parentheses. ***p < .01, **p < .05, *p < .1.


the same average level of early development. For example, using the coefficients in column (2) of Table VIII, a country with a population composed of 50% people with a statehist of 0.4 and 50% with a statehist of 0.6 will be 20% richer than a homogeneous country with a statehist of 0.5. A country with 50% of the population having a statehist of 1.0 and 50% with a statehist of zero would be twice as rich as a homogeneous country with the same average statehist. (This latter example is quite outside the range of the data, however. The highest values of the standard deviation of statehist in our data set are for Fiji (0.346), Cape Verde (0.301), and Guyana (0.293); in the example, the standard deviation is 0.5.)

The coefficients also have the unpalatable property that a country’s predicted income can sometimes be raised by replacing high statehist people with low statehist people, because the decline in the average level of statehist will be more than balanced by the increase in the standard deviation. For example, the coefficients just discussed imply that, combining populations with statehist of 1 and 0, the optimal mix is 86% statehist = 1 and 14% statehist = 0. A country with such a mix would be 41% richer than a country with 100% of the population having a statehist of 1.25

We think that this somewhat counterintuitive finding may result from a particular set of historical contingencies that make simple policy inferences problematic. First, during the long era of European expansion spanning the fifteenth to early twentieth centuries, European-settled countries with substantial African and/or Amerindian minorities, such as the United States, Chile, Mexico, and Brazil, attained considerably higher incomes than many homogeneously populated Asian countries with relatively long state histories, including Bangladesh, Pakistan, India, Sri Lanka, Indonesia, and China. Second, the latter group of countries experienced little growth, or negative growth, during those same centuries.
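The optimal-mix arithmetic above can be checked numerically. The sketch below assumes coefficients of 2.08 on mean statehist and 2.02 on its standard deviation, values consistent with the 20%, 86%/14%, and 41% examples in the text; for a mix in which a share m of the population has statehist = 1 and the rest statehist = 0, the mean is m and the standard deviation is sqrt(m(1 − m)):

```python
import math

# Predicted log income (up to a constant) when a share m of the population has
# statehist = 1 and the rest statehist = 0: mean = m, SD = sqrt(m * (1 - m)).
COEF_MEAN, COEF_SD = 2.08, 2.02   # assumed values, consistent with the text's examples

def predicted_log_income(m):
    return COEF_MEAN * m + COEF_SD * math.sqrt(m * (1.0 - m))

# Grid search for the income-maximizing mix of high- and low-statehist people.
grid = [i / 10000 for i in range(1, 10000)]
m_opt = max(grid, key=predicted_log_income)
gain = predicted_log_income(m_opt) - predicted_log_income(1.0)
print(round(m_opt, 2), round(gain, 2))  # 0.86 0.41, matching the text
```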
Chanda and Putterman (2007) argue that the underperformance of the populous Asian countries during the period 1500–1960 is an exception to the rule (which they find to have held up to 1500 and again since 1960) that earlier development of agriculture and states has been associated with faster economic development during most of world history.

25. The specification that we use implies that this property must hold as long as the coefficients on both the mean and standard deviation are positive. However, when we use variance on the right-hand side, in which case the property does not automatically hold, it is nonetheless implied by the estimates.

Although our regression result reflects the fact that population heterogeneity


has not detracted from economic development in the first group of countries, it seems best not to infer from it that “catch up” by homogeneous Old World countries would be speeded up by infusions of low statehist populations into existing high statehist countries.

IV. POPULATION HETEROGENEITY AND INCOME INEQUALITY

The finding that current income is influenced by the early development of a country’s people, rather than of the place itself, provides evidence against some theories of why early development is important but leaves many others viable. Early development may matter for income today because of the persistence of institutions (among people, rather than places), because of cultural factors that migrants brought with them, because of long-term persistence in human capital, or because of genetic factors that are related to the amount of time since a population group began its transition to agriculture.

Many of the theories that explain the importance of early development in determining the level of income at the national level would also support the implication that heterogeneity in the early development of a country’s population should raise the level of income inequality. For example, if experience living in settled agricultural societies conveys to individuals some cultural characteristics that are economically advantageous in the context of an industrial society, and if these characteristics have a high degree of persistence over generations, then a society in which individuals come from heterogeneous backgrounds in terms of their families’ economic histories should ceteris paribus be more unequal.

We pursue three different approaches to examining the determinants of within-country income inequality. We begin by showing that heterogeneity in the historical level of development of countries’ residents predicts the level of income inequality in a cross-country regression.
Second, we construct measures of population heterogeneity based both on the current ethnic and linguistic groupings and on the ethnic and linguistic differences among the sources of a country’s current population. We show that allowing for these other measures of heterogeneity does not reduce the importance of heterogeneity in historical development as a predictor of current inequality. Finally, we pursue an implication of these findings by asking whether, within a country, people originating from countries that had characteristics predictive of low national income are in fact found to be lower in the income distribution.


IV.A. Historical Determinants of Current Inequality

In this exercise our dependent variable is the Gini coefficient, measured over 2000–2004 or in the most recent decade for which data are available, using data from the UN World Income Inequality Database as supplemented by Barro (2008). Our key right-hand-side variables are the weighted within-country standard deviations of statehist, agyears, and source region coefficients, as constructed in Section III.E. We experiment with including the levels of these matrix-adjusted early development measures as additional controls. The results are shown in Table IX. Our finding is that heterogeneity in the early development experience of the ancestors of a country’s population is significantly related to current inequality.

To give a feel for the size of the coefficients, we look at the case of agyears. The standard deviation of agyears in Brazil is 1.976 millennia. By contrast, in countries that have essentially no in-migration, such as Japan, the standard deviation is zero. Applying the regression coefficient of 0.0571 from the fourth column of Table IX, this would say that variation in early development in Brazil would be expected to raise the Gini there by 0.11, which is certainly an economically significant amount. Because Brazil’s Gini was 0.57 and Japan’s 0.25, the exercise suggests that about one-third of the difference in inequality between the two countries may be attributable to the difference in the heterogeneity of their populations’ early development experiences. The results in columns (5) and (6) for source region coefficients are similar in flavor but somewhat smaller in magnitude. Taking the case of Brazil again, the variance of the source region coefficient in that country is 0.888, reflecting a composition of 74.4% people from Europe (SRC of 2.53), 9.1% from South America (SRC of 0.498), and 15.7% from Africa (SRC of 0).
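The Brazil–Japan comparison can be reproduced directly from the numbers quoted above:

```python
# Back-of-the-envelope effect of early-development heterogeneity on inequality,
# using the figures quoted in the text for column (4) of Table IX.
coef_sd_agyears = 0.0571      # Gini points per unit standard deviation of agyears
sd_agyears_brazil = 1.976     # millennia; Japan's is essentially zero
gini_brazil, gini_japan = 0.57, 0.25

effect = coef_sd_agyears * sd_agyears_brazil
share_explained = effect / (gini_brazil - gini_japan)
print(round(effect, 2), round(share_explained, 2))  # 0.11 0.35, about one-third
```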
The coefficient in the sixth column of Table IX implies that the difference in the standard deviation of the source region coefficients between Brazil, on the one hand, and a country such as Japan, where the standard deviation of source region coefficients is zero, on the other, would be expected to raise the Gini coefficient by 0.043.

IV.B. Other Measures of Heterogeneity

Our main finding in the last section was that heterogeneity of a country’s population’s ancestors with respect to measures of early development contributes to current income inequality. We

TABLE IX
HISTORICAL DETERMINANTS OF CURRENT INEQUALITY
Dependent variable: Gini coefficient

                                                      (1)               (2)
Standard deviation of statehist                   0.456*** (0.088)  0.408*** (0.084)
Ancestry-adjusted statehist                                        −0.148*** (0.036)
Constant                                          0.375*** (0.014)  0.445*** (0.024)
No. obs.                                          135               135
R2                                                .140              .267

                                                      (3)                 (4)
Standard deviation of agyears                     0.0512*** (0.0121) 0.0571*** (0.0108)
Ancestry-adjusted agyears                                           −0.0217*** (0.0052)
Constant                                          0.381*** (0.014)   0.493*** (0.031)
No. obs.                                          140                140
R2                                                .108               .260

                                                      (5)               (6)
Standard deviation of source region coefficients  0.0207 (0.0166)   0.0453*** (0.0153)
Mean source region coefficient                                     −0.0743*** (0.0089)
Constant                                          0.413*** (0.011)  0.498*** (0.016)
No. obs.                                          141               141
R2                                                .018              .365

Note. Robust standard errors in parentheses. ***p < .01, **p < .05, *p < .1.


now pursue the question of whether heterogeneity in the background of migrants more generally may affect the level of income inequality in a country. If this were the case, then in our previous findings heterogeneity of early development might simply be proxying for more general heterogeneity. To address this issue, we examine two standard measures of heterogeneity as well as two new measures created using the matrix, and we compare the predictive power of these measures to each other and to the measures that incorporate early development.

The theory implicit in this exercise is that a country made up of people who are similar in terms of culture, language, religion, skin color, or similar attributes will ceteris paribus have lower inequality. This could come about through a number of different channels. Populations that are similar in the dimensions just listed may be more likely to intermarry and mix socially than populations that are diverse. This mixing could by itself reduce any inequality in the groups’ initial endowments, and would also likely be associated with an absence of institutions that magnify ethnic, racial, or economic distinctions. Countries in which people feel a strong sense of kinship with other citizens might also be expected to redistribute income more actively or promote economic mobility.

The first heterogeneity measure we use is ethnic fractionalization from Alesina et al. (2003). This is the probability that two randomly selected individuals will belong to different ethnic groups. Alesina et al. find that higher ethnic fractionalization is robustly correlated with poor government performance on a variety of dimensions. We create a second measure of fractionalization using the data in the matrix, which we call “historic fractionalization.” This is

1 − Σi wi²,

where wi is the fraction of a country’s ancestors coming from country i. Unlike the ethnic fractionalization index, the historic fractionalization index does not take into account ethnic groups composed of people who came from several source countries, such as African-Americans, but instead differentiates among, for example, Ghanaian, Senegalese, Angolan, and other ancestors of current residents of the United States. As Alesina et al. point out, individual self-identification with ethnic groups can change as a result of economic, social, or political forces. Thus ethnicity has a significant endogenous component that is absent in the case of historical fractionalization. Ethnic and historical fractionalization are almost uncorrelated (correlation coefficient .15). In particular, a large number of African countries have values of ethnic fractionalization near one but historical fractionalization near zero. The reason is that in these countries there is fractionalization based on tribal affiliation that is unrelated to the movement of people across current international borders over the last 500 years. There are also several countries (Haiti, Jamaica, Argentina, Israel, the United States) that have high historic fractionalization because they contain immigrants from many different countries, but a low level of ethnic fractionalization because immigrant groups from similar countries are viewed as having a single ethnicity.

The third measure of heterogeneity we use is “cultural diversity,” as constructed by Fearon (2003). Fearon’s measure is similar in spirit to the ethnic heterogeneity measure described above but goes further in making an additional adjustment for different degrees of dissimilarity (as measured by linguistic distance) among the ethnic groups in a country’s population. Desmet, Ortuño-Ortín, and Weber (2009), using a similar measure, find that higher linguistic heterogeneity predicts a lower degree of government income redistribution.

Our final measure of heterogeneity is similar in approach to Fearon’s, but instead of using the language that a country’s residents speak today, we use data on the languages spoken in the countries inhabited by their ancestors in 1500, according to our matrix. Differences in language may directly impede mixing of people from different source countries. In addition, linguistic closeness may well be proxying for other dimensions of culture (such as religion) that could have similar impacts on the degree of mixing among a country’s constituent populations and/or the openness of institutions.26

26. Spolaore and Wacziarg (2009) use genetic distance, a measure of the time since two populations shared a common ancestor, as an indicator of cultural similarity between countries. They argue that genetic distance determines the ability of countries to learn from each other, and show that it predicts income gaps among pairs of countries. Ethnic distance and genetic distance are closely related in practice, as shown by Cavalli-Sforza and Cavalli-Sforza (1995).

For these reasons, historical diversity in the languages of a country’s ancestors may have an impact on
In addition, linguistic closeness may well be proxying for other dimensions of culture (such as religion) that could have similar impacts on the degree of mixing among a country’s constituent populations and/or the openness of institutions.26 For these reasons, historical diversity in languages of a country’s ancestors may have an impact on 26. Spolaore and Wacziarg (2009) use genetic distance, a measure of the time since two populations shared a common ancestor, as an indicator of cultural similarity between countries. They argue that genetic distance determines the ability of countries to learn from each other, and show that it predicts income gaps among pairs of countries. Ethnic distance and genetic distance are closely related in practice, as shown by Cavalli-Sforza and Cavalli-Sforza (1995).


inequality that lasts long after the residents of a country have come to speak the same language. We call the variable we create historical linguistic fractionalization. (Our methodology and data are described in Online Appendix C.) Table X presents regressions of income inequality, as measured by the Gini coefficient, on our various measures of heterogeneity. The first four columns compare the four measures of heterogeneity described above. Cultural (linguistic) diversity is statistically insignificant. By contrast, the two variables that use the matrix to measure historical heterogeneity, historical fractionalization and historical linguistic fractionalization, as well as ethnic fractionalization enter very significantly with the expected positive sign. It is notable that in each case the measure of diversity based on historic variation performs better than the corresponding measure based on the current variation. For example, distance among the languages spoken by people’s ancestors predicts inequalities today far better than does distance among the languages spoken by those people themselves. In the case of variation in language, much of the superior predictive power is driven by Latin America, which in terms of language currently spoken does not look very heterogeneous but does look heterogeneous in terms of historic languages. Patterns of social differentiation that arose during the encounters of people from different continents appear to show persistence even after extensive intermixing and linguistic homogenization. Part of the reason for this could be that linguistic distance between ancestral populations posed barriers to transmission of technologies within countries of a kind similar to those that Spolaore and Wacziarg (2009) posit for genetic distance in international diffusion of technology. 
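As a minimal illustration of the historic fractionalization index defined earlier (1 − Σi wi²), with hypothetical ancestry shares:

```python
# Historic fractionalization: 1 - sum_i wi^2, where wi is the share of a
# country's ancestors coming from source country i (shares below are hypothetical).
def historic_fractionalization(shares):
    """Probability that two randomly drawn residents have ancestors from
    different source countries."""
    assert abs(sum(shares) - 1.0) < 1e-9, "ancestry shares must sum to one"
    return 1.0 - sum(w * w for w in shares)

print(historic_fractionalization([1.0]))  # 0.0: no post-1500 immigration
print(round(historic_fractionalization([0.744, 0.091, 0.157, 0.008]), 3))  # 0.413
```

The same formula applied to ethnic-group shares rather than migration-matrix rows yields the ethnic fractionalization index of Alesina et al. (2003).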
27. To save space, we don’t report parallel exercises using the standard deviation of agyears. In Section IV.C, we also focus on statehist. Tables II, III, and IV show that statehist and agyears have similar explanatory power, and we accord slight priority to statehist because of its more nuanced tracking of 1,500 years of social history (see our discussion comparing the two measures in Section III).

The next four columns of Table X repeat these regressions, controlling for the mean and standard deviation of the state history measures, as in columns (1) and (2) of Table IX.27 The somewhat surprising finding here is that variation in terms of state history dominates the other forms of heterogeneity that we examine. None of the other four measures of heterogeneity is statistically significant. Variation in early development among a country’s people is far more important than more standard forms

TABLE X
ETHNIC, LINGUISTIC, AND HISTORICAL DETERMINANTS OF CURRENT INEQUALITY
Dependent variable: Gini coefficient

                                          (1)               (2)               (3)               (4)
Ethnic fractionalization              0.117*** (0.037)
Historical fractionalization                            0.134*** (0.034)
Cultural diversity                                                        0.0354 (0.0437)
Historical linguistic fractionalization                                                     0.168*** (0.039)
Constant                              0.367*** (0.019) 0.382*** (0.014) 0.407*** (0.017) 0.385*** (0.013)
No. obs.                              132              135              132              135
R2                                    .073             .101             .005             .115

                                          (5)               (6)               (7)               (8)
Ethnic fractionalization              0.0517 (0.0355)
Historical fractionalization                           −0.0116 (0.0489)
Cultural diversity                                                       0.0126 (0.0382)
Historical linguistic fractionalization                                                    0.0460 (0.0721)
Standard deviation of statehist       0.392*** (0.085) 0.435*** (0.124) 0.401*** (0.086) 0.310* (0.167)
Ancestry-adjusted statehist          −0.130*** (0.040) −0.148*** (0.037) −0.151*** (0.038) −0.149*** (0.036)
Constant                              0.415*** (0.033) 0.446*** (0.025) 0.441*** (0.030) 0.445*** (0.024)
No. obs.                              132              135              132              135
R2                                    .276             .267             .275             .269

Note. Robust standard errors in parentheses. ***p < .01, **p < .05, *p < .1.

QUARTERLY JOURNAL OF ECONOMICS
POST-1500 POPULATION FLOWS AND LONG-RUN GROWTH

of heterogeneity (in language or ethnicity) as an explanation for inequality. Similarly, variation in the linguistic background of a country’s ancestors, despite its surprising predictive power relative to that of present languages spoken, is not important once one controls for variation in early development.

IV.C. Source Country Early Development as a Determinant of Relative Income

The results in Tables IX and X show that heterogeneity in the historical background of a country’s residents is correlated with income inequality today. A number of mechanisms could produce such a correlation. One simple theory is that when people with high and low statehist are mixed together, the high statehist people have some advantage that leads to their percolating up to the top of the income distribution, and then there is enough persistence so that their descendants are still there hundreds of years later. A second theory is that situations in which high and low statehist people were mixed together tended to occur in cases of colonization and/or slavery, and that in these circumstances high statehist people were able to create institutions that enabled groups at the top of the income distribution to remain there.

We do not propose to test these theories against each other. Instead we test an auxiliary prediction that follows from either of them: specifically, in countries with a high standard deviation of statehist, it is the ethnic groups that come from high statehist countries that tend to be at the top of the income distribution. Confirming this prediction would give us additional confidence that the link between the standard deviation of statehist and the current level of inequality is not spurious. To test this prediction, we looked for accounts of socioeconomic heterogeneity by country or region of ancestral origin in the ten countries in our sample having the highest standard deviation of statehist.
It is in countries where statehist is highly variable that we would be most likely to find differences in outcomes among nationality groups with different values of statehist. The countries are listed in Table XI. Not surprisingly, all are former colonies, seven of them in the Americas. Of the latter, three are in Central America, three in South America, and one in the Caribbean. We also list in Table XI the United States, which has the eighteenth highest standard deviation of statehist in the sample and is of particular interest due to its size, economic importance, and good data availability.

TABLE XI
STATEHIST AND RELATIVE INCOME FOR ANCESTRY GROUPS AND CURRENT ETHNIC GROUPS, SELECTED COUNTRIES

 #    Country                Standard dev. of statehist
 1    Fiji                   0.346
 2    Cape Verde             0.301
 3    Guyana                 0.293
 4    Panama                 0.292
 5    Paraguay               0.291
 6    South Africa           0.289
 7    Brazil                 0.288
 8    Trinidad and Tobago    0.284
 9    El Salvador            0.281
10    Nicaragua              0.277
18    United States          0.232

For each country the table additionally reports the Gini coefficient; the component groups by region of ancestral origin, with each group’s percent of population and average statehist; the component groups by current ethnicity, with each group’s percent of population and average statehist; and each ethnic group’s relative income (high, upper middle, middle, lower middle, or low).

Note. For detailed information sources, see Online Appendix D. Gini coefficient is from the UN World Income Inequality Database (2007), except Cape Verde: World Development Indicators. Component region group population percentages computed from the matrix. Component ethnic groups based on: Fiji: Household Survey 2002–2003; Cape Verde: Census 1950 (quoted in Lobban [1995, p. 199]); Guyana: Census 1980; Paraguay: Census 2002; Panama: data from Fearon (2003); South Africa: Household Survey 2005; Trinidad and Tobago: Continuous Sample Survey of Population; El Salvador: CIA Factbook; Nicaragua: CIA Factbook; Venezuela: CIA Factbook; United States: U.S. Census, Population Estimates Program, Vintage 2004.
a. Europeans, Chinese. b. One-half East Indian, one-half African. c. 0.35 African, 0.1 S. Asian, 0.05 Indonesian, 0.1 UK, 0.1 Netherlands, 0.1 France, 0.1 Germany, and 0.1 Portugal. d. 0.506 European, 0.239 Amerindian, 0.255 African. e. One-third African, one-third South Asian, one-third European. f. Includes Hawaiian and Alaskan.

For each country in the table we first show the breakdown of the population in terms of origin countries or groups of similar countries, according to the matrix. We then show the weighted average value of statehist for each origin country or group. The next three columns are based on information about the current ethnic breakdown in the country. Ethnic groups as currently identified sometimes correspond to individual origin groups but are often combinations, referred to for example as mestizo, creole, colored, or mixed. For each current ethnic group, we then present estimates of average statehist and the relative value of current income, listed as high, middle, and low or high, upper middle, lower middle, and low (see Online Appendix D for the country-specific sources of this income breakdown).

To estimate statehist for a mixed ethnic group, we use the assumptions underlying the matrix that relate mixed groups to source populations. For example, the group termed “colored” in South Africa is assumed to have half of its ancestors coming in equal proportions from five European countries (England, Portugal, and Afrikaner source countries Netherlands, France, and Germany) and the other half in unequal proportions from South Africa itself (35%), India (10%), and Indonesia (5%). These assumptions are reported in the region Appendices describing the construction of the matrix.

Leaving details to Online Appendix D, we note immediately that the orderings of statehist values and of socioeconomic status in Table XI have at least some correspondence in every country. For nine of the eleven countries listed—Fiji, Cape Verde, Guyana, Paraguay, Panama, South Africa, Brazil, El Salvador, and Nicaragua—the socioeconomic ordering perfectly dovetails with that of statehist values.
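The mixed-group calculation just described is a weighted average of source-country statehist values. A sketch using the ancestry weights assumed for South Africa's "colored" group (half European in equal fifths; 35% South Africa, 10% India, 5% Indonesia); the statehist index values below are illustrative placeholders, not the paper's actual figures:

```python
# Ancestry weights for South Africa's "colored" group, as assumed in
# the migration matrix (see the decomposition in Table XI, note c).
ancestry_weights = {
    "South Africa": 0.35, "India": 0.10, "Indonesia": 0.05,
    "UK": 0.10, "Netherlands": 0.10, "France": 0.10,
    "Germany": 0.10, "Portugal": 0.10,
}

# Illustrative placeholder statehist values, NOT the paper's index.
statehist = {
    "South Africa": 0.0, "India": 0.68, "Indonesia": 0.47,
    "UK": 0.69, "Netherlands": 0.69, "France": 0.72,
    "Germany": 0.69, "Portugal": 0.72,
}

# Group statehist = ancestry-weighted average of source-country values.
group_statehist = sum(w * statehist[c] for c, w in ancestry_weights.items())
print(round(group_statehist, 3))
```

The weights sum to one by construction, so the group value always lies between the lowest and highest source-country statehist.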
In two countries—Trinidad and Tobago and the United States—there are discrepancies in the orderings of Asians and “Whites”: Chinese and (South Asian) Indians have lower incomes than Whites in the first country despite having higher statehist, whereas Asians in general have higher incomes than Whites in the United States despite lower average statehist. For the United States, there is a further discrepancy in that “Black” Americans have lower average incomes than American Indians and Alaska Natives, despite having somewhat higher average statehist values. Although no statistical significance should be attached to the counts just mentioned, because the categorizations are quite broad and require some judgments to be made, the general pattern clearly supports the expectation.

A few patterns are noteworthy. Paraguay and El Salvador are representative of the many Latin American countries in which


the main identifiable groupings, listed in order both of socioeconomic status and of average statehist, are European, mestizo, and Amerindian. Three of the represented countries—Panama, Nicaragua, and Brazil—add a group of largely African descent to this tripartite pattern. In each of the latter countries, the White group remains on top and the Amerindian group on the bottom. The Black group, with higher statehist than the Amerindians,28 is found roughly on par with the mestizos (Nicaragua) or between the mestizo and Amerindian groups (Panama). In Brazil, mestizos, Blacks, and Amerindians are classified as low because data that discriminate more carefully among them are unavailable.

In two of the other represented countries of the Americas—Guyana and Trinidad and Tobago—there are substantial populations of South Asian origin. In Trinidad and Tobago, the socioeconomic positioning of this group is lower than predicted by their average statehist. This result, contradicting our general hypothesis, may be related to the economic hard times on which South Asia itself had fallen by the nineteenth century (mentioned in Section III.E) and the manner in which millions were brought from that region to the Caribbean to work in indentured servitude after Britain outlawed slavery. Consistent with the expectations based on their homelands’ state histories, however, people of South Asian ancestry occupy middle or upper-middle socioeconomic positions in Guyana and also in two of the three non-Americas examples, South Africa and Fiji.

Of the two African countries represented in Table XI, Cape Verde began as a Portuguese plantation economy employing slaves brought from the African mainland.
At the time of the country’s independence from Portugal, in 1975, the society was described as being stratified along color lines, with people of darker complexion usually found in the lower class and people of lighter complexion constituting the “bourgeoisie” (Meintel 1984; Lobban 1995). The correlation between complexion and socioeconomic class is consistent with our proposed explanation of the correlation between the standard deviation of statehist and the Gini coefficient seen in Tables IX and X. In South Africa, the major population categories are Black African, White, “colored” (with both European and either African, Indian, or Malay ancestors), and Indian

28. This is due to the existence of some states in Africa before 1500 but their absence in the Americas outside of Mexico, Guatemala, and the Andes. Note that the situation is reversed in some cases; for instance, the indigenous people of South Africa have a lower statehist value than those of Mexico and Peru.


or Asian. The socioeconomic standings of these groups today remain heavily influenced by the history of European settlement and subordination of the local population, and partly as a result, the average incomes of the four groupings are ordered exactly in accord with the ordering of average statehist.

The only case in Table XI not located in the Americas or Africa is Fiji, whose population is classified by government statisticians as indigenous (55.0%), Indian (41.0%), and other (mainly European and Chinese, 4.0%). Average household incomes per adult in the three groups are ordered identically to average statehist values. Although the reported income gap between the Indian and native Fijian populations is far smaller than the difference in statehist, the government statisticians comment that the incomes of Indo-Fijians are probably undercounted, because much of their income comes from private business activities likely to be underreported.

Turning finally to the United States, the Census Bureau reports a breakdown of the population into White non-Hispanic, Hispanic of any race, Black, Asian, American Indian and Alaska Native, and other small categories. These groups’ reported median incomes have the same ordering as their average statehist values, with the exceptions of the higher Asian than White income and the higher American Indian than Black income. The simple correlation between the five statehist and the five income values (as reported in Online Appendix D), with equal weighting on all observations, is 0.741.

On balance, the evidence from the ten countries with the highest internal variation of statehist and from the eighteenth-ranking United States supports the idea that the correlation between within-country differences in income and corresponding differences in the early development indicator statehist at least partially accounts for the predictive power of the standard deviation of statehist in the Table IX and X regressions.
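The correlation just cited is an ordinary equal-weighted Pearson correlation between a group's average statehist and its income. A sketch of the computation; the inputs below are illustrative placeholders, not the actual five values reported in Online Appendix D:

```python
def pearson(xs, ys):
    """Equal-weighted Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Illustrative placeholders: five groups' average statehist and a
# numeric coding of their median incomes (1 = lowest, 5 = highest).
statehist_vals = [0.650, 0.640, 0.485, 0.240, 0.000]
income_coding = [5, 4, 3, 1, 2]
print(round(pearson(statehist_vals, income_coding), 3))
```

With the actual statehist and median-income values the same computation yields the 0.741 reported in the text.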
Indeed, in this section we have found within countries (as the previous section found between countries) that there is considerable persistence and reproduction of income differences, which appears to reflect social differences dating back up to half a millennium. To be sure, in the majority of cases just discussed, differences in societal capabilities during the era of European expansion played themselves out to a considerable degree in the form of outright dominance of some over others, including appropriation of land, control of government and monopoly of armed force, and involuntary movement of millions of people between macro-regions to meet the conquering population’s labor demands. How persistent early differences


would have proven to be in the absence of the exercise of raw power is a question that goes beyond the scope of our paper. The point for present purposes is that as history has in fact unfolded, such differences have been remarkably persistent.

V. CONCLUSIONS

Conquest, colonialism, migration, slavery, and epidemic disease reshaped the world that existed before the era of European expansion. Over the last 500 years, there have been dramatic movements of people, institutions, cultures, and languages among the world’s major regions. These movements clearly have implications for the course of economic development. Existing literature has already made a good start at examining how institutions were transferred between regions and the long-lasting economic effects of these transfers. However, the human side of the story—the relationship between where the ancestors of a country’s current population lived and current outcomes—has received relatively little attention, in part due to the absence of suitable data.

In this paper, we introduce a “world migration matrix” to account for international movements of people since the year 1500, and more specifically for the impact that those movements and subsequent population growth have had on the ancestries of countries’ populations today. We use the matrix to document some major features of world migration’s impacts on ancestry, such as the bimodality of the distribution of indigenous and nonindigenous people by country and the variations in the primary source regions for immigrant-populated countries.

In the second part of the paper, we demonstrate the utility of the migration data by using them to revisit the hypothesis that early development of agrarian societies and their sociopolitical correlates—states—conferred developmental advantages that remain relevant today. We confirm that in a global sample, countries on whose territories agriculture and states developed earlier have higher incomes.
But we conjecture that people who moved from one region to another carried the human capabilities built up in the former area with them. We find that recalculating state history and agriculture measures for each country as weighted averages by place of origin of their people’s ancestors considerably improves the fit of these regressions. We also find that heterogeneity of early development, holding the mean level constant, is associated with higher per capita income. We interpret this finding as indicating


that the effect of spillovers of growth-promoting characteristics between groups having different early development histories more than compensated for any negative effect on growth of higher inequality due to heterogeneity.

In Section IV, we show that the heterogeneity of a country’s population in terms of the early development of its ancestors as of 1500 is strongly correlated with income inequality. We also show that heterogeneity with respect to country of ancestry or with respect to the ancestral language does a better job than does current linguistic or ethnic heterogeneity in predicting income inequalities today. As an additional test of the theory that early development conferred lasting advantage, we show that the rankings of ethnic or racial groups within a country’s income distribution are strongly correlated with the average levels of groups’ early development indicators.

The overall finding of our paper is that the origins of a country’s population—more specifically, where the ancestors of the current population lived some 500 years ago—matter for economic outcomes today. Having ancestors who lived in places with early agricultural and political development is good for income today, both at the level of country averages and in terms of an individual’s position within a country’s income distribution.

Exactly why the origins of the current population matter is a question on which we can only speculate at this point. People who moved across borders brought with them human capital, cultures, genes, institutions, and languages. People who came from areas that developed early evidently brought with them versions of one or more of these things that were conducive to higher income. Future research will have to sort out which ones were the most significant.
The fact that early development explains an ethnic group’s position within a country’s income distribution suggests that “good institutions” coming from regions of early development cannot be the whole story, although it does not prove that institutions are not of enormous importance. More research is also needed to understand how early development led to the creation of growth-promoting characteristics (whatever these turn out to be), how these characteristics were transmitted so persistently over the centuries, and the process by which these characteristics are transferred between populations of high and low early development. Our hope is that the availability of a compilation of data on the reconfiguration of country populations since 1500 will make it easier to address such issues in future research.


APPENDIX I: WORLD MIGRATION MATRIX, 1500–2000

The goal of the matrix is to identify where the ancestors of the permanent residents of today’s countries were living in 1500 CE.29 In this abbreviated description, we address some major conceptual issues relevant to the construction of the matrix and identify some of the main sources of information consulted.

The migration matrix is a table in which both row and column headings are the names of currently existing countries, and cell entries are estimates of the proportion of the ancestors of those now permanently residing in the country identified in the row heading who lived in the country identified by the column heading in 1500. An ancestor is treated as having lived in what is now, say, Indonesia, if the place he or she then resided in is within the borders of Indonesia today.

When ancestors could be identified only as part of an ethnic group that lived in a region now straddling the borders of two or more present-day countries, we try to estimate the proportion of that group living in each country and then allocate ancestry accordingly. For example, if a given ancestor is known to have been a “Gypsy” (Roma) but we have no information on which country he or she lived in during the year 1500, we apply an assumption (see Online Appendix B) regarding the proportion of Gypsies who lived in Greece, Romania, Turkey, etc., as of 1500. The Gypsy example is one of many illustrating the fact that most of our data sources organize their information around ethnic groups rather than territories of origin. Although the use of information on ethnicity was unavoidable in the process of constructing the matrix, it was not a focus of attention in its own right.

In cases in which ancestors are known to have migrated more than once between 1500 and 2000, countries of intervening residence are not indicated in the matrix.
For example, an Israeli whose parents lived in Argentina but whose grandparents arrived in Argentina from Ukraine is listed as having had ancestors in Ukraine.

People of mixed ancestry are common in many countries—for example, people of mixed Amerindian and Spanish ancestry in Mexico. Such individuals are treated as having a certain proportion of their ancestry deriving from each source country. When members of such groups are reported to account for 30% or more

29. This is an abbreviated version of Main Appendix 1.1 in Online Appendix B, which is linked to region summaries and the data set itself. All can be found at http://www.econ.brown.edu/fac/Louis Putterman/.


of a country’s population, we searched the specialized scientific literature on genetic admixture for the best available estimates. For smaller mixed groups we base estimates on the stated or implicit assumptions of conventional sources or on extrapolation from similar countries for which we had genetic estimates. Our assumed breakdowns of mixed populations for each country are discussed in the region Appendices in Online Appendix B.

Because our interest is in the possible impact of its people’s origins on each country’s economic performance, we try to identify the origins of long-term residents only, thus leaving out guest or temporary workers. Very few data are available about the duration of stay of most temporary workers, so we made educated guesses as to what portion of the originally temporary residents have become permanent, understood as having been in the country at least ten years as of 2000.

The matrix includes entries on all countries existing in 2000 that had populations of one-half million or more. A country is included as a source country for ancestors of the people of another country if at least 0.5% of all ancestors alive in 1500 are estimated to have lived there. Some entries smaller than 0.5% are found in the matrix, but these occur as a result of special decompositions applied to populations that our sources identify by ethnic group rather than by country of origin—for example, Gypsies, Africans (descended from slaves, especially in the Americas), and Ashkenazi Jews. The region Appendices in Online Appendix B detail the method of assigning fractions of these populations to individual source countries. Some of the more important sources from which data were drawn for the construction of the matrix are listed below.
See Online Appendix B and its regional Appendices for other sources and details:

Columbia Encyclopedia (online edition)
CIA World Factbook
Countriesquest.com
Encyclopædia Britannica (online edition)
Everyculture.com
Library of Congress, Federal Research Division, Country Studies
MSN Encarta Encyclopedia (online edition)
Nationsencyclopedia.com
World Christian Database (original source for WCE)
World Christian Encyclopedia
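Conceptually, the ancestry adjustment used throughout the paper treats each row of the migration matrix as a set of weights over 1500 source countries (the row sums to one) and averages the source countries' statehist values with those weights. A toy sketch, with made-up country labels and index values rather than the actual data set:

```python
# Toy migration matrix: rows = present-day countries, columns = 1500
# source countries. Each cell is the share of the row country's
# ancestors who lived in the column country in 1500. All values here
# are invented for illustration.
migration_matrix = {
    "A": {"A": 1.00, "B": 0.00, "C": 0.00},  # fully indigenous population
    "B": {"A": 0.10, "B": 0.60, "C": 0.30},  # partly immigrant-populated
}
statehist = {"A": 0.9, "B": 0.4, "C": 0.1}   # illustrative index values

def ancestry_adjusted(country):
    """Weighted average of source-country statehist, using the
    country's ancestor shares as weights."""
    row = migration_matrix[country]
    assert abs(sum(row.values()) - 1.0) < 1e-9  # ancestor shares sum to one
    return sum(share * statehist[src] for src, share in row.items())

print(ancestry_adjusted("A"))            # unchanged for an indigenous population
print(round(ancestry_adjusted("B"), 2))
```

For a country whose population is entirely indigenous, the adjustment leaves the measure unchanged; the larger the immigrant share, the more the adjusted value reflects the source countries' histories.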

APPENDIX II: HORSE RACE REGRESSIONS USING DIFFERENT MEASURES OF EARLY DEVELOPMENT

Dependent var.: ln(GDP per capita 2000)

Columns (1)–(15) regress log income per capita on various combinations of the ancestry-adjusted measures of early development: statehist, agyears, geo conditions, bio conditions, Technology index AD 0, and Technology index AD 1500. Every specification has 92 observations and an estimated constant of 8.64∗∗∗; R2 ranges from .255 to .636 across the fifteen columns.

Notes. All right hand–side variables are ancestry-adjusted and normalized to have a mean of zero and a standard deviation of one. Robust standard errors in parentheses. ∗∗∗ p < .01, ∗∗ p < .05, ∗ p < .1.
BROWN UNIVERSITY

BROWN UNIVERSITY AND NATIONAL BUREAU OF ECONOMIC RESEARCH

REFERENCES

Acemoglu, Daron, Simon Johnson, and James Robinson, “The Colonial Origins of Comparative Development: An Empirical Investigation,” American Economic Review, 91 (2001), 1369–1401.
——, “Reversal of Fortunes: Geography and Institutions in the Making of the Modern World Income Distribution,” Quarterly Journal of Economics, 117 (2002), 1231–1294.
Alesina, Alberto, Arnaud Devleeschauwer, William Easterly, Sergio Kurlat, and Romain Wacziarg, “Fractionalization,” Journal of Economic Growth, 8 (2003), 155–194.
Ashraf, Quamrul, and Oded Galor, “Human Genetic Diversity and Comparative Economic Development,” Brown University Working Paper, 2008.
Barro, Robert J., “Inequality and Growth Revisited,” Asian Development Bank Working Paper Series on Regional Economic Integration No. 11, January 2008.
Bockstette, Valerie, Areendam Chanda, and Louis Putterman, “States and Markets: The Advantage of an Early Start,” Journal of Economic Growth, 7 (2002), 347–369.
Cavalli-Sforza, Luigi L., and Francesco Cavalli-Sforza, The Great Human Diasporas (Reading, MA: Addison Wesley, 1995).
Chanda, Areendam, and Louis Putterman, “State Effectiveness, Economic Growth, and the Age of States,” in States and Development: Historical Antecedents of Stagnation and Advance, Matthew Lange and Dietrich Rueschemeyer, eds. (Basingstoke, UK: Palgrave Macmillan, 2005).
——, “Early Starts, Reversals and Catch-Up in the Process of Economic Development,” Scandinavian Journal of Economics, 109 (2007), 387–413.
Comin, Diego, William Easterly, and Erick Gong, “Was the Wealth of Nations Determined in 1000 BC?” NBER Working Paper No. W12657, 2006.
——, “Was the Wealth of Nations Determined in 1000 BC?” American Economic Journal: Macroeconomics, 2 (2010), 65–97.
Correlates of War Project, Direct Contiguity Data, 1816–2000, Version 3.0 (http://correlatesofwar.org, 2000).
Desmet, Klaus, Ignacio Ortuño-Ortín, and Shlomo Weber, “Linguistic Diversity and Redistribution,” Journal of the European Economic Association, 6 (2009), 1291–1318.
Diamond, Jared, Guns, Germs and Steel (New York: Norton, 1997).
Easterly, William, “Inequality Does Cause Underdevelopment: Insights from a New Instrument,” Journal of Development Economics, 84 (2007), 755–776.
Engerman, Stanley, and Kenneth Sokoloff, “Factor Endowments, Inequality, and Paths of Development among New World Economies,” Economia, 3 (2002), 41–109.
Fearon, James D., “Ethnic Structure and Cultural Diversity by Country,” Journal of Economic Growth, 8 (2003), 195–222.
Galor, Oded, and Omer Moav, “The Neolithic Origins of Contemporary Variation in Life Expectancy,” Brown University Department of Economics Working Paper 2007-14, 2007.
Glaeser, Edward L., Rafael La Porta, Florencio Lopez-de-Silanes, and Andrei Shleifer, “Do Institutions Cause Growth?” Journal of Economic Growth, 9 (2004), 271–303.
Guiso, Luigi, Paola Sapienza, and Luigi Zingales, “Does Culture Affect Economic Outcomes?” Journal of Economic Perspectives, 20 (2006), 23–48.
Hall, Robert, and Charles Jones, “Why Do Some Countries Produce So Much More Output Than Others?” Quarterly Journal of Economics, 114 (1999), 83–116.
Hibbs, Douglas A., and Ola Olsson, “Geography, Biogeography, and Why Some Countries Are Rich and Others Are Poor,” Proceedings of the National Academy of Sciences, 101 (2004), 3715–3720.


Johnson, Allen, and Timothy Earle, The Evolution of Human Societies: From Foraging Groups to Agrarian State (Stanford, CA: Stanford University Press, 1987).
Knack, Stephen, and Philip Keefer, “Does Social Capital Have an Economic Payoff? A Cross-Country Investigation,” Quarterly Journal of Economics, 112 (1997), 1251–1288.
La Porta, Rafael, Florencio Lopez-de-Silanes, Andrei Shleifer, and Robert Vishny, “Law and Finance,” Journal of Political Economy, 106 (1998), 1113–1155.
Lobban, Richard, Cape Verde: Crioulo Colony to Independent Nation (Boulder, CO: Westview Press, 1995).
MacNeish, Richard, The Origins of Agriculture and Settled Life (Norman: University of Oklahoma Press, 1991).
McEvedy, Colin, and Richard Jones, Atlas of World Population History (New York: Viking Press, 1978).
Meintel, Deirdre, “Race, Culture and Portuguese Colonialism in Cabo Verde,” Syracuse University, Foreign and Comparative Studies, African Series, No. 41, 1984.
Nunn, Nathan, “The Long Term Effects of Africa’s Slave Trades,” Quarterly Journal of Economics, 123 (2008), 139–176.
Olsson, Ola, and Douglas A. Hibbs, Jr., “Biogeography and Long-Run Economic Development,” European Economic Review, 49 (2005), 909–938.
Putterman, Louis, “State Antiquity Index Version 3” (http://www.econ.brown.edu/fac/Louis Putterman/, 2004).
Putterman, Louis, with Cary Anne Trainor, “Agricultural Transition Year Country Data Set” (http://www.econ.brown.edu/fac/Louis Putterman, 2006).
Smith, Bruce, The Emergence of Agriculture (New York: Scientific American Library, 1995).
Sokoloff, Kenneth, and Stanley Engerman, “Institutions, Factor Endowments and Paths to Development in the New World,” Journal of Economic Perspectives, 14 (2002), 217–232.
Spolaore, Enrico, and Romain Wacziarg, “The Diffusion of Development,” Quarterly Journal of Economics, 124 (2009), 469–527.
Tabellini, Guido, “Culture and Institutions: Economic Development in the Regions of Europe,” CESifo Working Paper No. 1492, 2005.

COMPETITION AND BIAS∗

HARRISON HONG AND MARCIN KACPERCZYK

We attempt to measure the effect of competition on bias in the context of analyst earnings forecasts, which are known to be excessively optimistic because of conflicts of interest. Our natural experiment for competition is mergers of brokerage houses, which result in the firing of analysts because of redundancy (e.g., one of the two oil stock analysts is let go) and other reasons such as culture clash. We use this decrease in analyst coverage for stocks covered by both merging houses before the merger (the treatment sample) to measure the causal effect of competition on bias. We find that the treatment sample simultaneously experiences a decrease in analyst coverage and an increase in optimism bias the year after the merger relative to a control group of stocks, consistent with competition reducing bias. The implied economic effect from our natural experiment is significantly larger than estimates from OLS regressions that do not correct for the endogeneity of coverage. This effect is much more significant for stocks with little initial analyst coverage or competition.

I. INTRODUCTION

We study the effect of competition on reporting bias. Efficient outcomes in many markets depend on individuals having accurate information. Yet the suppliers who report this information often have other incentives in addition to accuracy. Two notable examples are (1) media outlets that trade off profits from providing informative news to consumers and voters versus printing information favorable to companies or political clients and (2) credit rating agencies such as Moody’s and Standard & Poor’s (S&P’s) that get paid by the corporations they are supposed to evaluate. Will such conflicts of interest in these important economic and political markets lead to consumers, voters, and investors having poor information? Or can the market discipline these supply-side incentives and limit the distortions? The effect of competition from having more suppliers on bias is an important part of answering these questions.

The theoretical literature on the economics of reporting bias yields ambiguous answers when it comes to the potentially

∗ We thank three anonymous referees, Robert Barro (the editor), Edward Glaeser (the editor), Rick Green, Paul Healy, Jeff Kubik, Kai Li, Marco Ottaviani, Daniel Paravisini, Amit Seru, Kent Womack, Eric Zitzewitz, and seminar participants at Copenhagen Business School, Dartmouth, HBS, INSEAD, Michigan State, the Norwegian School of Management, Princeton, SMU, Texas, UBC, the UBC Summer Conference, the AFA 2009 Meetings, and the NBER Behavioral Finance Conference for a number of helpful suggestions. © 2010 by the President and Fellows of Harvard College and the Massachusetts Institute of
Technology. The Quarterly Journal of Economics, November 2010


disciplining role of competition. One strand of this literature argues that competition from suppliers (e.g., more newspapers or rating agencies) makes it more difficult for a firm (e.g., a political client or a bank) to suppress information (Besley and Prat 2006; Gentzkow and Shapiro 2006). Intuitively, the more suppliers of information are covering the firm, the more costly it will be for the firm to keep unfavorable news suppressed. Another perspective, the catering view, is that competition need not reduce and may even increase bias if the end users (voters, consumers, or investors) want to hear reports that conform to their priors (e.g., Mullainathan and Shleifer [2005]).1

In contrast to the theoretical side, empirical work on reporting bias is limited. There is some evidence that competitive pressures have helped discipline the media market. For example, a number of case studies of political scandals suggest that competition helps subvert attempted suppression of news (Gentzkow and Shapiro 2008). In addition, Gentzkow, Glaeser, and Goldin (2006), in their study of U.S. newspapers in the nineteenth century, point out that the papers of the time were biased tools funded by political clients. Importantly, they show that this situation changed in the late nineteenth century, when cheap newsprint engendered greater competition, which increased the rewards to objective reporting. These case studies notwithstanding, we have virtually no systematic evidence on the relationship between competition and bias.

In this paper, we attempt to measure the effect of competition on bias in the context of the market for security analyst earnings forecasts. This setting is an ideal one for studying this issue for several reasons. First, the trade-offs faced by security analysts in issuing forecasts are similar to those of suppliers of reports in other markets such as media or credit ratings.
Namely, analysts’ earnings forecasts are optimistically biased because of conflicts of interest—a desire to be objective by producing the accurate forecasts desired by investors versus the prospect of payoffs from the companies that they cover, which want positive reports.2 Hence,

1. There is a related literature on professional strategic forecasting in which forecasters, in a rank-order contest based on accuracy, differentiate themselves in their forecasts strategically à la Hotelling in equilibrium (Laster, Bennett, and Geoum 1999; Ottaviani and Sørensen 2005). These models are good at producing dispersion in forecasts due to convex payoffs (associated with publicity, etc.), and competition can lead to more dispersed forecasts. However, they do not necessarily lead to bias on average.
2. Companies naturally like analysts to be optimistic about their stocks, particularly when they are making initial or seasoned equity offerings. They would


the lessons regarding competition and bias obtained from our setting can be applied more broadly to other markets. Second, at least two of the economic channels laid out in the theory literature, under which competition disciplines supply-side incentive distortions, are likely to apply to the market for analyst forecasts. Both point in the direction that an increase in competition among analysts should lead to less bias.

The first channel is what Gentzkow and Shapiro (2008) term the independence rationale: competition means a greater diversity of preferences among suppliers of information (analysts in our context) and hence a greater likelihood of drawing at least one independent supplier or analyst whose preference is such that he or she cannot be bought by the firm. This supplier’s independence can have a disciplining effect on other suppliers. For instance, if an independent analyst makes a piece of bad news public, then the other analysts will be forced to do so as well. A recent example that illustrates this independence-rationale mechanism at work in the market for analyst forecasts is the negative report produced by Meredith Whitney, a then-unknown analyst at a lower-tier brokerage house named Oppenheimer, on Citibank on October 31, 2007. Citibank had a large market capitalization and was covered by in excess of twenty analysts. One can view Whitney as the draw of an independent analyst from among many. Whitney argued in the report that Citibank might go bankrupt as a result of its subprime mortgage holdings. Her report is now widely acknowledged as forcing the release of the pent-up bad news regarding financial firms, which had gone unreported by other analysts. Her report had a disciplining effect on her peers at other brokerage houses, as they subsequently also reported the same negative news about Citibank.
A similar, though less famous, example is Ivy Zelman, a housing analyst for Credit Suisse, who issued negative reports on the housing industry.3

not do business with an investment bank if the analyst were not positive about the stock (e.g., Brown, Foster, and Noreen [1985], Stickel [1990], Abarbanell [1991], Dreman and Berry [1995], and Chopra [1998]). A number of papers find that an analyst from a brokerage house that has an underwriting relationship with a stock tends to issue more positive predictions than analysts from nonaffiliated houses (e.g., Dugar and Nathan [1995], Lin and McNichols [1998], Dechow, Hutton, and Sloan [1999], and Michaely and Womack [1999]). Importantly, analysts’ career outcomes depend both on relative accuracy and on optimism bias (e.g., Hong and Kubik [2003] and Fang and Yasuda [2009]).
3. Anecdotes also suggest that independent analysts who “blow the whistle” tend to come from lower-tier brokerage houses that have lower investment-banking business revenues.


A second channel whereby competition limits bias, which also operates in this market, is that a firm’s cost of influence increases with the number of suppliers or analysts. In the model of Besley and Prat (2006), if N analysts are all suppressing information in equilibrium and issuing optimistically biased reports, a single deviator who releases a bad forecast gets the same payoff as a monopolist. The bribe that must be paid to each analyst to suppress information is thus independent of N, so the total bribe is increasing in N. Increasing the number of analysts makes it more difficult to suppress information for the same reason that it makes it more likely that tacit collusion will break down. An implication of this mechanism is that the rewards are disproportionately high for the deviator if the N − 1 other analysts are suppressing information. The Whitney and Zelman examples also seem to bear out this implication. The deviators Whitney and Zelman became famous because few other negative reports were issued by their peers. Their rewards were by all accounts disproportionate. Beyond being offered jobs at better and higher-paying brokerage houses, they ended up being famous enough to start their own advisory businesses with special clients and revenue streams.

A third reason that the market for analyst forecasts is ideal for studying the relationship between competition and bias is that there are plentiful micro data on analysts and their performance that are not easily accessible in other markets, as well as opportunities to exploit natural experiments for identification. In particular, we identify the causal effect of competition or coverage on bias by using mergers of brokerage houses as a natural experiment. When brokerage houses merge, they typically fire analysts because of redundancy and potentially lose additional analysts for other reasons, including culture clash and merger turmoil (e.g., Wu and Zang [2009]).
For example, if the merging houses each had one analyst covering an oil stock, they would only keep one of the two oil stock analysts after the merger. We use this decrease in analyst coverage for stocks covered by both merging houses before the merger (the treatment sample) to measure the causal effect of competition on bias. During the period from 1980 to 2005, we identify fifteen mergers of brokerage houses (which took place throughout this twenty-five-year period) that affected 948 stocks (stocks covered by both merging houses) or 1,656 stock observations. We measure the change in analyst coverage and mean bias for the stocks in the treatment sample from one year before the merger to one year


after relative to a control group of stocks. The control group is stocks with the same market capitalization, market-to-book ratio, past return, and analyst coverage features before the merger as the treatment sample. The exclusion restriction is that the change in the mean bias of the treatment sample across the merger date is not due to any factor other than the merger leading to a decrease in analyst coverage of those stocks. We think this is a good experiment because the merger-related departures of analysts due to redundancy or culture clash ought not a priori to be related to anything having to do with the biases of the forecasts of the other analysts, particularly those working for other houses.

As a benchmark, we begin with simple OLS regressions of average bias of earnings forecasts on analyst coverage.4 Henceforth, we will refer to the average or median bias of a stock simply as the bias of that stock. We restrict ourselves to stocks in the top 25% of the market capitalization distribution to facilitate comparison with the results from our natural experiment. The mean analyst coverage of these stocks is about 21 analysts and the standard deviation across stocks is about 10 analysts. Depending on the specifications we use, the economic effects are small to none. The largest effect we find is that a decrease of one analyst leads to an increase in bias of 0.0002 (2 basis points). The bias for a typical stock is about 0.03 (3%) with a standard deviation across stocks of about 0.03 (3%). Hence, these estimates obtained from cross-sectional regressions suggest that a decrease in coverage of one analyst raises bias by anywhere from zero to a modest 60 basis points as a fraction of the cross-sectional standard deviation of bias. Of course, these regressions are difficult to interpret, because of the endogeneity of analyst coverage.
Existing studies suggest a selection bias in coverage in that analysts tend not to cover stocks that they do not issue positive forecasts about (e.g., McNichols and O’Brien [1997]). In this instance, we would then expect to find a larger causal effect from competition if we could randomly allocate analysts to different stocks. Thus, we evaluate our natural experiment by first verifying the premise of our experiment regarding the change in analyst

4. Lim (2001) also tries to explain analyst bias by arguing that bias helps analysts to get access to a firm and hence to provide more accurate forecasts, and shows a negative correlation between bias and coverage in the cross section. His main variable of interest, however, is stock price volatility, whereas coverage is just another proxy for firm size. As we show, OLS estimates of bias on coverage are specification-dependent.


coverage for the treatment sample from the year before the merger to the year after.5 We find, as expected, that the average drop in coverage for the treatment sample (using the most conservative control group) is around one analyst, with a t-statistic of around 5.7. The effect is economically and statistically significant in the direction predicted. We then find that the treatment sample simultaneously experiences an increase in optimism bias the year after the merger relative to a control group of stocks. A conservative estimate is that the mean optimism bias increases by fifteen basis points (as a result of reducing coverage by one analyst). As we mentioned earlier, the sample for the natural experiment is similar to that of the OLS by construction. This is a sizable difference and suggests that the OLS estimates are biased downward. Importantly, we find the same results when we look at the change in bias for analysts covering the same stocks but not employed by the merging firms, so our effect is not due to the selection of an optimistic analyst by the merging firms. We also find that this competition effect is significantly more pronounced for stocks with smaller analyst coverage (less than or equal to five). As we discuss below, these key additional results of our paper are consistent with the competition mechanisms articulated above. We then conduct a number of analyses to verify the validity of our natural experiment, including showing that mergers change bias rather than actual earnings or other firm characteristics, showing that mergers do not predict pretrends, and using nonparametric analysis to verify a shift in the distribution of forecasts. We also conduct a number of robustness exercises, including using an alternative regression framework that controls for brokerage-house fixed effects as well as firm fixed effects. Our results remain after all these additional analyses.
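The treatment–control comparison described above boils down to a simple difference-in-differences calculation. The sketch below uses invented pre/post numbers (chosen only to echo the fifteen-basis-point magnitude mentioned in the text), not the paper's actual data.

```python
# Mean optimism bias (forecast minus actual EPS, scaled by the prior-year
# stock price) one year before and one year after a brokerage merger.
# All four numbers are illustrative placeholders.
treat_pre, treat_post = 0.0270, 0.0285   # stocks covered by both merging houses
ctrl_pre, ctrl_post = 0.0270, 0.0270     # matched control stocks

# Difference-in-differences: change for treated minus change for controls.
did = (treat_post - treat_pre) - (ctrl_post - ctrl_pre)
print(f"implied merger effect on bias: {did:.4f}")  # 0.0015, i.e., 15 basis points
```

Matching the control group on size, market-to-book, past return, and premerger coverage (as the text describes) is what makes the second difference a plausible counterfactual for the first.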
Finally, we examine some auxiliary implications of the competitive pressure view, including looking at how implicit incentives for bias vary with analyst coverage. The rest of the paper proceeds as follows. We describe the data in Section II and estimate the OLS regressions of bias on analyst

5. We expect these stocks to experience a decrease in coverage because one of the redundant analysts is typically let go. The exact number depends on a couple of factors. On one hand, the fired analyst might get a job with another firm and cover the same stock, which means the decrease in coverage might be less than one. On the other hand, a firm might lose or fire both analysts for reasons of culture clash or merger turmoil. In this case, if neither analyst is rehired by another firm, we would see a decrease in coverage of two analysts. What the magnitude turns out to be is an empirical question.


coverage in Section III. In Section IV, we provide background and statistics on the mergers. We discuss the methodology we use to measure the effect of the mergers on analyst coverage and bias in Section V and describe the results in Section VI. We conclude in Section VII.

II. DATA

Our data on security analysts come from the Institutional Brokers’ Estimate System (IBES) database. Our full sample covers the period 1980–2005. In our study, we focus on annual earnings forecasts because forecasts of this type are the most commonly issued. For each year, we take the most recent forecast of the annual earnings. As a result, we have for each year one forecast issued by each analyst covering a stock. Our data on U.S. firms come from the Center for Research in Security Prices (CRSP) and COMPUSTAT. From CRSP, we obtain monthly closing stock prices, monthly shares outstanding, and daily and monthly stock returns for NYSE, AMEX, and NASDAQ stocks over the period 1980–2005. From COMPUSTAT, we obtain annual information on corporate earnings, book value of equity, and book value of assets during the same period.6 To be included in our sample, a firm must have the requisite financial data from both CRSP and COMPUSTAT. We follow other studies in focusing on companies’ ordinary shares, that is, companies with CRSP share codes of 10 or 11.

We use the following variables. Analyst forecast bias is the difference between an analyst’s forecast and the actual earnings per share (EPS), divided by the previous year’s stock price. Given that the values of EPS reported by IBES tend to suffer from data errors, we follow the literature and use EPS from COMPUSTAT. Because our analysis is conducted at the stock level, we further aggregate forecast biases and consider the consensus bias, expressed as a mean or median bias among all analysts covering a particular stock, which is denoted by BIASit. This is our main dependent variable of interest.
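The bias construction just described can be sketched in a few lines of pandas. The data and column names (`forecast`, `eps_actual`, `price_prev`) are hypothetical placeholders for illustration, not actual fields from IBES or COMPUSTAT.

```python
import pandas as pd

# Hypothetical analyst-level forecasts for two stocks in one year.
df = pd.DataFrame({
    "stock":      ["A", "A", "A", "B", "B"],
    "year":       [1999] * 5,
    "forecast":   [1.10, 1.20, 1.00, 2.50, 2.70],  # annual EPS forecasts
    "eps_actual": [1.00, 1.00, 1.00, 2.60, 2.60],  # realized EPS (COMPUSTAT)
    "price_prev": [20.0, 20.0, 20.0, 50.0, 50.0],  # previous year's stock price
})

# Analyst-level bias: (forecast - actual EPS) / previous year's stock price.
df["bias"] = (df["forecast"] - df["eps_actual"]) / df["price_prev"]

# Consensus BIAS_it: mean (or median) bias across analysts covering a stock-year.
consensus = df.groupby(["stock", "year"])["bias"].agg(["mean", "median"])
print(consensus)
```

Here stock A's three analysts are optimistic on average (consensus mean bias 0.005, i.e., 50 basis points of the prior-year price), while stock B's two offsetting forecasts yield a consensus bias of zero.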
We also utilize a number of other independent variables. COVERAGEit is the number of analysts covering stock i in year t. LNSIZEit is the natural logarithm of firm i’s market capitalization

6. Our results are similar if we use IBES earnings numbers as opposed to those from COMPUSTAT.


(price times shares outstanding) at the end of year t. SIGMAit is the variance of daily (simple, raw) returns of stock i during year t. RETANNit is the average monthly return on stock i in year t. LNBMit is the natural logarithm of firm i’s book value divided by its market cap at the end of year t. ROEit is firm i’s return on equity in year t. ROE is calculated as the ratio of earnings during year t to the book value of equity. Earnings are calculated as income before extraordinary items available to common stockholders (Item 237), plus deferred taxes from the income statement (Item 50), plus investment tax credit (Item 51). To measure the volatility of ROE (VOLROEit), we estimate an AR(1) model for each stock’s ROE using the past ten-year series of the company’s valid annual ROEs. We calculate VOLROEit as the variance of the residuals from this regression. PROFITit is firm profitability, defined as operating income over book value of assets. SPit is an indicator variable equal to one if the stock is included in the S&P 500 index and zero otherwise. As in earlier studies, stocks that do not appear in IBES are assumed to have no analyst estimates. Following earlier work, we exclude observations (stock-years) in which the stock price is less than five dollars or the mean bias is in the outer tails—the 2.5% left and right tails. We also exclude analyst forecasts whose absolute difference from actual earnings exceeds ten dollars, on the basis that this is likely a coding error.

III. OLS RESULTS

We begin by estimating a pooled OLS regression of the mean and median BIAS on lagged values of COVERAGE and a set of standard control variables, which include LNSIZE, SIGMA, RETANN, LNBM, VOLROE, and PROFIT. We additionally include an S&P 500 index indicator variable (SP500) as well as time and three-digit SIC industry fixed effects and, in some specifications, firm and brokerage-house fixed effects. Standard errors are clustered at the industry level.
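As a rough illustration of this kind of specification, here is a minimal pooled OLS on synthetic data, with year and industry fixed effects entered as dummy variables. All numbers, names, and the data-generating process are invented; a real estimation would add the other controls and cluster standard errors by industry.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 600
year = rng.integers(0, 5, n)          # 5 sample years
industry = rng.integers(0, 10, n)     # 10 hypothetical industry groups
coverage_lag = rng.poisson(21, n).astype(float)  # lagged analyst coverage

# Synthetic consensus bias with a small negative coverage effect
# plus year and industry shifts (all coefficients are made up).
bias = (0.03 - 0.0002 * coverage_lag
        + 0.001 * year + 0.002 * industry
        + rng.normal(0.0, 0.002, n))

def dummies(codes):
    """One-hot encode integer codes, dropping the first category."""
    d = np.zeros((len(codes), codes.max() + 1))
    d[np.arange(len(codes)), codes] = 1.0
    return d[:, 1:]

# Design matrix: intercept, lagged coverage, year dummies, industry dummies.
X = np.column_stack([np.ones(n), coverage_lag, dummies(year), dummies(industry)])
beta, *_ = np.linalg.lstsq(X, bias, rcond=None)
print("coefficient on lagged coverage:", beta[1])  # close to -0.0002 by construction
```

Dropping one dummy per fixed-effect set avoids collinearity with the intercept; with firm fixed effects one would instead demean within firm, which is where the persistence of coverage noted in the text becomes a problem.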
These regressions are based on a sample of stocks in the top 25% of the market capitalization distribution. We restrict ourselves to this sample to facilitate a comparison with the results from our natural experiment.7 The summary statistics for these regressions (time-series averages of cross-sectional means,

7. Qualitatively, the same results hold even using the entire universe. We have replicated these results, which are consistent with those in Lim (2001).


TABLE I
SUMMARY STATISTICS ON THE IBES SAMPLE

                       Cross-sectional   Cross-sectional   Cross-sectional
Variable               mean (1)          median (2)        st. dev. (3)
COVERAGEi,t                21.45             21                 9.57
Mean BIASi,t (%)            2.70              2.10              3.10
Median BIASi,t (%)          2.64              2.01              3.17
Mean FERRORi,t (%)          3.31              2.39              2.93
Median FERRORi,t (%)        3.24              2.26              3.00
FDISPi,t (%)                0.75              0.41              1.02
LNSIZEi,t                   8.38              8.38              1.62
SIGMAi,t (%)               40.72             35.04             21.03
RETANNi,t (%)               1.73              1.49              4.04
LNBMi,t                    −1.02             −0.92              0.88
VOLROEi,t (%)              26.53             10.43             19.79
PROFITi,t (%)              15.48             15.29              9.38

Notes. We consider a sample of stocks covered by IBES during the period 1980–2005 with valid annual earnings forecast records. COVERAGEit is a measure of analyst coverage, defined as the number of analysts covering firm i at the end of year t. Analyst forecast bias (BIAS jt ) is the difference between the forecast analyst j in year t and the actual EPS, expressed as a percentage of the previous year’s stock price. The consensus bias is expressed as a mean or median bias among all analysts covering a particular stock. Analyst forecast error (FERROR jt ) is the absolute difference between the forecast of analyst j in year t and the actual EPS, expressed as a percentage of the previous year’s stock price. The forecast error is expressed as a mean or median bias among all analysts covering a particular stock. FDISPit is analyst forecast dispersion, defined as the standard deviation of all analyst forecasts covering firm i in year t. LNSIZEit is the natural logarithm of firm i’s market capitalization (price times shares outstanding) at the end of year t. SIGMAit is the variance of daily (simple, raw) returns of stock i in year t. RETANNit is the average monthly return on stock i in year t. LNBMit is the natural logarithm of firm i’s book value divided by its market cap at the end of year t. To measure the volatility of ROE (VOLROE), we estimate an AR(1) model for each stock’s ROE using a 10-year series of the company’s valid annual ROEs. ROEit is firm i’s return on equity in year t. ROE is calculated as the ratio of earnings in year t over the book value of equity. We calculate VOLROE as the variance of the residuals from this regression. PROFITit is the profitability of company i at the end of year t, defined as operating income over book value of assets. 
We exclude observations that fall to the left of the 25th percentile of the size distribution, observations with stock prices lower than $5, and those for which the absolute difference between forecast value and the true earnings exceeds $10.

medians, and standard deviations) are reported in Table I. The cross-sectional mean (median) analyst coverage of these stocks is about 21 (21) analysts and the standard deviation across stocks is about 10 analysts. The cross-sectional mean (median) bias is 0.027 (0.021) with a standard deviation of around 0.03. The regression results are presented in Table II. We first present the results for the mean bias with just time and industry fixed effects in column (1), with industry, time, and brokeragehouse fixed effects in column (2), and additionally with firm fixed effects in column (3). In column (1), the coefficient in front of COVERAGE is −0.0002 and is statistically significant at the 5% level of significance. In column (2), the coefficient is also −0.0002 and it is still statistically significant at the 5% level of significance.


TABLE II
REGRESSION OF CONSENSUS FORECAST BIAS ON COMPANY CHARACTERISTICS

                         Mean BIAS                                Median BIAS
Variables/model     (1)          (2)          (3)           (4)          (5)          (6)
COVERAGEi,t−1       −0.0002∗∗    −0.0002∗∗     0.0001       −0.0002∗∗∗   −0.0002∗∗     0.0001
                    (0.0001)     (0.0001)     (0.0001)      (0.0001)     (0.0001)     (0.0001)
LNSIZEi,t−1          0.0028∗∗∗    0.0026∗∗∗    0.0044∗∗∗     0.0028∗∗∗    0.0026∗∗∗    0.0042∗∗∗
                    (0.0009)     (0.0009)     (0.0014)      (0.0008)     (0.0008)     (0.0014)
SIGMAi,t−1          −0.0093      −0.0031       0.0108∗∗∗    −0.0095      −0.0061       0.0108∗∗∗
                    (0.0062)     (0.0058)     (0.0039)      (0.0061)     (0.0059)     (0.0041)
RETANNi,t−1         −0.1000∗∗∗   −0.0986∗∗∗   −0.0368∗∗∗    −0.0986∗∗∗   −0.0995∗∗∗   −0.0367∗∗∗
                    (0.0199)     (0.0199)     (0.0133)      (0.0192)     (0.0198)     (0.0135)
LNBMi,t−1            0.0121∗∗∗    0.0115∗∗∗    0.0053∗∗∗     0.0118∗∗∗    0.0116∗∗∗    0.0049∗∗∗
                    (0.0016)     (0.0015)     (0.0013)      (0.0016)     (0.0015)     (0.0013)
VOLROEi,t−1          0.0062∗∗∗    0.0061∗∗∗    0.0000        0.0060∗∗∗    0.0059∗∗∗    0.0000
                    (0.0019)     (0.0019)     (0.0000)      (0.0019)     (0.0019)     (0.0000)
PROFITi,t−1          0.0579∗∗∗    0.0571∗∗∗    0.0629∗∗∗     0.0578∗∗∗    0.0574∗∗∗    0.0619∗∗∗
                    (0.0095)     (0.0092)     (0.0115)      (0.0098)     (0.0095)     (0.0115)
SP500i,t−1          −0.0110∗∗∗   −0.0110∗∗∗   −0.0017       −0.0110∗∗∗   −0.0110∗∗∗   −0.0026
                    (0.0024)     (0.0024)     (0.0072)      (0.0025)     (0.0024)     (0.0075)
Year fixed effects       Yes          Yes          Yes           Yes          Yes          Yes
Industry fixed effects   Yes          Yes          Yes           Yes          Yes          Yes
Brokerage fixed effects  No           Yes          Yes           No           Yes          Yes
Firm fixed effects       No           No           Yes           No           No           Yes
Observations             9,313        9,313        9,313         9,313        9,313        9,313

Notes. The dependent variable is BIAS, defined as a consensus forecast bias of all analysts tracking stock i in year t. Forecast bias is the difference between the forecast of analyst j in year t and the actual EPS, expressed as a percentage of the previous year’s stock price. The consensus is obtained either as a mean or median bias. COVERAGEi,t is a measure of analyst coverage, defined as the number of analysts covering firm i at the end of year t. LNSIZEi,t is the natural logarithm of firm i’s market capitalization (price times shares outstanding) at the end of year t. SIGMAit is the variance of daily (simple, raw) returns of stock i during year t. RETANNi,t is the average monthly return on stock i in year t. LNBMi,t is the natural logarithm of firm i’s book value divided by its market cap at the end of year t. To measure the volatility of ROE (VOLROE), we estimate an AR(1) model for each stock’s ROE using a 10-year series of the company’s valid annual ROEs. ROEi,t is firm i’s return on equity in year t. ROE is calculated as the ratio of earnings in year t over the book value of equity. VOLROE is the variance of the residuals from this regression. PROFITi,t is the profitability of company i at the end of year t, defined as operating income over book value of assets. SP500i,t is an indicator variable equal to one if stock i is included in the S&P500 index in year t. We exclude all observations that fall to the left of the 25th percentile of the size distribution, observations with stock prices lower than $5, and those for which the absolute difference between forecast value and the true earnings exceeds $10. All regressions include three-digit SIC industry fixed effects and year fixed effects. Some specifications also include brokerage house and firm fixed effects. Standard errors (in parentheses) are clustered at the industry level. ∗∗∗ , ∗∗ , ∗ 1%, 5%, and 10% statistical significance.

However, the coefficient turns positive and statistically nonsignificant when we include firm fixed effects. Because coverage is fairly persistent, it may be that a fixed-effects approach is not picking up the right variation, in contrast to the cross-sectional approach. So depending on the controls used, we find that a decrease in coverage by one analyst leads to an increase in bias of anywhere from 0.0002 (two basis points) to none. The bias for a typical stock is about


0.027 (2.7%) with a standard deviation across stocks of about 0.03 (3%). Hence, these estimates obtained from cross-section regressions suggest only a small increase in bias of about zero to sixty basis points as a fraction of the cross-sectional standard deviation of bias as we decrease coverage by one analyst, though some are very precisely measured. The results using the median bias instead of the mean bias are reported in columns (4), (5), and (6). Again, there is little difference in the coefficient on COVERAGE. The other control variables also come in significantly in these regressions. Bias increases with firms’ sizes, book-to-market ratios, volatilities of return on equity, and profits. Bias is lower for firms with high returns and for firms in the S&P 500 index. The sign on stock return volatility is ambiguous, depending on whether firm fixed effects are included. Of course, as we explained in the Introduction, these OLS regressions are difficult to interpret because of the endogeneity of analyst coverage. If stocks that attract lots of coverage are stocks that analysts are likely to be excited about, then these OLS estimates are biased downward. In contrast, if stocks covered by only a few analysts are likely under-the-radar stocks that analysts have to be very excited about to initiate coverage on, then these OLS estimates of the competition effect are biased upward. Estimating this regression using stock fixed effects is not an adequate solution to the endogeneity critique because analyst coverage tends to be a fairly persistent variable and analysts drop coverage on stocks when the stock is no longer doing well (e.g., McNichols and O’Brien [1997]). Hence, we rely on a natural experiment to sort out these endogeneity issues. We use mergers of brokerage houses as our experiment, on the premise that mergers typically lead to a reduction in analyst coverage on the stocks that were covered by both the bidder and target firms premerger. 
If a stock is covered by both houses before the merger, the merged house typically gets rid of at least one of the two analysts, usually the target’s analyst. It is to this experiment that we now turn.

IV. BACKGROUND ON MERGERS

We begin by providing some background on these mergers. We identify mergers among brokerage houses by relying on information from the SDC Mergers and Acquisitions database. We start with the sample of 32,600 mergers of financial institutions. Next,


we choose all the mergers in which the target company belongs to the four-digit SIC code 6211 (“Investment Commodity Firms, Dealers, and Exchanges”). This screen reduces our sample to 696 mergers. Subsequently, we manually match all the mergers with IBES data. This match identifies 43 mergers with both bidder and target covered by IBES. Finally, we select only those mergers where both merging houses analyze at least two of the same stocks—otherwise, there is little scope for our instrumental variables approach below. With this constraint, our search produces fifteen mergers, which we break down by the parties involved: bidder and target. We provide further details about these mergers in the Appendix.

Of the fifteen mergers, six are particularly big in the sense that the merging houses both tend to be big firms and have premerger coverage of a large number of the same stocks. The first of these big mergers is Merrill Lynch acquiring, on September 10, 1984, a troubled Becker Paribas that was having problems with its own earlier merger with another firm. The second is Paine Webber acquiring Kidder Peabody on December 31, 1994. Kidder was in trouble and had fired a good part of its workforce before the merger in the aftermath of a major trading scandal involving its government bond trader, Joseph Jett. Kidder’s owner, General Electric, wanted to sell the company, and Paine Webber (a second-tier brokerage house) wanted to buy a top-tier investment bank with a strong research department. The third is Morgan Stanley acquiring Dean Witter Reynolds on May 31, 1997. Morgan Stanley was portrayed as wanting to get in on the more down-market retail brokerage operations of Dean Witter. The fourth is Smith Barney (Travelers) acquiring Salomon Brothers on November 28, 1997. This is viewed as a synergy play led by Sandy Weill. The fifth and sixth mergers involved Swiss banks trying to diversify their lines of business geographically into the American market.
These two mergers happened within a few months of each other. Credit Suisse First Boston acquired Donaldson Lufkin and Jenrette on October 15, 2000. A few months later, on December 10, 2000, UBS acquired Paine Webber. The anecdotal descriptions of the motivations for these mergers provide comfort regarding our proposed experiment: the mergers generate a change in competition that is unrelated to underlying unobservable determinants of bias in the covered stocks. In Table III, we provide a number of key statistics regarding all fifteen mergers. In Panel A, we summarize the names, the


TABLE III
DESCRIPTIVE STATISTICS FOR MERGERS

Panel A: Mergers used in the analysis and stocks covered

Merger                                                         Merger      # stocks  # stocks  # stocks (bidder
number  Bidder / Target                                        date        (bidder)  (target)  and target)
  1     Merrill Lynch / Becker Paribas                         9/10/1984      762      288        173
  2     Paine Webber / Kidder Peabody                          12/31/1994     659      545        234
  3     Morgan Stanley / Dean Witter Reynolds                  5/31/1997      739      470        251
  4     Smith Barney (Travelers) / Salomon Brothers            11/28/1997     914      721        327
  5     Credit Suisse First Boston / Donaldson Lufkin
        and Jenrette                                           10/15/2000     856      595        307
  6     UBS Warburg Dillon Read / Paine Webber                 12/10/2000     596      487        213
  7     Chase Manhattan / JP Morgan                            12/31/2000     487      415         80
  8     Wheat First Securities / Butcher & Co., Inc.           10/31/1988     178       66          8
  9     EVEREN Capital / Principal Financial Securities        1/9/1998       178      142         17
 10     DA Davidson & Co. / Jensen Securities                  2/17/1998       76       53          8
 11     Dain Rauscher / Wessels Arnold & Henderson             4/6/1998       360      135         26
 12     First Union / EVEREN Capital                           10/1/1999      274      204         21
 13     Paine Webber / JC Bradford                             6/12/2000      516      182         28
 14     Fahnestock / Josephthal Lyon & Ross                    9/18/2001      117       91          5
 15     Janney Montgomery Scott / Parker/Hunter                3/22/2005      116       54         10

TABLE III (CONTINUED)

Panel B: Career outcomes of analysts after mergers

                                           # analysts       # analysts after merger
Merger                                                      Retained  Left to   Exited
number  Brokerage house                    Prior   After    in new    another   sample    New
                                                            house     house     (fired)   analysts
  1     Merrill Lynch                       90      98        84        0         5        13
  1     Becker Paribas                      27      —          1       11        15        —
  2     Paine Webber                        50      57        42        1         7         6
  2     Kidder Peabody                      51      —          9       28        14        —
  3     Morgan Stanley                      70      92        61        2         7        26
  3     Dean Witter Reynolds                35      —          5       16        14        —
  4     Smith Barney (Travelers)            91     140        70        6        15        27
  4     Salomon Brothers                    76      —         43       20        13        —
  5     Credit Suisse First Boston         120     146        93        5        22        35
  5     Donaldson Lufkin Jenrette           77      —         18       17        42        —
  6     UBS Warburg Dillon Read             94     118        80        5         9         0
  6     Paine Webber                        64      —         38        8        17        —
  7     Chase Manhattan                     64     106        48        5        11        24
  7     JP Morgan                           50      —         34        1        15        —
  8     Wheat First Securities              13      21        13        0         0         8
  8     Butcher & Co., Inc.                 13      —          3        3         7        —
  9     EVEREN Capital                      27      31        21        4         2         8
  9     Principal Financial Securities      18      —          2        6        10        —
 10     DA Davidson & Co.                    6       8         4        1         1         0
 10     Jensen Securities                    4      —          4        0         0        —
 11     Dain Rauscher                       39      36        19        9        11         6
 11     Wessels Arnold & Henderson          15      —         11        0         4        —
 12     First Union                         35      54        26        2         7        16
 12     EVEREN Capital                      32      —         12       10        10        —
 13     Paine Webber                        54      55        37        9         8        18
 13     JC Bradford                         22      —          0       14         8        —
 14     Fahnestock                          14      16         7        1         6         9
 14     Josephthal Lyon & Ross              14      —          0        5         9        —
 15     Janney Montgomery Scott             13      15        11        1         1         3
 15     Parker/Hunter                        5      —          1        0         4        —


TABLE III (CONTINUED)

Panel C: Percentage of stocks covered by analysts from bidder and target houses after mergers

Merger    % stocks (bidder) (1)    % stocks (target) (2)
  1            85.7                      1.1
  2            73.7                     15.8
  3            66.3                      5.4
  4            50.0                     30.7
  5            63.7                     12.3
  6            67.8                     32.3
  7            45.3                     32.1
  8            61.9                     14.3
  9            67.7                      6.5
 10            50.0                     50.0
 11            52.8                     30.6
 12            48.1                     22.2
 13            67.3                      0
 14            43.8                      0
 15            73.3                      6.7

Notes. Panel A includes the names of the brokerage houses involved in the mergers, the date of each merger, and the number of stocks covered by either brokerage house, or by both, prior to the merger. Panel B breaks down the merger information at the analyst level: we report the number of analysts employed by the merging brokerage houses before and after the merger, as well as detailed information on the career outcomes of the analysts after the merger. Panel C calculates the percentage of stocks that continue to be covered by an analyst from the bidder or the target house after the merger. We restrict the sample of stocks to those covered by both the bidder and the target house.

dates, and the number of stocks covered by the bidder and target individually and the overlap in the coverage. For instance, in the merger involving Paine Webber and Kidder Peabody, Paine Webber covered 659 stocks and Kidder covered 545 stocks. There was a 234-stock overlap in their coverage. As a result, the merger leads to a potential decrease of around one analyst for a large number of stocks. The size of our treatment sample, the number of firms covered by both merging houses, ranges from a low of five stocks in the merger involving Fahnestock and Josephthal Lyon and Ross to a high of 327 stocks in the Smith Barney and Salomon Brothers deal. Notice that the big six mergers described above give us much of the variation in terms of the number of treatment stocks. In total, we have a significant treatment sample with which to identify our effect. To better support the premise that mergers lead to less analyst coverage in the treatment sample via job turnover, we


examine career outcomes of analysts employed by merging houses. Panel B presents the results with the breakdown of career outcomes of analysts employed by both the bidder and target houses. A few observations can be noted. First, the big mergers affected a very significant number of analysts. The largest of the mergers— between Credit Suisse First Boston and Donaldson Lufkin and Jenrette—affected almost 200 analysts. The smallest merger in terms of analysts affected is Davidson and Jensen with ten. Given that in our sample the average brokerage house employs approximately fifteen analysts, a number of our mergers constituted important events in the analyst industry. Second, as expected, mergers generally reduce the number of analysts covering stocks. For example, the two brokerage houses involved in the second merger, Paine Webber and Kidder Peabody, employed a total of 101 analysts prior to the merger. After Paine Webber acquired Kidder Peabody, employment in the joint entity decreased to 57 analysts. Third, the majority of the employment reduction comes from the closure of the target house. In particular, out of 51 analysts employed by Kidder, only nine were retained in the new company, and 28 left for a different house, whereas fourteen exited the sample, which we interpret as a firing decision. In Panel C, we confirm more precisely that for stocks covered by both houses premerger, it is usually the analyst in the bidding house who remains, whereas the target analyst is let go. In the first column of Panel C, we report, for the treatment sample of stocks covered by both houses, the fraction covered by the bidder analyst after the merger. In the second column, we report the fraction of the treatment sample covered by the target analyst after the merger.
In the Paine Webber and Kidder merger, for stocks covered premerger by both houses, it is indeed the redundant target analyst who gets fired: the corresponding figures are 73.7% for the bidder analysts and only 15.8% for the target analysts. Similarly large gaps exist for most of the other mergers. The gap is much smaller in the Davidson and Jensen merger: 50% for the bidder and 50% for the target. Nonetheless, from Panel B, it still appears that fewer analysts worked for the merged entity than for the two houses beforehand.

V. EMPIRICAL DESIGN

Our analysis of the effect of competition on analyst forecast bias utilizes a natural experiment involving brokerage house


mergers. The outcome of this process is a reduction in the number of analysts employed by the combined entity relative to the total number employed by the bidder and target prior to the merger. As a result, the number of analysts covering a stock that was covered by both houses before the merger (our treatment sample) should drop as one of the redundant analysts is let go or reallocated to another stock (or perhaps both are let go), and thus competition in the treatment sample decreases. The questions then are whether there is a decrease in competition among analysts around merger events and whether this decrease is associated with an economically significant effect on average consensus bias. Our empirical methodology requires that we specify a representative window around the merger events. In choosing the estimation window, we face a trade-off that most other event studies, which focus on a very narrow window, do not. As in most event studies, choosing too long a window may incorporate information that is not really relevant to the event under consideration. But in our case, choosing too short a window means we may lose observations, because analysts may not issue forecasts on the same date or with the same frequency. We want a window long enough to capture the change in the performance of all analysts before and after the merger. To this end, we use a two-year window, with one year of data on each side of the event. Most analysts will typically issue at least one forecast within a twelve-month window. Given that in each of the two windows an analyst could issue more than one forecast, we retain only the forecast with the shortest time distance from the merger date. In addition, because we are interested in the effect of the merger on various analyst characteristics, we require that each stock be present in both windows around the merger.
As a result, for every stock we retain only two observations, one in each window around the event. Having chosen one year before and one year after the merger event, one then has to account for the fact that coverage and the average stock bias may vary from one year to the next. In other words, to identify how the merger affected coverage in the stocks covered by both houses premerger, and how the bias in these stocks then changed, one needs to account for natural year-to-year changes in coverage and bias for these stocks.
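The forecast-selection rule just described (keep, for each analyst and stock, only the forecast closest to the merger date within each one-year window) can be sketched as follows; the records and field names here are hypothetical placeholders, not the paper's IBES extract:

```python
from datetime import date

# Hypothetical forecasts: (stock, analyst, forecast_date, forecast_value)
forecasts = [
    ("IBM", "a1", date(2000, 3, 1), 1.10),
    ("IBM", "a1", date(2000, 9, 20), 1.25),   # closer to the merger date
    ("IBM", "a2", date(2000, 2, 11), 1.05),
]
merger = date(2000, 10, 15)

def window(d, event):
    """Assign a forecast to the pre- or post-event one-year window (None if outside)."""
    gap = (d - event).days
    if -365 <= gap < 0:
        return "pre"
    if 0 <= gap <= 365:
        return "post"
    return None

# Keep, per (stock, analyst, window), only the forecast nearest the merger date
nearest = {}
for stock, analyst, d, value in forecasts:
    w = window(d, merger)
    if w is None:
        continue
    key = (stock, analyst, w)
    dist = abs((d - merger).days)
    if key not in nearest or dist < nearest[key][0]:
        nearest[key] = (dist, value)

print({k: v[1] for k, v in nearest.items()})
```

Here analyst a1's September forecast replaces the March one, since it lies closer to the event date within the premerger window.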


A standard approach to dealing with these time trends is the difference-in-differences (DID) methodology. In this approach, the sample of stocks is divided into treatment and control groups. In the context of our paper, the treatment group includes all stocks that were covered by both brokerage houses before the merger. The control group includes all the remaining stocks. If we denote the average observed characteristic in the treatment (T) and control (C) groups in the pre- and postevent periods by C_{T,1}, C_{T,2}, C_{C,1}, and C_{C,2}, respectively, the partial effect of change due to the merger can be estimated as (1)

DID = (C_{T,2} − C_{T,1}) − (C_{C,2} − C_{C,1}).

Here the characteristic might be analyst coverage or bias. By comparing the time changes in the means for the treatment and control groups, we allow for both group-specific and time-specific effects. This estimator is unbiased under the condition that the merger is not systematically related to other factors that affect C. A potential concern with the above estimator is that the treatment and control groups may differ significantly from each other, so that the partial effect additionally captures differences in the characteristics of the two groups. For example, the average stocks in the two groups may differ in terms of their market capitalizations, value characteristics, or past returns. For instance, it might be that good recent returns lead analysts both to cover a stock and to be more optimistic about it. Hence, we want to make sure that past returns of the stocks in the treatment and control samples are similar. We are also worried that high-coverage stocks may simply differ from low-coverage stocks for reasons unrelated to our competition effect, so we also want to keep the premerger coverage characteristics of our treatment sample similar to those of our control sample. To account for such systematic differences across the two samples, we use a matching technique similar to that used in the context of IPO event studies or characteristic-based asset pricing. In particular, each stock in the treatment sample is matched with its own benchmark portfolio obtained using the sample of stocks in the control group. We expect these controls to do a better job of capturing our true effect by netting out unobserved heterogeneity.
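As a minimal illustration of the estimator in equation (1), with made-up numbers rather than the paper's data:

```python
from statistics import mean

def did(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Difference-in-differences: (C_T,2 - C_T,1) - (C_C,2 - C_C,1)."""
    return (mean(treat_post) - mean(treat_pre)) - (mean(ctrl_post) - mean(ctrl_pre))

# Hypothetical analyst-coverage counts, for illustration only
treat_pre, treat_post = [20, 22, 18], [19, 21, 17]   # treated stocks lose ~1 analyst
ctrl_pre, ctrl_post = [15, 17], [15, 17]             # control stocks unchanged
print(did(treat_pre, treat_post, ctrl_pre, ctrl_post))  # -> -1.0
```

Differencing twice removes both the fixed gap between the groups and any common year-to-year shift, which is exactly the role the estimator plays in the text.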


To construct the benchmark, we first sort stocks into tercile portfolios according to their market capitalizations. Next, we sort stocks within each size portfolio according to their book-to-market ratios. This sort results in nine different benchmark portfolios. Further, we sort stocks in each of the nine portfolios into tercile portfolios according to their past returns, which results in 27 different benchmark portfolios. Finally, we sort stocks in each of the 27 portfolios into tercile portfolios according to their analyst coverage. Overall, our benchmark includes 81 portfolios. Using the above benchmark specification, we then construct the benchmark-adjusted DID estimator (BDID). In particular, for each stock i in the treatment sample, the partial effect of change due to merger is calculated as the difference between two components, (2)

BDID_i = (C_{T,2}^i − C_{T,1}^i) − (BC_{C,2}^i − BC_{C,1}^i),

where the first component is the difference in the characteristics of stock i in the treatment sample moving from the premerger to the postmerger period. The second component is the difference in the average characteristics of the benchmark portfolios that are matched to stock i along the size/value/momentum/coverage dimensions. In general, the results are comparable if we use benchmarks matched along any subset of these characteristics. To assess the average effect for all stocks in the treatment sample, one can then take the average of all individual BDIDs. One final issue that we need to account for is that a few of the mergers occurred within several months of each other (e.g., the fifth and sixth mergers occurred on October 15, 2000, and December 10, 2000, respectively). As a result, it might be difficult to separate out the effects of these two mergers individually. As the baseline case, we decided for simplicity to treat each merger separately in our analysis. However, we have also tried robustness checks in which we group mergers occurring close together in time and treat them as one merger. For instance, we consider the one-year window before the fifth merger on October 15, 2000, as the premerger period and the one-year window after the sixth merger on December 10, 2000, as the postmerger period. As a result, the treatment sample is the union of the 307 stocks jointly covered by Credit Suisse and DLJ and the 213 stocks covered by UBS and Paine Webber. There is potentially some overlap of these two subsets of stocks, and hence it might be the case that some of
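The sequential tercile sorts behind the 81 benchmark portfolios can be sketched as follows. The characteristics here are random placeholders, and with exactly 81 stocks each portfolio ends up with a single member, whereas in the actual sample the portfolios contain many stocks:

```python
import random

random.seed(0)

def tercile(values):
    """Label each value 0/1/2 by its tercile rank within the group."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    labels = [0] * len(values)
    for rank, i in enumerate(order):
        labels[i] = (3 * rank) // len(values)
    return labels

def nested_terciles(stocks, chars):
    """Sort stocks into terciles on each characteristic in turn,
    conditioning every sort on the buckets formed so far (3**len(chars) portfolios)."""
    keys = {s: () for s in stocks}
    for char in chars:
        buckets = {}
        for s in stocks:
            buckets.setdefault(keys[s], []).append(s)
        for members in buckets.values():
            labels = tercile([char[s] for s in members])
            for s, lab in zip(members, labels):
                keys[s] = keys[s] + (lab,)
    return keys

stocks = list(range(81))
size = {s: random.random() for s in stocks}   # market capitalization (placeholder)
bm = {s: random.random() for s in stocks}     # book-to-market (placeholder)
ret = {s: random.random() for s in stocks}    # past return (placeholder)
cov = {s: random.random() for s in stocks}    # analyst coverage (placeholder)

portfolio = nested_terciles(stocks, [size, bm, ret, cov])
print(len(set(portfolio.values())))  # -> 81 size/BM/momentum/coverage portfolios
```

Each treatment stock would then be compared against the average characteristic of its matched portfolio, as in the BDID formula above.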


TABLE IV
SUMMARY STATISTICS FOR THE TREATMENT SAMPLE

                            Cross-sectional   Cross-sectional   Cross-sectional
Variable                    mean (1)          median (2)        st. dev. (3)
COVERAGE_i,t                    21.12             20                 9.45
Mean BIAS_i,t (%)                2.79              2.24              3.10
Median BIAS_i,t (%)              2.74              2.21              3.19
Mean FERROR_i,t (%)              3.40              2.52              2.90
Median FERROR_i,t (%)            3.33              2.43              2.99
FDISP_i,t (%)                    0.75              0.40              0.94
LNSIZE_i,t                       8.39              8.37              1.60
SIGMA_i,t (%)                   41.00             35.86             21.02
RETANN_i,t (%)                   1.74              1.52              4.13
LNBM_i,t                        −1.03             −0.92              0.91
VOLROE_i,t (%)                  25.32              9.89             43.40
PROFIT_i,t (%)                  15.52             15.25              9.22

Notes. We consider all stocks covered by the two merging brokerage houses around the one-year merger event window. COVERAGE_it is a measure of analyst coverage, defined as the number of analysts covering firm i at the end of year t. Analyst forecast bias (BIAS_jt) is the difference between the forecast of analyst j at time t and the actual EPS, expressed as a percentage of the previous year’s stock price. The consensus bias is expressed as the mean or median bias among all analysts covering a particular stock. Analyst forecast error (FERROR_jt) is the absolute difference between the forecast of analyst j at time t and the actual EPS, expressed as a percentage of the previous year’s stock price. The forecast error is expressed as the mean or median error among all analysts covering a particular stock. FDISP_it is analyst forecast dispersion, defined as the standard deviation of all analyst forecasts for firm i at time t. LNSIZE_it is the natural logarithm of firm i’s market capitalization (price times shares outstanding) at the end of year t. SIGMA_it is the variance of daily (simple, raw) returns of stock i during year t. RETANN_it is the average monthly return on stock i during year t. LNBM_it is the natural logarithm of firm i’s book value divided by its market capitalization at the end of year t. To measure the volatility of ROE (VOLROE), we estimate an AR(1) model for each stock’s ROE using a ten-year series of the company’s valid annual ROEs; ROE_it is firm i’s return on equity in year t, calculated as the ratio of earnings during year t to the book value of equity. We calculate VOLROE as the variance of the residuals from this regression. PROFIT_it is the profitability of company i at the end of year t, defined as operating income over book value of assets. We exclude observations with stock prices lower than $5 and those for which the absolute difference between the forecast value and the true earnings exceeds $10.

these stocks will experience a greater decline in analyst coverage to the extent that they have more than two redundant analysts. However, these alterations do not affect our baseline results. Table IV presents summary statistics for the treatment sample in the two-year window around the merger. The characteristics of the treatment sample are similar to those reported in Table I for the OLS sample. For instance, the coverage is about 21 analysts for the typical stock. The mean bias is 2.79% with a standard deviation of around 3.10%. These figures, along with those of the control variables, are fairly similar across these two samples. This provides comfort that we can then relate the economic effect of competition obtained from our treatment sample to the OLS estimates presented in Table II.


VI. RESULTS

VI.A. Analyst Coverage and Optimism Bias

We first verify the premise of our natural experiment by measuring the change in analyst coverage for the treatment sample from the year before the merger to the year after. We expect these stocks to experience a decrease in coverage. Panel A of Table V (column (1)) reports the results of this analysis. We present the DID estimator for coverage using our benchmarking technique (size, book-to-market, return, and coverage matched). We observe a discernible drop in coverage due to the merger of around 1.02 analysts using the DID estimator, and a drop of between one and two analysts is in line with our expectations. This effect is significant at the 1% level. One can think of this finding as essentially the first stage of our estimation. The effect is economically and statistically significant in the predicted direction, and hence confirms the premise of our natural experiment. We will focus on this number in our discussion of the economic effect of competition below. We next look at how the optimism bias changes for the treatment sample across the mergers. These results are also presented in Panel A of Table V, in column (2) for the mean BIAS and in column (3) for the median BIAS. Using the DID estimator, we find an increase in optimism bias of 0.0013 for the mean bias (significant at the 10% level) and 0.0016 for the median bias (significant at the 5% level). Using the estimates obtained above, a conservative estimate is that the mean optimism bias increases by about thirteen basis points as a result of reducing coverage by one analyst. As we mentioned earlier, the sample for the natural experiment is similar to that of the OLS by construction—the typical stock has a bias of around 2.7% and the standard deviation of the optimism bias is also around 3%.
This means that the estimate of the competitive effect from our natural experiment is approximately six to seven times as large as that from the OLS estimates. This is a sizable difference and suggests that the OLS estimates are biased downward, consistent with the documented selection bias that stocks that attract lots of coverage are likely to have more optimistic analysts. One could argue that our mean bias effect might be driven by selection through which one of the two analysts from the


TABLE V
CHANGE IN STOCK-LEVEL COVERAGE AND BIAS: DID ESTIMATOR

Panel A: Coverage and bias
                                   Coverage (1)   Mean BIAS (2)   Median BIAS (3)
SIZE/BM/RET/NOAN-matched           −1.021∗∗∗       0.0013∗         0.0016∗∗
                                   (0.179)         (0.0007)        (0.0008)

Panel B: Change in forecast bias: DID estimator without analysts from merging houses
                                   Mean BIAS (1)   Median BIAS (2)
SIZE/BM/RET/NOAN-matched           0.0011∗∗        0.0012∗∗
                                   (0.0005)        (0.0005)

Panel C: Change in forecast bias: Conditioning on initial coverage
                                                   Mean BIAS (1)   Median BIAS (2)
SIZE/BM/RET/NOAN-matched (coverage ≤ 5)            0.0078∗∗        0.0096∗∗
                                                   (0.0036)        (0.0044)
SIZE/BM/RET/NOAN-matched (coverage > 5 and ≤ 20)   0.0017∗         0.0020∗
                                                   (0.0011)        (0.0011)
SIZE/BM/RET/NOAN-matched (coverage > 20)           0.0003          0.0007
                                                   (0.0013)        (0.0013)
Notes. N = 1,656 in Panels A and B. We measure analyst coverage as the number of analysts covering firm i at the end of year t. For all mergers, we split the sample of stocks into those covered by both merging brokerage houses (treatment sample) and those not covered by both houses (control sample). We also divide observations into a premerger period and a postmerger period (a one-year window for each period). For each period we construct benchmark portfolios using the control sample based on stocks’ size (SIZE), book-to-market ratio (BM), past year’s average returns (RET), and analyst coverage (NOAN), with three portfolios in each category. Each stock in the treatment sample is then assigned to its own benchmark (SIZE/BM/RET/NOAN-matched). Next, for each period, we calculate the cross-sectional average of the differences in the characteristic of interest across all stocks in the treatment sample and their respective benchmarks. Finally, we calculate the difference in these differences between the postevent period and pre-event period (DID estimator). Our sample excludes observations with stock prices lower than $5 and those for which the absolute difference between the forecast value and the true earnings exceeds $10. In Panel B, we exclude from our sample all analysts employed by the merging houses. Panel C presents our results by cuts on initial coverage in three groups: lowest coverage (≤5), medium coverage (>5 and ≤20), and highest coverage (>20). Standard errors (in parentheses) are clustered by merger grouping. ∗∗∗, ∗∗, ∗ denote 1%, 5%, and 10% statistical significance, respectively.

merging firms covering the stock gets fired. It might be that the less optimistic analyst gets fired and hence the bias might be higher as a result. Another possibility could be that analysts employed by the merging houses may compete for the job in the new merged house and thus they may strategically change their reporting behavior.


We deal with these issues in two ways. The first one, because we have turnover data, is simply to check whether the merging brokerage houses selectively fire analysts who are less optimistic. We do not find such a selection bias. The second and more direct way to deal with this concern is to look only at the change in the bias for the analysts covering the same stocks but not employed by the merging firms. The findings are in Panel B of Table V. We report the change in bias for the treatment sample, but now the bias is calculated using only the forecasts of the analysts not employed by the merging houses. The figures are very similar to the main findings—only slightly smaller in some instances by a negligible amount. The mean bias increases by eleven basis points, the median bias increases by twelve basis points, and both are significant at the 5% level. Collectively, these findings provide comfort that our main results are not spuriously driven by outliers or by selection biases. We next test a key auxiliary prediction that will further buttress our identification strategy. We check whether the competition effect is more pronounced for stocks with smaller analyst coverage. The idea is that the more analysts cover a stock, the less the loss of an additional analyst matters, akin to the Cournot view of competition. For instance, in the independence rationale of Gentzkow and Shapiro, when there are already many analysts, losing one would not change much the likelihood of drawing an independent analyst. In contrast, when there are only a few analysts to begin with, losing one analyst could really affect the likelihood of getting an independent supplier of information. However, note that if collusion is possible, then we might expect a nonlinear relationship between bias and coverage. Suppose that collusion is easier when there are only a few analysts. Under this scenario, going from one to two analysts may not have an effect because the two can collude. 
And we might find more of an effect when going from five to six analysts if the sixth analyst does not collude. With collusion, then, we might expect the biggest effect for stocks covered by a moderate number of analysts—that is, an inverted U-shape, with the effect being greatest for medium-coverage stocks. We examine this issue in Panel C of Table V using the same DID framework as before. We divide initial coverage into three groups: less than or equal to five analysts, between six and twenty analysts, and more than twenty analysts. Column (1) reports the results using the mean bias. We expect and find that the effect is


significantly smaller when many analysts cover the stock. The effect is greatest for the first group (less than or equal to five analysts): the mean bias increases by 78 basis points and the median bias by 96 basis points, and both are significant at the 5% level. The next largest effect is in the second group (more than five and less than or equal to twenty): the mean bias increases by seventeen basis points and the median bias by twenty basis points, and both are significant at the 10% level. Finally, the effect is much smaller for the highest-coverage group: the mean bias increases by three basis points and the median bias by seven basis points, and neither of these point estimates is statistically significant. In sum, the evidence is remarkably comforting, as it conforms well to our priors that competition matters more when there are fewer analysts around. This result reassures us that our estimate is a sensible one.8 Next, we delve deeper into our results in Table V by analyzing plots in event time of the change in coverage and bias. This also allows us to gauge the robustness of the parallel-trend assumption required for the difference-in-differences approach. To this end, we consider an event window with three periods before and three periods after the merger. For each event-window date, we calculate the difference in coverage and median bias between the treatment and control groups and plot it against event time. The left-hand side of Figure I presents the results for coverage and the right-hand side for bias. We report the results separately for the three subgroups of stocks sorted according to coverage (Panels A–C) and then for the entire sample (Panel D). We do not find support for pretrends driving our results. In particular, both for coverage and for bias, the difference between the treatment and control groups is stable before the event date, and also after the event date. Moreover, we confirm the results from Panel C of Table V.
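The initial-coverage bins used in Panel C of Table V amount to a simple classification; a minimal sketch:

```python
def coverage_group(n_analysts):
    """Bin a stock's premerger analyst coverage as in Panel C of Table V."""
    if n_analysts <= 5:
        return "low"       # coverage <= 5: largest competition effect
    if n_analysts <= 20:
        return "medium"    # 5 < coverage <= 20: intermediate effect
    return "high"          # coverage > 20: effect close to zero

print([coverage_group(n) for n in (3, 5, 6, 20, 21)])
# -> ['low', 'low', 'medium', 'medium', 'high']
```

The treatment-sample DID estimator would then be computed separately within each bin, which is how the monotone decline of the effect in coverage is established.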
Consider Panel A of Figure I, which shows the results for the low-coverage stocks. Note that the mergers cause a relative drop of about one analyst on the event date (from time −1 to time 0) and an increase in bias of about eighty basis points for low-coverage stocks (from time −1 to time 0). Our matching in terms of premerger coverage characteristics is nearly perfect here, and there is also little difference in the

8. The results are not affected by the particular cutoff levels for the number of analysts; they generally decline in a nonlinear way as coverage increases.

FIGURE I Trend of Analyst Coverage and Bias in the Treatment Sample (Net of Control) We show the trend of average analyst coverage and forecast bias in the treatment sample net of the control group (in a given year) up to three years before and after the merger event. Panel A is for stocks with low analyst coverage (less than six analysts); Panel B is for stocks with medium analyst coverage (six to twenty analysts); Panel C is for stocks with high analyst coverage (above twenty analysts). Panel D documents aggregate results. Dotted lines illustrate 95% confidence intervals.


premerger mean bias between the treatment and control groups. There are no discernible pretrends or posttrends. We also observe a similar degree of statistical significance, as illustrated by the two-standard-deviation bands represented by dotted lines. Panel B shows the results for the medium-coverage stocks. Again, the matching in premerger coverage characteristics is nearly perfect, with the treatment sample having slightly higher premerger coverage, by about one-half analyst, than the control sample; there is likewise nearly perfect matching in the premerger mean bias between the treatment and control samples. We see a drop of about one analyst on the event date and an increase in bias of about twenty basis points. Importantly, there are no discernible pretrends, though there is a slight posttrend in coverage, which continues to fall for a couple of years after the event date. But by and large, the figures in Panel B support the validity of the experiment. In Panel C, we look at the high-coverage stocks. Notice here that the treatment sample has much higher coverage than the control sample. Part of the reason is that the mergers in our sample involve big brokerage houses, which cover very big stocks; as a result, it is difficult to find matches for these big stocks as close as those for the lower-coverage stocks. This might explain why the premerger bias of the treatment sample is slightly lower than that of the control group, which would be very much consistent with the competition mechanism. We do not believe this difference contributes any biases to our merger-related estimates. Apart from this difference, we see roughly the same picture of a one-analyst drop on the event date and a slight increase in bias, which is not significantly different from zero. The picture that emerges is similar to that of Panel B. Finally, in Panel D, we draw the same pictures using the entire sample.
We see no significant pretrends in either coverage or bias and an observable event-day drop in coverage and increase in bias. These figures provide comfort that our results in Table V are not driven by pretrends.

VI.B. Validity of the Natural Experiment

The economic significance of our results depends strictly on the validity of our natural experiment. Although the results in Figure I are a start in the direction of comforting us on this validity, in this section we report a number of


further tests, which collectively provide strong support for our experiment. First, we separately estimate our effect using the six biggest mergers. The results are very similar in that the conservative estimates are a one-analyst drop in coverage associated with a 0.0017 increase in bias. Further, we estimate our effect separately for each of the fifteen mergers. Each of the fifteen mergers experienced a decline in coverage using the most conservative DID estimate. Hence, our result is not driven by outliers—there is a distinct coverage drop with mergers. Clearly, the fact that fifteen out of fifteen mergers experienced drops suggests that our effect is robustly significant in a nonparametric sense. Similarly, we find that twelve (thirteen) of the fifteen mergers experienced an increase in mean (median) bias using the most conservative DID estimate. It is important to emphasize that because these mergers occur throughout our entire sample, our effects are not due to any particular macroeconomic event such as a recession or boom. Second, given that the optimism measure is constructed as a difference between an analyst forecast and actual earnings, one could worry that our results are driven by differences in the actual earnings and not in reported forecasts. To rule out such a possibility, we test whether merger events lead to differential changes in earnings between treatment and control groups. Panel A of Table VI reports the results separately for the mean and median earnings. We find no evidence that competition causes changes in actual earnings. Third, our experiment relies on the validity of the matching procedure between firms in treatment and control groups. 
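The nonparametric claim above, that all fifteen mergers show a coverage drop and twelve of fifteen a mean-bias increase, corresponds to a simple sign test. A minimal sketch, under the null that drops and rises are equally likely:

```python
from math import comb

n = 15  # number of brokerage-house mergers in the sample

# Probability that all 15 mergers show a coverage drop under the null.
p_all_drop = 0.5 ** n

# Probability that 12 or more of 15 mergers show a mean-bias increase.
p_bias_up = sum(comb(n, k) for k in range(12, n + 1)) / 2 ** n
```

Both one-sided p-values are small (about 3e-5 and 0.018, respectively), which is the sense in which the effect is "robustly significant in a nonparametric sense."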
In general, our findings do not point to a problem with the matching, but to verify that the differences we observe do not merely capture ex ante differences in observables, we report similar DID estimators for other response variables—Tobin’s Q, size, returns, volatility, profitability, and sales. The results in Panel B of Table VI show that none of these observables is significantly affected by the merger event. These results are comforting, as they confirm the validity of our matching procedure. Fourth, the nature of our experiment requires that the same company be covered by two merging houses. To ensure that our effects are not due merely to the fact that the assignment of companies to brokerage houses is not random, we reexamine our evidence by focusing on stocks that are covered by one of the


TABLE VI
VALIDITY OF THE NATURAL EXPERIMENT

Panel A: Change in earnings
                                            Mean EARN    Median EARN
                                            (1)          (2)
DID estimator (SIZE/BM/RET/NOAN-matched)    −0.0002      −0.0002
                                            (0.0005)     (0.0005)

Panel B: Change in firm characteristics, SIZE/BM/RET/NOAN-matched
Stock characteristic    DID estimator
Tobin’s Q               0.0397 (0.1138)
Size                    30.52 (20.63)
Returns                 0.0015 (0.0012)
Volatility              −0.0069 (0.0054)
Profitability           −0.0593 (0.0597)
Log(sales)              0.0046 (0.0090)

Panel C: Change in forecast bias for non-overlapping stocks
                                            Coverage     Mean BIAS    Median BIAS
                                            (1)          (2)          (3)
SIZE/BM/RET/NOAN-matched                    0.080        0.0001       0.0001
                                            (0.106)      (0.0004)     (0.0004)

Panel D: Change in forecast bias for non-overlapping stocks as a control
                                            Mean BIAS    Median BIAS
                                            (1)          (2)
SIZE/BM/RET/NOAN-matched                    0.0017∗∗∗    0.0018∗∗∗
                                            (0.0006)     (0.0006)

Notes. N = 1,656 in Panels A, B, and D. In Panel A, we measure analyst earnings (EARN_jt) as the actual EPS expressed as a percentage of the previous year’s stock price. For all mergers, we split the sample of stocks into those covered by both merging brokerage houses (treatment sample) and those not covered by both houses (control sample). We also divide stocks into premerger and postmerger periods (a one-year window for each period). For each period we further construct benchmark portfolios using the control sample based on stocks’ size (SIZE), book-to-market ratio (BM), average past year’s returns (RET), and analyst coverage (NOAN). Our benchmark assignment involves three portfolios in each category. Each stock in the treatment sample is then assigned to its own benchmark portfolio (SIZE/BM/RET/NOAN-matched). Next, for each period, we calculate the cross-sectional mean and median of the differences in earnings across all stocks in the treatment sample and their respective benchmarks. Finally, we calculate the difference in differences between the postevent period and the pre-event period (DID estimator). In Panel B, we provide the DID estimator for various corporate characteristics, including Tobin’s Q, asset size, stock returns, volatility, profitability, and log sales. In Panel C, the treatment sample is constructed based on the stocks that are covered by one but not both merging houses. In Panel D, the control sample is constructed using the stocks that are covered by one but not both merging houses. Our sample excludes observations with stock prices lower than $5 and those for which the absolute difference between forecast value and true earnings exceeds $10. Standard errors (in parentheses) are clustered at the merger groupings. ∗∗∗, ∗∗, ∗ denote 1%, 5%, and 10% statistical significance.
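The benchmark-matched DID estimator described in the notes to Table VI can be sketched in a few lines. This is a stylized illustration on synthetic data: the column names are hypothetical, we match on only two of the four characteristics (SIZE and BM terciles) for brevity, and the 0.0017 treatment effect is planted without noise so the estimator recovers it exactly:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n_stocks = 300

# Hypothetical stock-period panel: each stock appears in a premerger
# (post = 0) and a postmerger (post = 1) period.
df = pd.DataFrame({
    "stock": np.repeat(np.arange(n_stocks), 2),
    "post": np.tile([0, 1], n_stocks),
    "treated": np.repeat(rng.integers(0, 2, n_stocks), 2),
    "size_t": np.repeat(rng.integers(0, 3, n_stocks), 2),  # SIZE tercile
    "bm_t": np.repeat(rng.integers(0, 3, n_stocks), 2),    # BM tercile
})
# Stylized bias: a 0.0017 effect on treated stocks after the merger.
df["bias"] = 0.001 + 0.0017 * df["post"] * df["treated"]

# Benchmark = mean bias of control stocks in the same portfolio and period.
keys = ["post", "size_t", "bm_t"]
bench = (df[df["treated"] == 0].groupby(keys)["bias"].mean()
         .rename("bench_bias").reset_index())
m = df[df["treated"] == 1].merge(bench, on=keys, how="left")
m["excess"] = m["bias"] - m["bench_bias"]

# DID estimator: postevent mean excess bias minus pre-event mean excess bias.
did = (m.loc[m["post"] == 1, "excess"].mean()
       - m.loc[m["post"] == 0, "excess"].mean())
```

With noise added to `bias`, `did` would be distributed around the planted effect, which is the sense in which the Table VI estimates carry standard errors.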


FIGURE II
Kernel Densities of Differences between Treatment and Control before and after Merger

[Figure: Epanechnikov kernel densities of the difference in bias (x-axis: “Difference in bias”; y-axis: “Frequency”), plotted separately for the periods before and after the merger.]

We show Epanechnikov kernel densities of differences in forecast bias between treatment and control groups for the period before and after the merger. The bandwidth for the density estimation is selected using the plug-in formula of Sheather and Jones (1991). The rightward shift of the distribution after the merger is significant because the hypothesis of equality of distributions is rejected at the 1% level using the Kolmogorov–Smirnov test.

merging houses, but not by both. We show in Panel C of Table VI that the average stock coverage does not change significantly on the event date across these treatment and control groups, and the change in the bias is statistically not different from zero. We further apply this setting to validate the quality of our control group. Specifically, in Panel D of Table VI, we show that using stocks covered by only one of the two merging houses as a control group does not change the nature of our results. In fact, the results become slightly stronger than those in our baseline specification. Fifth, we examine whether competition changes the entire distribution of forecasts. To this end, we plot Epanechnikov kernel densities of bias in the treatment group relative to the control group before and after the merger. The bandwidth for the density estimation is selected using the plug-in formula of Sheather and Jones (1991). Figure II presents the results. We observe a significant rightward shift in the entire distribution of bias in the


postmerger period. The rightward shift of the distribution after the merger is significant: the hypothesis of equality of distributions is rejected at the 1% level using the Kolmogorov–Smirnov test. Moreover, the average relative bias becomes strictly positive, consistent with our earlier findings. These results suggest that the findings of our experiment are not driven by outliers and further indicate that the merger mainly causes previously unbiased analysts to become biased.

VI.C. Robustness

In this section, we report a number of tests that confirm the robustness of our results. An alternative econometric approach to capturing the effect of the merger on bias is to estimate the following regression model:

(3)  C_i = α + β1 Merge_i + β2 Affected_i + β3 Merge_i × Affected_i + β4 Controls_i + ε_i,

where C is the characteristic that may be affected by the merger; Merge is an indicator variable equal to one for observations after the merger, and zero otherwise; Affected is an indicator variable equal to one if stock i is affected by the merger, and zero otherwise; and Controls is a vector of stock-specific covariates affecting C. In this specification, the coefficient of primary interest is β3, which captures the partial effect of the merger; in the version with additional controls it is similar in spirit to the DID estimator in equation (2). By including additional controls, we account for any systematic differences in stocks that may affect the estimated effect of the merger. Importantly, the regressions include merger fixed effects and industry fixed effects, which ensure comparability of the samples across various time-invariant characteristics.
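A minimal sketch of estimating a regression of the form of equation (3), with fixed effects and standard errors clustered at the merger grouping, might look as follows. The data are simulated, and the variable names, magnitudes, and the planted 0.002 interaction effect are assumptions for illustration (statsmodels is assumed to be available):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 2000

# Simulated stock-level observations.
d = pd.DataFrame({
    "merge": rng.integers(0, 2, n),       # post-merger indicator
    "affected": rng.integers(0, 2, n),    # treatment indicator
    "merger_id": rng.integers(0, 15, n),  # merger grouping (cluster unit)
    "lnsize": rng.normal(7.0, 1.5, n),    # a stock-level control
})
d["bias"] = (0.002 * d["merge"] * d["affected"]
             + 0.0005 * d["lnsize"] + rng.normal(0, 0.01, n))

# Bias on Merge, Affected, their interaction, and controls, with merger
# fixed effects via C(); standard errors clustered by merger grouping.
res = smf.ols("bias ~ merge * affected + lnsize + C(merger_id)", data=d).fit(
    cov_type="cluster", cov_kwds={"groups": d["merger_id"]})
beta3 = res.params["merge:affected"]  # the coefficient of primary interest
```

The fitted `beta3` should recover the planted interaction effect up to sampling noise; the clustered covariance matrix widens the reported standard errors when errors are correlated within merger groupings.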
We also include brokerage fixed effects and firm fixed effects, which help us determine whether the observed effects are driven by systematic time-invariant differences between the brokerage houses covering particular companies or between the companies themselves. These regressions should provide answers similar to those from the DID approach, except that we can control for additional sources of heterogeneity. We estimate the model using a pooled (panel) regression and calculate standard errors by clustering at the


merger level. This approach addresses the concern that the errors, conditional on the independent variables, are correlated within merger groupings (e.g., Moulton [1986]). One reason this may occur is that the bias occurring in one company may also naturally arise in another company covered by the same house because the broker tends to cover stocks with similar bias pressures.9 The results for the effect on bias obtained using an alternative regression approach outlined in equation (3) are presented in Table VII. The first column shows the result using mean bias and the second column shows the results for median bias. In the first column, the coefficient of interest in front of MERGE × AFFECTED is 0.0021, which is significant at the 10% level. The coefficient of interest increases slightly to 0.0024 for median bias and the statistical significance level is 5%. Hence, the results in this table are consistent with those using the DID estimator, though the estimates are a bit bigger. Further, we account for the fact that the bias change we capture may result from the difference in the timeliness of the forecasts issued premerger compared to postmerger. In particular, empirical evidence suggests that analyst bias is more pronounced the farther out is the forecast. Indeed, there is a tendency for an analyst to undershoot the earnings number for forecasts issued near the earnings date. To this end, we first document that there is no difference in the timeliness of the forecasts issued premerger as compared to postmerger. Further, in our regression model, we include an additional control variable—recency (Rec)—that measures the average distance of the forecast from the date for which the forecast is obtained. The results, presented in columns (3) and (4) of Table VII, show that controlling for forecast timing does not qualitatively affect our results. 
In this paper, we focus on annual earnings forecasts, because these are the key numbers that the market looks to, and every analyst has to submit such a forecast. For completeness, we also look at how long-term growth forecasts and stock recommendations change for the treatment sample in comparison to the control sample around these mergers. One downside is that data in this case are more sparse as analysts do not issue as many timely growth forecasts or recommendations. Moreover, we cannot measure bias 9. We have also considered other dimensions of clustering: clustering by industry, by stock, by time, and by time and industry. All of them produced standard errors that were lower than the ones we report.


TABLE VII
CHANGE IN FORECAST BIAS: REGRESSION EVIDENCE

                            Mean BIAS   Median BIAS   Mean BIAS   Median BIAS
                            (1)         (2)           (3)         (4)
MERGE_i                     0.0005      0.0005        0.0005      0.0006
                            (0.0008)    (0.0008)      (0.0008)    (0.0008)
AFFECTED_i                  −0.0019∗∗   −0.0019∗∗     −0.0019∗∗∗  −0.0019∗∗∗
                            (0.0006)    (0.0007)      (0.0006)    (0.0006)
MERGE_i × AFFECTED_i        0.0021∗     0.0024∗∗      0.0021∗     0.0024∗∗
                            (0.0012)    (0.0012)      (0.0012)    (0.0012)
LNSIZE_i,t−1                0.0038∗∗∗   0.0037∗∗∗     0.0038∗∗∗   0.0037∗∗∗
                            (0.0010)    (0.0010)      (0.0009)    (0.0010)
RETANN_i,t−1                0.0005      0.0004        0.0005      0.0004
                            (0.0017)    (0.0017)      (0.0017)    (0.0017)
LNBM_i,t−1                  −0.0037     0.0001        −0.0037     0.0001
                            (0.0073)    (0.0074)      (0.0073)    (0.0074)
COVERAGE_i,t−1              0.0001      0.0001        0.0001      0.0001
                            (0.0001)    (0.0001)      (0.0001)    (0.0001)
SIGMA_i,t−1                 0.0001∗∗    0.0000        0.0001∗∗    0.0000
                            (0.0000)    (0.0000)      (0.0000)    (0.0000)
VOLROE_i,t−1                0.0000      0.0000        0.0000      0.0000
                            (0.0000)    (0.0000)      (0.0000)    (0.0000)
PROFIT_i,t−1                0.0629∗∗∗   0.0620∗∗∗     0.0630∗∗∗   0.0621∗∗∗
                            (0.0052)    (0.0052)      (0.0052)    (0.0052)
SP500_i,t−1                 0.0032      0.0039        0.0032      0.0039
                            (0.0031)    (0.0037)      (0.0031)    (0.0037)
REC_i,t−1                                             −0.0000     −0.0000
                                                      (0.0000)    (0.0000)
Merger fixed effects        Yes         Yes           Yes         Yes
Industry fixed effects      Yes         Yes           Yes         Yes
Brokerage fixed effects     Yes         Yes           Yes         Yes
Firm fixed effects          Yes         Yes           Yes         Yes
Observations                57,005      57,005        57,005      57,005

Notes. The dependent variable is forecast bias (BIAS), defined as the difference between forecasted earnings and actual earnings, adjusted for the past year’s stock price. For each merger, we consider a one-year window prior to the merger (pre-event window) and a one-year window after the merger (postevent window). We construct an indicator variable (MERGE) equal to one for the postevent period and zero for the pre-event period. For each merger window, we assign an indicator variable (AFFECTED) equal to one for each stock covered by both merging brokerage houses (treatment sample) and zero otherwise. LNSIZE is the natural logarithm of the market cap of the stock; SIGMA_it is the variance of daily (simple, raw) returns of stock i during year t; RETANN is the annual return on the stock; LNBM is the natural logarithm of the book-to-market ratio; COVERAGE denotes the number of analysts tracking the stock. To measure the volatility of ROE (VOLROE), we estimate an AR(1) model for each stock’s ROE using a ten-year series of the company’s valid annual ROEs. ROE_it is firm i’s return on equity in year t, calculated as the ratio of earnings in year t over the book value of equity. VOLROE is the variance of the residuals from this regression. PROFIT_it is the profitability of company i at the end of year t, defined as operating income over book value of assets. SP500 is an indicator variable equal to one if a stock is included in the S&P 500 index. REC_it is the recency measure of the forecast, measured as the average distance between the analyst forecast and the earnings report. All regressions include three-digit SIC industry fixed effects, merger fixed effects, brokerage fixed effects, and firm fixed effects. We report results based on both mean and median bias. Standard errors (in parentheses) are clustered at the merger groupings. ∗∗∗, ∗∗, ∗ denote 1%, 5%, and 10% statistical significance.

1716

TABLE VIII
CHANGE IN ALTERNATIVE FORECAST BIAS MEASURES: DID ESTIMATOR

                                           Mean BIAS   Median BIAS
                                           (1)         (2)
Panel A: Long-term growth
DID estimator (SIZE/BM/RET/NOAN-matched)   0.553∗∗∗    0.352∗
                                           (0.202)     (0.212)

Panel B: Analyst recommendations
DID estimator (SIZE/BM/RET/NOAN-matched)   0.0501      0.0902∗
                                           (0.0412)    (0.0556)

Notes. We measure analyst forecast bias (BIAS_jt) using two different measures: the forecast of long-term growth of analyst j at time t (Panel A), and analyst j’s stock recommendation at time t (Panel B). For each analyst, the recommendation variable is ranked from 1 to 5, where 1 is strong sell, 2 is sell, 3 is hold, 4 is buy, and 5 is strong buy. The consensus bias is expressed as a mean or median bias among all analysts covering a particular stock. For all mergers, we split the sample of stocks into those covered by both merging brokerage houses (treatment sample) and those not covered by both houses (control sample). We also divide stocks into premerger and postmerger periods (a one-year window for each period). For each period we further construct benchmark portfolios using the control sample based on stocks’ size (SIZE), book-to-market ratio (BM), average past year’s returns (RET), and analyst coverage (NOAN). Our benchmark assignment involves three portfolios in each category. Each stock in the treatment sample is then assigned to its own benchmark portfolio (SIZE/BM/RET/NOAN-matched). Next, for each period, we calculate the cross-sectional average of the differences in analyst forecast bias across all stocks in the treatment sample and their respective benchmarks. Finally, we calculate the difference in differences between the postevent period and the pre-event period (DID estimator). Our sample excludes observations with stock prices lower than $5 and those for which the absolute difference between forecast value and true earnings exceeds $10. Standard errors (in parentheses) are clustered at the merger groupings. ∗∗∗, ∗∗, ∗ denote 1%, 5%, and 10% statistical significance.

in the same way, because there are no actual earnings forecasts to make the comparison to. However, we can gauge the extent to which the average long-term forecast or recommendation changes across the merger date for our treatment sample (provided data are available) compared to the control group. To the extent that there is less competition as a result of these mergers, we expect forecasts for percentage growth to be higher after the merger and recommendations to be more positive. The results for the long-term growth forecasts and recommendations are in Table VIII. Panel A reports the results for long-term growth forecasts (the percentage long-term growth in earnings). Using the most conservative benchmark, we see that long-term growth forecasts increase by 55 basis points after the merger, using mean forecasts, and by 35 basis points, using median forecasts. The mean long-term growth forecast in the treatment sample is 14%, with a standard deviation of 6%. So a one-analyst drop in coverage in our treatment sample results in an increase in the mean long-term growth forecast that is about 9% of a standard deviation of these forecasts. This is both an economically and a statistically significant effect.
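The back-of-the-envelope scaling behind these effect sizes, using the point estimates from Table VIII and the sample moments quoted in the text, is just:

```python
# Long-term growth: a 0.55-pp mean DID estimate against a 6-pp
# cross-sectional standard deviation of long-term growth forecasts.
ltg_share_of_sd = 0.55 / 6.0       # roughly 9% of a standard deviation

# Recommendations: mean (median) DID estimates of 0.0501 (0.0902) against
# a 0.44 standard deviation of recommendation scores.
rec_mean_share = 0.0501 / 0.44     # about 10% of a standard deviation
rec_median_share = 0.0902 / 0.44   # about 20% of a standard deviation
```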


Panel B reports the results using recommendations. Recommendations are given in terms of the following five rankings: strong sell, sell, hold, buy, and strong buy. We convert these into a score of 1 for strong sell, 2 for sell, 3 for hold, 4 for buy, and 5 for strong buy. We then take the mean and median of these recommendation scores and look at how they vary for the treatment sample and the control group across the merger date. Using again the most conservative benchmark, the merger event is associated with an increase in the average recommendation score for the treatment sample of 0.05 using the mean score and 0.09 using the median score. The result using the mean score is not statistically significant, but the result using the median score is statistically significant at the 10% level. However, both estimates imply sizable economic effects. The mean score for the treatment sample is 3.87 with a standard deviation of 0.44. Hence, we find that a one-analyst drop in coverage leads to about a 20% (10%) increase in the median (mean) recommendation score as a fraction of the standard deviation of these recommendations. In sum, we conclude that our baseline results based on annual forecasts are robust to different measures of bias. Moreover, in economic magnitude, they are half as large as the alternative effects we document above, and thus they constitute a lower bound on the effect of competition on bias. One explanation is that long-term growth forecasts and recommendations might be more difficult to verify and thus subject to stronger competition effects.

VI.D. Additional Analysis

In this section, we bring to bear additional evidence on one of the mechanisms behind the competition effect in the data.10 Consider the independence-rationale mechanism in Gentzkow and Shapiro, in which one independent or honest analyst can discipline her peers and subvert attempts to suppress information. Now imagine a firm covered by many analysts.
This firm is less likely to try to bribe analysts to suppress information because the likelihood of drawing an independent or honest analyst is so high. In contrast, a firm with only a couple of analysts covering it is more likely to attempt to influence its few analysts because the payoffs for doing so are higher. One can measure attempts

10. We would like to thank an anonymous referee for several suggestions on addressing this issue.


by firms to influence analysts by comparing the incentives for optimism bias faced by analysts who cover stocks with low analyst coverage versus those faced by analysts who cover stocks with high analyst coverage—by our logic, we expect stronger incentives for optimism bias among analysts covering stocks with little coverage or competition to begin with. Hence, our focus in this section is to measure how the incentives for optimism bias faced by analysts change depending on how much competition there is. Here we build on the work of Hong and Kubik (2003), who measure the implicit incentives of analysts for bias by looking at how career outcomes depend on optimism bias. They document strong evidence that optimism is correlated with subsequent career outcomes: optimism increases the chances of being promoted, whereas pessimism increases the chances of being demoted. Our twist, building on their framework, is to examine whether, in the cross section, the subsequent career outcomes of analysts who cover stocks followed by fewer analysts are more strongly related to the degree of their bias. We interpret this as firms with little analyst coverage attempting to influence analysts by providing them incentives through their brokerage houses. To this end, using analysts as the unit of observation, we estimate the following linear probability models:

(4a)  Promotion_it+1 = α + β1 Bias_it + β2 Coverage_it + β3 Bias_it × Coverage_it + β4 Controls_it + ε_it+1,

(4b)  Demotion_it+1 = α + β1 Bias_it + β2 Coverage_it + β3 Bias_it × Coverage_it + β4 Controls_it + ε_it+1.

Our coefficient of interest is β3. Following Hong and Kubik (2003), Promotion_it+1 equals one if analyst i moves to a brokerage house with more analysts, and zero otherwise; Demotion_it+1 equals one if analyst i moves to a brokerage house with fewer analysts, and zero otherwise; Controls is a vector of controls including forecast accuracy, the natural logarithm of an analyst’s experience, and the size of the brokerage house. We also include year fixed effects, broker fixed effects, and analyst fixed effects. We estimate our regression model using a pooled (panel) regression and calculate standard errors by clustering at the analyst level. An important control in the regression model is forecast accuracy. To construct our measure of accuracy, we first calculate an analyst forecast error, defined as the absolute difference between his or her forecast and the actual EPS of firm i at time t.


We express the difference as a percentage of the previous year’s stock price. Subsequently, we follow the methodology in Hong and Kubik (2003) and construct a measure of relative analyst forecast accuracy. To this end, we first sort the analysts who cover a particular stock in a year based on their forecast error. We then assign a ranking based on this sorting: the best analyst (the one with the lowest forecast error) receives the first rank for that stock, the second best analyst receives the second rank, and so on until the worst analyst receives the highest rank. If more than one analyst is equally accurate, we assign all those analysts the midpoint value of the ranks they take up. Finally, we scale an analyst’s rank for a firm by the number of analysts who cover that firm. Following Hong and Kubik (2003), who show that analyst accuracy predicts career outcomes in a nonlinear fashion, we define two measures of accuracy: High Accuracy is an indicator variable that equals one if an analyst’s error falls into the highest decile of the distribution of the accuracy measure, and zero otherwise; Low Accuracy is an indicator variable that equals one if the analyst’s error falls into the lowest decile of the distribution of the accuracy measure, and zero otherwise. Note that in the regression with promotions as the dependent variable, the indicator for high accuracy is used as a control, whereas the indicator for low accuracy is used as a control when demotion is the dependent variable. A more symmetric specification that includes a set of dummies for the accuracy deciles, along with their respective interactions with coverage, as controls for both promotions and demotions yields identical results (available from the authors). We present results from the estimation of equation (4) in Table IX. Columns (1) and (2) show the results for promotion and columns (3) and (4) for demotion.
In columns (1) and (3), we replicate the results in Hong and Kubik (2003) and show that bias positively affects the probability of being promoted and negatively affects the probability of being demoted. More important, consistent with the hypothesis that competition affects incentives for bias, we find that the probability of being promoted is positively correlated with bias in an environment with lower competition. The result is statistically significant at the 1% level. Likewise, we find a qualitatively similar result for the probability of being demoted at the 10% level of significance. These results provide support for the independence rationale mechanism behind the disciplining effect of competition on bias.
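The relative-accuracy ranking described above, with midpoint ranks for ties and scaling by coverage, can be sketched in pandas. The four analysts and their error values are hypothetical:

```python
import pandas as pd

# Absolute forecast errors for four hypothetical analysts covering one
# stock-year; analysts A and C are equally (in)accurate.
errors = pd.Series({"A": 0.010, "B": 0.004, "C": 0.010, "D": 0.002})

# Rank by error (best = rank 1); tied analysts receive the midpoint
# of the ranks they take up, matching the paper's tie rule.
rank = errors.rank(method="average")

# Scale by the number of analysts covering the firm, so ranks are
# comparable across stocks with different coverage.
relative_rank = rank / len(errors)
```

Here D gets rank 1, B gets rank 2, and the tied analysts A and C each get the midpoint rank 3.5; decile indicators for High and Low Accuracy would then be cut from the distribution of `relative_rank` across all stock-years.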


TABLE IX
INCENTIVES AND CAREER OUTCOMES

                              Promotion                Demotion
                              (1)         (2)          (3)         (4)
BIAS                          0.0106∗     0.0448∗∗∗    −0.0036     −0.0065
                              (0.0067)    (0.0136)     (0.0074)    (0.0092)
HIGH ACCURACY                 0.0082∗∗    0.0183∗∗
                              (0.0040)    (0.0080)
LOW ACCURACY                                           0.0262∗∗∗   0.0233∗∗
                                                       (0.0062)    (0.0098)
BIAS × COVERAGE                           −0.0019∗∗∗               0.0005∗
                                          (0.0006)                 (0.0003)
HIGH ACCURACY × COVERAGE                  −0.0006
                                          (0.0004)
LOW ACCURACY × COVERAGE                                            0.0001
                                                                   (0.0005)
COVERAGE                      −0.0004∗    0.0002       0.0004      0.0002
                              (0.0002)    (0.0003)     (0.0003)    (0.0003)
EXPERIENCE                    0.0262∗∗∗   0.0265∗∗∗    0.0067      0.0065
                              (0.0052)    (0.0052)     (0.0043)    (0.0043)
BROKERAGE SIZE                −0.0016∗∗∗  −0.0016∗∗∗   0.0008∗∗∗   0.0008∗∗∗
                              (0.0003)    (0.0003)     (0.0002)    (0.0002)
Year fixed effects            Yes         Yes          Yes         Yes
Brokerage fixed effects       Yes         Yes          Yes         Yes
Analyst fixed effects         Yes         Yes          Yes         Yes
Observations                  45,770      45,770       45,770      45,770

Notes. The dependent variables are promotion (PROMOTION), defined as an indicator variable equal to one when an analyst moves to a larger brokerage house, and zero otherwise; and demotion (DEMOTION), defined as an indicator variable equal to one when an analyst moves to a smaller brokerage house, and zero otherwise. BIAS is forecast bias, defined as the difference between forecasted earnings and actual earnings, adjusted for the past year’s stock price. COVERAGE denotes the number of analysts tracking the stock. HIGH ACCURACY is an indicator variable that equals one if an analyst’s error falls into the highest decile of the distribution of the accuracy measure, and zero otherwise; LOW ACCURACY is an indicator variable that equals one if the analyst’s error falls into the lowest decile of the distribution of the accuracy measure, and zero otherwise. EXPERIENCE is the natural logarithm of the number of years that the analyst has been employed by the brokerage house. BROKERAGE SIZE is the number of analysts employed by the brokerage house. This table includes an interaction term between BIAS and COVERAGE. All regressions include year fixed effects, brokerage fixed effects, and analyst fixed effects. Standard errors (in parentheses) are clustered at the analyst groupings. ∗∗∗, ∗∗, ∗ denote 1%, 5%, and 10% statistical significance.

Another key part of the independence-rationale mechanism through which we think competition affects bias is that independent analysts keep other analysts honest. To this end, one can assess the incentives for being biased by comparing the penalty that competition imposes for an analyst who is contradicted by other analysts. A reasonable hypothesis is that the impact of bias on the probability of being promoted for such an analyst is lower the more contradicted she is. To evaluate this hypothesis, one can


define contradiction by other analysts as the average across stocks of the absolute difference between an analyst’s bias and the average bias of all other analysts, calculated for each stock. We find some mixed evidence in support of this hypothesis. We find that the importance of bias for an analyst’s promotion decreases with the degree to which she is contradicted. The result is significant at the 5% level. Unfortunately, being contradicted by other analysts does not significantly change the probability of being demoted. We omit these results for brevity. Nonetheless, this set of findings, along with the findings on how incentives for analyst optimism bias vary with competition, provides reassuring support for the independence-rationale mechanism. Finally, even though it might seem ideal to implement the above ideas in the context of our merger experiment, this task proves quite difficult empirically. The main problem is statistical power. Because we do not observe many career changes for each analyst around merger events, it is very difficult to estimate the incentives for bias separately before and after the merger. Hence, we use the approach that is most appealing statistically.

VII. CONCLUSIONS

We attempt to measure the effect of competition on bias in the context of analyst earnings forecasts, which are known to be excessively optimistic because of conflicts of interest. Using cross-sectional regressions, we find that stocks with more analyst coverage, and presumably more competition, have less biased forecasts on average. However, these OLS estimates are biased, because analyst coverage is endogenous. We propose a natural experiment for competition—namely, mergers of brokerage houses, which result in the firing of analysts because of redundancy and other reasons, including culture clash and general merger turmoil.
We use this decrease in analyst coverage for stocks covered by both merging houses before the merger (the treatment sample) to measure the causal effect of competition on bias. We find that the treatment sample simultaneously experiences a decrease in analyst coverage and an increase in optimism bias the year after the merger relative to a control group of stocks. Our findings suggest that competition reduces analyst optimism bias. Moreover, the economic effect from our estimates is much larger than that from the OLS estimates.


Our findings have important welfare implications. Notably, a number of studies find that retail investors, in contrast to institutional investors, cannot adjust for (i.e., debias) the optimism of analysts, and hence these optimistic recommendations affect stock prices (e.g., Michaely and Womack [1999], Malmendier and Shanthikumar [2007]). One implication of our findings is that more competition can help protect retail investors because it tends to lower the optimism bias of analysts. Finally, our natural experiment for analyst coverage can also be useful for thinking about the determinants of stock prices. There is a large literature in finance and accounting that has tried to pin down whether analyst coverage increases stock prices. These studies are typically subject to endogeneity bias, as analysts tend to cover high-priced, high-performing, or large stocks for a variety of reasons. In other words, the causality might be reversed. Our natural experiment can hence be used to identify the causal effect of coverage on stock prices. Recent research in the spirit of our experiment is that of Kelly and Ljungqvist (2007), who use closures of brokerage houses as a source of exogenous variation in coverage. We anticipate more work along these lines.

APPENDIX: MERGERS INCLUDED IN THE SAMPLE (SORTED BY DATE)

No.  Merger date   Target house                     Bidder house
1    9/10/1984     Becker Paribas                   Merrill Lynch & Co., Inc.
2    10/31/1988    Butcher & Co., Inc.              Wheat First Securities Inc (WF)
3    12/31/1994    Kidder Peabody & Co.             PaineWebber Group, Inc.
4    5/31/1997     Dean Witter Discover & Co.       Morgan Stanley Group, Inc.
5    11/28/1997    Salomon Brothers                 Smith Barney
6    1/9/1998      Principal Financial Securities   EVEREN Capital Corp.
7    2/17/1998     Jensen Securities Co.            DA Davidson & Co.
8    4/6/1998      Wessels Arnold & Henderson LLC   Dain Rauscher Corp.
9    10/1/1999     EVEREN Capital Corp.             First Union Corp., Charlotte, NC
10   6/12/2000     JC Bradford & Co.                PaineWebber Group, Inc.
11   10/15/2000    Donaldson Lufkin & Jenrette      CSFB
12   12/10/2000    Paine Webber                     UBS Warburg Dillon Read
13   12/31/2000    JP Morgan                        Chase Manhattan
14   9/18/2001     Josephthal Lyon & Ross           Fahnestock & Co.
15   3/22/2005     Parker/Hunter Inc                Janney Montgomery Scott LLC

COMPETITION AND BIAS


PRINCETON UNIVERSITY AND NATIONAL BUREAU OF ECONOMIC RESEARCH
NEW YORK UNIVERSITY AND NATIONAL BUREAU OF ECONOMIC RESEARCH

REFERENCES

Abarbanell, Jeffery S., "Do Analysts' Earnings Forecasts Incorporate Information in Prior Stock Price Changes?" Journal of Accounting and Economics, 14 (1991), 147–165.
Besley, Timothy, and Andrea Prat, "Handcuffs for the Grabbing Hand? The Role of the Media in Political Accountability," American Economic Review, 96 (2006), 720–736.
Brown, Phillip, George Foster, and Eric Noreen, Security Analyst Multi-year Earnings Forecasts and the Capital Market (Sarasota, FL: American Accounting Association, 1985).
Chopra, Vijay K., "Why So Much Error in Analysts' Earnings Forecasts?" Financial Analysts Journal, 54 (1998), 30–37.
Dechow, Patricia, Amy Hutton, and Richard G. Sloan, "The Relation between Affiliated Analysts' Long-Term Earnings Forecasts and Stock Price Performance Following Equity Offerings," Contemporary Accounting Research, 17 (1999), 1–32.
Dreman, David, and Michael Berry, "Analyst Forecasting Errors and Their Implications for Security Analysis," Financial Analysts Journal, 51 (1995), 30–42.
Dugar, Abhijeet, and Siva Nathan, "The Effect of Investment Banking Relationships on Financial Analysts' Earnings Forecasts and Investment Recommendations," Contemporary Accounting Research, 12 (1995), 131–160.
Fang, Lily, and Ayako Yasuda, "The Effectiveness of Reputation as a Disciplinary Mechanism in Sell-Side Research," Review of Financial Studies, 22 (2009), 3735–3777.
Gentzkow, Matthew, Edward L. Glaeser, and Claudia Goldin, "The Rise of the Fourth Estate: How Newspapers Became Informative and Why It Mattered," in Corruption and Reform: Lessons from America's History, Edward L. Glaeser and Claudia Goldin, eds. (Cambridge, MA: National Bureau of Economic Research, 2006).
Gentzkow, Matthew, and Jesse M. Shapiro, "Media Bias and Reputation," Journal of Political Economy, 114 (2006), 280–316.
——, "Competition and Truth in the Market for News," Journal of Economic Perspectives, 22 (2008), 133–154.
Hong, Harrison, and Jeffrey D. Kubik, "Analyzing the Analysts: Career Concerns and Biased Earnings Forecasts," Journal of Finance, 58 (2003), 313–351.
Kelly, Bryan, and Alexander Ljungqvist, "The Value of Research," Stern NYU Working Paper, 2007.
Laster, David, Paul Bennett, and In Sun Geoum, "Rational Bias in Macroeconomic Forecasts," Quarterly Journal of Economics, 114 (1999), 293–318.
Lim, Terence, "Rationality and Analysts' Forecast Bias," Journal of Finance, 56 (2001), 369–385.
Lin, Hsiou-wei, and Maureen F. McNichols, "Underwriting Relationships, Analysts' Earnings Forecasts and Investment Recommendations," Journal of Accounting and Economics, 25 (1998), 101–127.
Malmendier, Ulrike, and Devin Shanthikumar, "Are Small Investors Naïve about Incentives?" Journal of Financial Economics, 85 (2007), 457–489.
McNichols, Maureen, and Patricia C. O'Brien, "Self-Selection and Analyst Coverage," Journal of Accounting Research, 35 (1997), Supplement, 167–199.
Michaely, Roni, and Kent L. Womack, "Conflict of Interest and the Credibility of Underwriter Analyst Recommendations," Review of Financial Studies, 12 (1999), 653–686.
Moulton, Brent, "Random Group Effects and the Precision of Regression Estimates," Journal of Econometrics, 32 (1986), 385–397.
Mullainathan, Sendhil, and Andrei Shleifer, "The Market for News," American Economic Review, 95 (2005), 1031–1053.


Ottaviani, Marco, and Peter N. Sørensen, "Forecasting and Rank-Order Contests," Kellogg Working Paper, 2005.
Sheather, Simon J., and Chris M. Jones, "A Reliable Data-Based Bandwidth Selection Method for Kernel Density Estimation," Journal of the Royal Statistical Society, Series B, 53 (1991), 683–690.
Stickel, Scott E., "Predicting Individual Analyst Earnings Forecasts," Journal of Accounting Research, 28 (1990), 409–417.
Wu, Joanna S., and Amy Zang, "What Determines Financial Analysts' Career Outcomes during Mergers?" Journal of Accounting and Economics, 47 (2009), 59–86.

IMPORTED INTERMEDIATE INPUTS AND DOMESTIC PRODUCT GROWTH: EVIDENCE FROM INDIA∗

PINELOPI KOUJIANOU GOLDBERG
AMIT KUMAR KHANDELWAL
NINA PAVCNIK
PETIA TOPALOVA

New goods play a central role in many trade and growth models. We use detailed trade and firm-level data from India to investigate the relationship between declines in trade costs, imports of intermediate inputs, and domestic firm product scope. We estimate substantial gains from trade through access to new imported inputs. Moreover, we find that lower input tariffs account on average for 31% of the new products introduced by domestic firms. This effect is driven to a large extent by increased firm access to new input varieties that were unavailable prior to the trade liberalization.

I. INTRODUCTION

New intermediate inputs play a central role in many trade and growth models. These models predict that firms benefit from international trade through increased access to previously unavailable inputs, a process that generates static gains from trade. Access to these new imported inputs in turn enables firms to expand their domestic product scope through the introduction of new varieties, which generates dynamic gains from trade. Despite the prominence of these models, we have surprisingly little evidence to date on the relevance of the underlying microeconomic mechanisms.

In this paper we take a step toward bridging the gap between theory and evidence by examining the relationship between new imported inputs and the introduction of new products by domestic firms in a large and fast-growing developing economy: India. During the 1990s, India experienced an explosion in the number of products manufactured by Indian firms, and these new products accounted for a quarter of India's manufacturing output growth (Goldberg et al. [henceforth GKPT] 2010a). During the same period, India also experienced a surge in imported inputs, with more than two-thirds of intermediate import growth occurring in new varieties. The goal of this paper is to determine whether the increase in Indian firms' access to new imported inputs can explain the introduction of new products into the domestic economy by these firms.

One of the challenges in addressing this question is the potential reverse causality between imports of inputs and new domestic products. For instance, firms may decide to introduce new products for reasons unrelated to international trade. Once the manufacture of such products begins, the demand for imported inputs, both existing and new varieties, may increase. This would lead to a classic reverse causality problem: the growth of domestic products could lead to the import of new varieties, and not vice versa. To identify the relationship between changes in imports of intermediates and the introduction of new products by domestic firms, we exploit the particular nature of India's trade reform. The reform reduced input tariffs differentially across sectors and was not subject to the usual political economy pressures because it was unanticipated by Indian firms.

Our analysis proceeds in two steps. We first offer strong reduced-form evidence that declines in input tariffs resulted in an expansion of firms' product scope: industries that experienced the greatest declines in input tariffs contributed relatively more to the introduction of new products by domestic firms.1 The relationship is also economically significant: lower input tariffs account on average for 31% of the observed increase in firms' product scope over this period. Moreover, the relationship is robust to specifications that control for preexisting industry- and firm-specific trends.

∗ We thank Matthew Flagge, Andrew Kaminski, Alexander Mcquoid, and Michael Sloan Rossiter for excellent research assistance and Andy Bernard, N.S. Mohanram, Marc Melitz, Steve Redding, Andres Rodriguez-Clare, Jagadeesh Sivadasan, Peter Schott, David Weinstein, the referees, the Editor, and several seminar participants for useful comments. We are particularly grateful to Christian Broda and David Weinstein for making their substitution elasticity estimates available to us. Goldberg also thanks the Center for Economic Policy Studies at Princeton for financial support. All remaining errors are our own. The views expressed in this paper are those of the authors and should not be attributed to the International Monetary Fund, its Executive Board, or its management. © 2010 by the President and Fellows of Harvard College and the Massachusetts Institute of Technology. The Quarterly Journal of Economics, November 2010.
We also find that lower input tariffs improved the performance of firms along other dimensions, including output, total factor productivity (TFP), and research and development (R&D) activity, in ways consistent with the link between trade and growth. In order to investigate the channels through which input tariff liberalization affected domestic product growth in India, we

1. Recent theoretical work by Bernard, Redding, and Schott (2006), Nocke and Yeaple (2006), and Eckel and Neary (2009) shows that trade liberalization should lead firms to rationalize their product scope. These theoretical models focus on the effect of final-goods tariffs on output, whereas the analysis of this paper focuses on input tariffs and the role of intermediates.

IMPORTED INPUTS AND PRODUCT GROWTH


then impose additional structure, guided by the methods of Feenstra (1994) and Broda and Weinstein (2006), and use India's Input–Output (IO) table to construct exact input price indices for each sector. The exact input price index has two components: one that captures changes in the prices of existing inputs and one that quantifies the impact of new imported varieties on the exact price index. Thus, we can separate the changes in the exact input price indices faced by firms into a "price" and a "variety" channel. This methodology reveals substantial gains from trade through access to new imported input varieties: accounting for new imported varieties lowers the import price index for intermediate goods on average by an additional 4.7% per year relative to conventional gains through lower prices of existing imports.

We relate the two components of the input price indices to changes in firm product scope. The results suggest an important role for the extensive margin of imported inputs: greater access to imported varieties increases firm scope. This relationship is robust to an instrumental variable strategy that accounts for the potential endogeneity of input price indices, using input tariffs and the proximity of India's trading partners as instruments. Hence, we conclude that input tariff liberalization contributed to domestic product growth not simply by making available imported inputs cheaper but, more importantly, by relaxing the technological constraints facing domestic producers through access to new imported input varieties that were unavailable prior to the liberalization.2

These findings relate to two distinct, yet related, literatures. First, endogenous growth models, such as those developed by Romer (1987, 1990) and Rivera-Batiz and Romer (1991), emphasize the static and dynamic gains arising from the import of new varieties.
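The split into a "price" and a "variety" channel corresponds to the exact CES import price index of Feenstra (1994). The following is a reference sketch in standard textbook notation, not reproduced from this paper's Section IV:

```latex
% Exact CES import price index: conventional "price" term times "variety" term.
% I is the set of varieties imported in both t-1 and t, w_{it} are Sato-Vartia
% weights, lambda_t is the share of period-t import spending that falls on the
% common set I, and sigma > 1 is the elasticity of substitution among varieties.
\frac{P_t}{P_{t-1}}
  \;=\;
  \underbrace{\prod_{i \in I}\left(\frac{p_{it}}{p_{i,t-1}}\right)^{w_{it}}}_{\text{price channel}}
  \times
  \underbrace{\left(\frac{\lambda_t}{\lambda_{t-1}}\right)^{\frac{1}{\sigma-1}}}_{\text{variety channel}}
```

Other things equal, entry of new imported varieties lowers λ_t below λ_{t−1}, and with σ > 1 the variety term pulls the exact index below the conventional price-only index; this is the mechanism behind the additional 4.7%-per-year decline quoted above.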
Not only do such varieties lead to productivity gains in the short and medium runs, but the resulting growth also fosters the creation of new domestic varieties that further contribute to growth. The first source (static) of gains has been addressed in the empirical literature before. Existing studies document a large expansion in new imported varieties (Feenstra 1994; Klenow and Rodriguez-Clare 1997; Broda and Weinstein 2006; Arkolakis et al. 2008), which, depending on the overall importance of new

2. The importance of increased access to imported inputs has been noted by Indian policy makers. In a recent speech, Rakesh Mohan, the managing director of the Indian Reserve Bank, argued that "trade liberalization and tariff reforms have provided increased access to Indian companies to the best inputs available globally at almost world prices" (Mohan 2008).


imported varieties in the total volume of trade, can generate substantial gains from trade (see, for example, Feenstra [1994] and Broda and Weinstein [2006]).3 Our evidence points to large static gains from trade arising from increased access to imported inputs. The second source (dynamic) of gains from trade has been empirically elusive, partly because data on the introduction of domestic varieties produced in each country have been difficult to obtain.4 The two studies closest to ours (Feenstra et al. 1999; Broda, Greenfield, and Weinstein 2006) resort to export data to overcome this difficulty: they use the fraction of the economy devoted to exports and industry-specific measures of export varieties as proxies for domestic R&D and domestic variety creation, respectively. The advantage of our data is that we directly observe the creation of new varieties by domestic firms. This enables us to link the creation of new domestic varieties to changes in imported inputs. In our framework, trade encourages the creation of new domestic varieties because the Indian trade liberalization significantly reduces tariffs on imported inputs. This leads to imports of new varieties of intermediate products, which in turn enables the creation of new domestic varieties. Hence, new imported varieties of intermediate products go hand in hand in our context with new varieties of domestic products.

Our study also relates to the literature on the effects of trade liberalization on total factor productivity. Several theoretical papers have emphasized the importance of intermediate inputs for productivity growth (e.g., Ethier [1979, 1982], Romer [1987, 1990], Markusen [1989], Grossman and Helpman [1991]). Empirically, most recent studies have found imports of intermediates, or declines in input tariffs, to be associated with sizable productivity gains (see Halpern, Koren, and Szeidl [2006], Amiti and Konings [2007], and Kasahara and Rodrigue [2008]), with Muendler (2004) being an exception. Our findings are in line with the majority of the empirical literature on this subject, as we too document positive effects of input trade liberalization and imported intermediates. However, in contrast to earlier work, our main focus is not on TFP but rather on the domestic product margin.5 As noted by Erdem

3. Klenow and Rodriguez-Clare (1997) and Arkolakis et al. (2008) find small variety gains following the Costa Rican trade liberalization, which they attribute to the fact that the new varieties were imported in small quantities, thus contributing little to welfare.
4. Brambilla (2006) is an exception.
5. Nevertheless, we also provide evidence that conventionally measured TFP increases with input trade liberalization in our context. See also Topalova and Khandelwal (2011).


and Tybout (2003) and De Loecker (2007), a potential problem with the interpretation of the TFP findings is that the use of revenue data to calculate TFP makes it impossible to identify the effects of trade liberalization on physical efficiency separately from its effects on firm markups, product quality, and, in the case of multiproduct firms, the range of products produced by the firm. In light of this argument, one can interpret our findings as speaking to the effects of trade reform on one particular component of TFP that is clearly identified in our data: the range of products manufactured by the firm.6

The remainder of the paper is organized as follows. In Section II we provide a brief overview of the data used in our analysis and of the Indian trade liberalization of the 1990s. We next discuss the reduced-form evidence: Section III.A provides descriptive evidence linking the expansion of the intermediate import extensive margin to tariff declines, and Section III.B provides reduced-form evidence that lower input tariffs caused firms to expand product scope, together with a series of robustness checks. Although these regressions establish our main empirical findings, they cannot by themselves identify the particular channels at work. In Section IV, we therefore impose more structure and develop a framework that allows us to interpret the reduced-form results and identify the relevant mechanisms. Sections IV.A and IV.B present the framework and our identification assumptions; Sections IV.C and IV.D discuss the empirical implementation of the structural approach and our results, respectively. Section V concludes.

II. DATA AND POLICY BACKGROUND

II.A. Data Description

The firm-level data used in the analysis are constructed from the Prowess database, which is collected by the Centre for Monitoring the Indian Economy (CMIE). Prowess has important

6. Exploring the relationship between the number of new products and TFP is beyond the scope of this analysis. The theoretical literature offers arguments for both a positive (Bernard, Redding, and Schott 2006) and a negative (Nocke and Yeaple 2007) relationship between these two variables. We note, however, that although the effect of new products on firm-level TFP may depend on the particular theoretical model one adopts, there is substantial empirical evidence that new product additions by domestic firms account for a sizable share of sales growth in several countries (Navarro 2008; Bernard, Redding, and Schott 2010; GKPT 2010a).


advantages over the Annual Survey of Industries (ASI), India's manufacturing census, for our study. First, unlike the repeated cross sections in the ASI, the Prowess data are a panel of firms, which enables us to track firm performance over time. Second, Prowess records detailed product-level information at the firm level and can track changes in firm scope over the sample. Finally, the data span the period of India's trade liberalization from 1989 to 2003. Prowess is therefore particularly well suited for understanding how firms adjust their product lines over time in response to increased access to intermediate inputs.7

Prowess enables us to track firms' product mix over time because Indian firms are required by the 1956 Companies Act to disclose product-level information on capacities, production, and sales in their annual reports. As discussed extensively in GKPT (2010a), several features of the database give us confidence in its quality. Product-level information is available for 85% of the manufacturing firms, which collectively account for more than 90% of Prowess's manufacturing output and exports. More important, product-level sales comprise 99% of (independently) reported manufacturing sales. We refer the reader to GKPT (2010a) for detailed summary statistics. Our database contains 2,927 manufacturing firms that report product-level information and spans the period from 1989 to 1997.

We complement the product-level data with disaggregated information on India's imports and tariffs. The tariff data, reported at the six-digit HS (HS6) level, are available from 1987 to 2001 and are obtained from Topalova and Khandelwal (2011). We use a concordance by Debroy and Santhanam (1993) to aggregate tariffs to the National Industrial Classification (NIC) level. Input tariffs, the key policy variable in this paper, are computed by running the industry-level tariffs through India's

7. Prowess accounts for 60% to 70% of the economic activity in the organized industrial sector and comprises 75% of corporate taxes and 95% of excise duties collected by the Government of India (CMIE). Prowess is not well suited for studying firm entry and exit because firms are under no legal obligation to report to the data-collecting agency. However, because Prowess contains only relatively large Indian firms, entry and exit are not necessarily an important margin for understanding the process of adjustment to increased openness within this subset of the manufacturing sector. Very few firms exit from our sample during this period (7%), and we observe no statistical difference in initial firm scope, output, TFP, and R&D activity between continuing and exiting firms. Using nationally representative data covering Indian plants, Sivadasan (2009) finds that reallocation across firms played a minor role in aggregate TFP gains following India's reforms. Our analysis below relies on within-firm variation in firm outcomes, rather than across-firm variation.


input–output matrix for 1993–1994. For each industry, we create an input tariff as the weighted average of tariffs on the inputs used in the production of that industry's final output. The weights are constructed as the input industry's share of the output industry's total output value. Formally, input tariffs are defined as τ_qt^inp = Σ_i α_iq τ_it, where α_iq is the value share of input i in industry q. For example, if a final good uses two intermediates with tariffs of 10% and 20% and value shares of 0.25 and 0.75, respectively, the input tariff for this good is 17.5%.8 The weights in the IO table are also used to construct the components of the input exact price index.

Official Indian import data are obtained from Tips Software Services. The data classify products at the eight-digit HS (HS8) level and record transactions for approximately 10,000 manufacturing products imported from 160 countries between 1987 and 2000. For the purposes of the descriptive analysis in Section III.A, we assign products according to their end use into two classifications: intermediate goods (basic, capital, intermediates) and final goods (consumer durables and nondurables).9 This classification is adopted from Nouroz's (2001) classification of India's IO matrix. The codes from the IO matrix are then matched to the four-digit HS (HS4) level following Nouroz (2001), which enables us to classify imports broadly into final and intermediate goods.
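As a concrete illustration of this construction, here is a minimal sketch of the weighted-average formula together with the worked example from the text; the input names and the two-input setting are hypothetical illustrations, not entries from India's IO table:

```python
# Minimal sketch of the input-tariff construction tau_q^inp = sum_i a_iq * tau_i,
# where a_iq is the value share of input i in output industry q (IO-table weight).
# Input names and value shares below are hypothetical.

def input_tariff(value_shares, tariffs):
    """Weighted average of input tariffs over the inputs of one industry."""
    assert abs(sum(value_shares.values()) - 1.0) < 1e-9, "shares should sum to 1"
    return sum(share * tariffs[i] for i, share in value_shares.items())

# Worked example from the text: tariffs of 10% and 20% with value shares
# 0.25 and 0.75 give an input tariff of 17.5%.
tau_inp = input_tariff({"input_a": 0.25, "input_b": 0.75},
                       {"input_a": 0.10, "input_b": 0.20})
print(round(tau_inp, 4))  # 0.175
```

In the paper the sum runs over all inputs of an industry, with nontradable inputs implicitly assigned a zero tariff (see footnote 8).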

II.B. India's Trade Liberalization

India's postindependence development strategy was one of national self-sufficiency and heavy government regulation of the economy. India's trade regime was among the most restrictive in Asia, with high nominal tariffs and nontariff barriers. The emphasis on import substitution resulted in relatively rapid industrialization, the creation of domestic heavy industry, and an economy that was highly diversified for its level of development (Kochhar et al. 2006).

8. The IO table includes weights for manufacturing and nontradables (e.g., labor, electricity, utilities), but tariffs, of course, exist only for manufacturing. Therefore, the calculation of input tariffs implicitly assumes a zero tariff for nontradables. All of our regressions rely on changes in tariffs over time and not on cross-sectional comparisons.
9. An input for one industry may be an output for another industry, though for many products the most common use justifies this distinction (e.g., jewelry and clothing are usually considered final goods, whereas steel is considered an intermediate product).


In August 1991, in the aftermath of a balance-of-payments crisis, India launched a dramatic liberalization of the economy as part of an IMF adjustment program. An important part of this reform was the abandonment of the extremely restrictive trade policies.10 Average tariffs fell from more than 80% in 1990 to 39% by 1996, and nontariff barriers (NTBs) were reduced from 87% in 1987 to 45% in 1994 (Topalova and Khandelwal 2011). There were some differences in the magnitude of tariff changes (and especially of NTB changes) between final and intermediate goods industries, with NTBs declining at a later stage for consumer goods. Overall, the structure of industrial protection changed as tariffs were brought to a more uniform level across sectors, reflecting the guidelines of the tariff reform spelled out in the IMF conditions (Chopra et al. 1995).

Several features of the trade reform are crucial to our study. First, the external crisis of 1991, which came as a surprise, opened the way for market-oriented reforms (Hasan, Mitra, and Ramaswamy 2007).11 The liberalization of trade policy was therefore unanticipated by firms in India. Moreover, reforms were passed quickly as a sort of "shock therapy," with little debate or analysis, to avoid the inevitable political opposition (Goyal 1996). Industries with the highest tariffs received the largest tariff cuts, implying that both the average and the standard deviation of tariffs fell across industries. Consequently, although there was significant variation in tariff changes across industries, Topalova and Khandelwal (2011) have shown that output and input tariff changes were uncorrelated with prereform firm and industry characteristics such as productivity, size, output growth during the 1980s, and capital intensity.12 The tariff liberalization does not appear to have been targeted toward specific industries and appears free of the usual political economy pressures.
India remained committed to further trade liberalization beyond the Eighth Plan (1992–1997). However, following an election in 1997, Topalova and Khandelwal (2011) find evidence that

10. The structural reforms of the early 1990s also included a stepped-up dismantling of the "license raj," the extensive system of licensing requirements for establishing and expanding capacity in the manufacturing sector, which had been the cornerstone of India's regulatory regime. See GKPT (2010a).
11. This crisis was in part triggered by the sudden increase in oil prices due to the Gulf War in 1990, the drop in remittances from Indian workers in the Middle East, and the political uncertainty surrounding the fall of a coalition government and the assassination of Rajiv Gandhi, which undermined investors' confidence.
12. This finding is consistent with those of Gang and Pandey (1996), who argue that political and economic factors cannot explain tariff levels at the time of the reform.


TABLE I
PREREFORM FIRM CHARACTERISTICS AND INPUT TARIFF CHANGES

                                 Products    Output      TFP        R&D
                                 (1)         (2)         (3)        (4)
1997–1992 input tariff change     0.180      −0.744      0.517     −1.531
                                 (0.308)    (0.745)    (0.507)    (1.341)
Observations                      713         712         614        667

Notes. The dependent variables in each column are the prereform 1989–1991 growth in firm-level outcomes. The variables are regressed on postreform (between 1992 and 1997) changes in input tariffs. Column (1) is the prereform firm-level change in (log) number of products. Columns (2)–(4) are the prereform changes in (log) firm output, TFP, and R&D expenditure. The number of observations varies across columns because the coverage of firm outcomes varies. Standard errors (in parentheses) are clustered at the industry level. * 10%, ** 5%, and *** 1% significance level.
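The specification behind Table I can be sketched as follows. This is an illustrative regression on simulated data; the variable names and the data-generating process are invented for the example (with a true slope of zero, mimicking the paper's finding), not taken from Prowess:

```python
# Illustrative sketch of the Table I placebo check: regress prereform
# (1989-1991) growth in a firm outcome on postreform (1992-1997) input-tariff
# changes. Data are simulated so that the true slope is zero by construction.
import numpy as np

rng = np.random.default_rng(0)
n = 713  # number of firms, as in column (1) of Table I
tariff_change = rng.normal(-0.3, 0.1, n)    # postreform input tariff changes
prereform_growth = rng.normal(0.0, 0.5, n)  # prereform growth, unrelated by design

# OLS of prereform growth on a constant and the tariff change
X = np.column_stack([np.ones(n), tariff_change])
beta, *_ = np.linalg.lstsq(X, prereform_growth, rcond=None)
slope = beta[1]
```

In the paper the analogous slope in column (1), 0.180 with a standard error of 0.308, is statistically indistinguishable from zero, supporting the claim that tariff cuts were not targeted at prereform trends.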

tariffs under the Ninth Plan (1997–2002) changed in ways that were correlated with firm and industry performance in the previous years. This indicates that, unlike the initial tariff changes following the reform, tariff changes after 1997 were subject to political influence. This concern leads us to restrict our analysis to the sample period 1989–1997.

We extend Topalova and Khandelwal's (2011) analysis by providing additional evidence that the input tariff changes from 1992 to 1997 were uncorrelated with prereform changes in the firm performance measures that we consider in this paper. Column (1) of Table I regresses the prereform (1989–1991) growth in firm scope on the subsequent input tariff changes between 1992 and 1997. If the tariff changes had been influenced by lobbying pressures, or targeted toward specific industries based on prereform performance, we would expect a statistically significant correlation. However, the correlation is statistically insignificant, suggesting that the government did not take prereform trends in firm scope into account when cutting tariffs. Columns (2)–(4) of Table I report the correlations of the input tariff changes with the prereform growth in firm output, TFP, and R&D. As before, there is no statistically significant correlation between changes in these firm outcomes and input tariff changes. This table provides additional assurance that the tariff liberalization was unanticipated by firms.

III. REDUCED-FORM RESULTS

This section presents some descriptive and reduced-form evidence on the relationship between tariff liberalization and product scope. Before we review the evidence, it is instructive to briefly


explain the reasons we expect tariffs to affect the development of new products in the domestic market. Section IV provides a more formal analysis of specific channels. Suppose that the production technology of a product q in the final goods sector of the economy has the general form

(1)  Y_q = f(A, L, S, {X_i}_{i=1}^I),

where Y denotes output, A is the product-specific productivity, and L and S are labor and nontradable inputs (e.g., electricity, water, warehousing). The input vectors X_i = {X_iD, X_iF} comprise domestic (X_iD) and imported (X_iF) inputs, respectively. This production technology is general and for now does not commit us to any particular functional form. Suppose further that production of q entails a fixed cost F_q. The firm chooses inputs optimally to maximize profits and will produce product q as long as variable profits are greater than or equal to the fixed cost.

Even without making any particular assumptions about market structure or functional forms, it is easy to see how a reduction in input tariffs would affect a firm's decision to introduce a new product. First, input tariff reductions lower the prices of existing imported inputs. The resulting increase in variable profits raises the likelihood that a firm can profitably manufacture previously unprofitable products. Second, liberalization may lead to the import of new varieties (e.g., see Klenow and Rodriguez-Clare [1997]), thereby expanding the set of intermediate inputs available to the firm.13 The significance of this second effect will depend on the particular form of the production technology, in particular on the substitutability between domestic and imported inputs, as well as on the substitutability between different varieties of imported intermediates. Suppose, for example, that at one extreme some of the intermediate inputs included in {X_iF}_{i=1}^I are essential, so that if one of these inputs falls to zero, product q cannot be produced.
Then the effect of trade liberalization on the introduction of new products is expected to be large, as it will relax technological constraints facing domestic firms. At the other extreme, if the new imported varieties were perfect substitutes for domestic, or previously imported, varieties, there would be no effect through the extensive

13. The fixed costs of production may also decline with input tariff liberalization, which would also increase the likelihood that firms manufacture new products.

IMPORTED INPUTS AND PRODUCT GROWTH


margin of imports. The importance of the extensive margin relative to the pure price effects of trade liberalization is therefore an empirical question. The reduced-form evidence we present in this section does not allow us to distinguish between these two channels. That is, even if we found that tariff liberalization led to an increase in domestically produced varieties, this increase could have resulted solely from a decline in the prices of existing imported inputs; the reform would then have operated only through price effects on existing imports. Nevertheless, the descriptive evidence we present here indicates an enormous contribution of the extensive margin to import growth, which suggests that the reform is unlikely to have operated solely through the price channel. In Section IV, we place additional structure on the firm’s production function in order to quantify the specific channels generating the reduced-form findings.
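The entry condition described above (introduce product q when variable profit covers the fixed cost F_q) can be sketched numerically. The linear profit function and all numbers below are illustrative assumptions, not estimates from the paper.

```python
# Hypothetical sketch of the product-introduction decision: a firm adds
# product q when variable profit, which falls with the input tariff,
# covers the fixed cost F_q. The profit function and all numbers are
# made up for illustration only.

def variable_profit(input_tariff, base_profit=100.0, sensitivity=100.0):
    """Variable profit from product q; lower input tariffs raise it."""
    return base_profit - sensitivity * input_tariff

def introduces_product(input_tariff, fixed_cost):
    return variable_profit(input_tariff) >= fixed_cost

F_q = 70.0                                # assumed fixed cost of producing q
before = introduces_product(0.80, F_q)    # hypothetical pre-reform tariff of 80%
after = introduces_product(0.24, F_q)     # hypothetical post-reform tariff of 24%
print(before, after)  # the tariff cut flips the decision: False True
```

Under these assumed numbers, the product is unprofitable at the high tariff and profitable after liberalization, which is the first channel discussed in the text.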

III.A. Descriptive Evidence: Trade Liberalization and Import Data

Before analyzing the relationship between input tariff declines and firm scope, we first examine India's import data. We show that imports increased following the trade liberalization and decompose the margins of aggregate import adjustment during the 1990s. Next, we examine the impact of trade liberalization on key trade variables in our empirical framework: total imports, imports of intermediates, unit values, and the number of imported varieties. The goal of this analysis is to show that the extensive product margin was an important component of import growth (especially for intermediates) and that trade liberalization affected the variables relevant in our framework in expected ways.

Import Decomposition. We begin by examining the growth of imports into India during the 1990s. Total import growth reflects the contribution of two margins: growth in HS6 products that existed in the previous period (intensive margin) and growth in products that did not exist in the previous period (extensive margin). Two striking features emerge from this decomposition, reported in Table II. The first observation is that India experienced a surge in overall imports; column (1) indicates that real imports


TABLE II
DECOMPOSITION OF IMPORT GROWTH, 1987–2000

                                       Extensive margin          Intensive margin
                           Import    Net   Product  Product    Net   Growing  Shrinking
                           growth          entry    exit
Product classification      (1)      (2)     (3)      (4)      (5)     (6)      (7)
All products                130       84      84       0        45      84      −39
Intermediate products       227      153     153       0        74     116      −42
Final products               90       33      33       0        57      86      −29

Notes. Real import growth decomposed into the extensive and intensive margins between 1987 and 2000. Imports are deflated by the wholesale price index. Column (1) reports overall import growth. Columns (2) and (5) report the net contribution of the extensive and intensive margins, respectively. The extensive margin is growth in imports due to new six-digit HS codes not imported in 1987. The intensive margin measures import growth within products that India had imported in 1987. The gross contributions are reported in columns (3) and (4) for the extensive margin, and columns (6) and (7) for the intensive margin. Rows (2) and (3) decompose import growth into the intermediate (basic, capital, and intermediates) and final (consumer durables and nondurables) products. The HS codes have been standardized to remove any issues due to changes in the Indian HS classification system.

(inclusive of tariffs) rose by 130% between 1987 and 2000.14 More interestingly, intermediate imports increased by 227%, whereas final goods imports increased by 90%. In other words, overall import growth was dominated by an increase in intermediate imported products.15

The second fact that emerges from Table II is that the relative contribution of the extensive margin to overall growth was substantially larger for intermediate imports. Intermediate products unavailable prior to the reform accounted for about 66% of overall intermediate import growth, whereas the intensive margin accounted for the remaining third. Moreover, the net contribution of the extensive margin is driven entirely by gross product entry. Very few products ceased to be imported over this period. In contrast, the relative importance of each margin in the final goods sectors is reversed; the extensive margin accounted for only 37% of the growth in imports, whereas the intensive margin contributed 63% of the growth. In GKPT (2010b), we provide evidence that the majority of the growth in the extensive margin is driven by imports from OECD countries, which presumably are relatively high-quality imports. Table II therefore suggests

14. Nominal imports, inclusive of tariffs, grew 516% over this period. Excluding tariffs, real and nominal import growth was 228% and 781%, respectively. The reason the growth numbers excluding tariffs are higher is that tariffs were very high prior to the reform.

15. As discussed above, we rely on the Nouroz (2001) classification of products into final and intermediate goods in this section only. The results in Section IV rely on input–output matrices to construct the input price indices.


TABLE IIIa
IMPORT VALUES AND TARIFFS

                    All products (1)   Intermediates (2)   Final goods (3)
Output tariff          −0.136∗∗∗          −0.117∗∗∗           −0.151∗∗
                       (0.035)            (0.044)             (0.076)
Year FEs                 Yes                Yes                 Yes
HS6 FEs                  Yes                Yes                 Yes
R2                       .82                .82                 .80
Observations           35,833             20,140              11,838

Notes. Coefficients on tariffs from product-level regressions of log (fob) import value on lagged output tariffs, HS6 product fixed effects, and year effects. An observation is an HS6–category–year. Column (1) pools across all sectors. Columns (2) and (3) report coefficients for the intermediate and final goods, respectively. Tariffs are at the HS6 level and regressions are run from 1987 to 1997. Standard errors (in parentheses) clustered at the HS6 level. ∗ 10%, ∗∗ 5%, and ∗∗∗ 1% significance level.

that imports increased substantially during our sample period and that this increase was largely driven by growth in the number of intermediate products that were imported.

Import Values, Prices, and Varieties. We next examine whether the expansion in trade noted in Table II was systematically related to the tariff reductions induced by India's trade liberalization. To summarize our findings: (a) lower tariffs led to an overall increase in import values, (b) lower tariffs resulted in lower unit values of existing product lines, and (c) lower tariffs led to an increase in the imports of new varieties. Moreover, this expansion of varieties in response to tariff declines was particularly pronounced for intermediate products.

We begin by examining the responsiveness of import values to tariffs by regressing the (log) import value (exclusive of tariffs) of an HS6 product on the HS6-level tariff,16 an HS6-level fixed effect, and year fixed effects, and restrict the analysis to 1987–1997 (see Section II.B). We should emphasize that we interpret these regressions strictly as reduced-form regressions. In particular, unlike Klenow and Rodriguez-Clare (1997), we are not assuming complete tariff pass-through on import prices, so the tariff coefficients in our regressions cannot be used to back out structural parameters.17 Table IIIa reports the coefficient estimates on tariffs for all sectors (column (1)), intermediate sectors

16. We lag the tariff measure one period in all specifications because the trade reform was implemented toward the end of 1991 (initiated in August 1991).

17. Incomplete pass-through can arise even with a CES utility function if the market structure is oligopolistic and/or nontraded local costs are present.


(column (2)), and final goods sectors (column (3)). In all cases, declines in tariffs are associated with higher import values. This analysis therefore confirms that the trade reform played an important role in the expansion of imports documented in Table II.

Traditional trade theory usually emphasizes the benefits from trade that occur through increased imports of existing products/varieties at lower prices. This channel also plays a role in our context. We explore the impact of tariff declines on the tariff-inclusive unit values of HS8–country varieties by regressing the variety's unit value on the tariff, a year fixed effect, and a variety (HS8–country) fixed effect. Note that by including the variety fixed effect, we implicitly investigate how tariffs affected the prices of continuing varieties. The results are reported in Table IIIb. Overall, lower tariffs are associated with declines in the unit values of existing varieties (column (1)). Columns (2) and (3) report the coefficients for the intermediate and final goods sectors, respectively. Although the coefficient is positive and significant for both sectors, the magnitude is larger for the intermediate sectors. This suggests that to the extent that imported inputs are used in the production process by domestic firms, the observed declines in unit values of existing products will lower the marginal cost of production for Indian firms.

The aggregate decomposition in Table II suggests that new imported varieties played an important role in the expansion of overall imports, particularly for the intermediate sectors. This is consistent with Romer (1994), who shows that if there is a fixed cost of importing a product, a country will import the product only if the profits from importing exceed the fixed costs. This means that high tariffs limit not only the quantity but also the range of goods imported.
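The Romer (1994) logic in the preceding paragraph can be made concrete with a small sketch: each candidate variety carries a fixed cost of importing and is imported only if the profit from importing, which falls with the tariff, covers that cost. The functional form and all numbers here are hypothetical.

```python
# Romer (1994)-style sketch: variety v is imported only when operating
# profit (assumed decreasing in the tariff) exceeds its fixed cost of
# importing f_v. Higher tariffs therefore shrink the *range* of imported
# varieties, not just quantities. All numbers are hypothetical.

fixed_costs = [5.0, 10.0, 20.0, 40.0, 80.0]   # heterogeneous f_v across varieties

def profit_from_importing(tariff, scale=100.0):
    # Assumed profit function, decreasing in the tariff.
    return scale * (1.0 - tariff)

def n_imported(tariff):
    """Count varieties whose importing profit covers their fixed cost."""
    pi = profit_from_importing(tariff)
    return sum(1 for f in fixed_costs if pi >= f)

print(n_imported(0.85), n_imported(0.30))  # high vs. liberalized tariff: 2 4
```

Lowering the tariff raises the profit threshold that each fixed cost is compared against, so more varieties clear it: the extensive margin of imports expands.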
To provide direct evidence of the effect of tariffs on the extensive margin of imports, we estimate the following specification:

(2)  ln(v_{ht}) = α_h + α_t + β τ_{ht} + ε_{ht},

where v_{ht} is the number of varieties within an HS6 product h at time t, τ_{ht} is the HS6 tariff, α_h is an HS6 fixed effect, and α_t is a year fixed effect. The results are reported in Table IIIc. To show that our results are not sensitive to the definition of a variety, the table reports equation (2) with different definitions of a "variety" as the dependent variable: HS6–country (Panel A), HS8 codes (Panel B), and HS8–country (Panel C). Because our


TABLE IIIb
IMPORT UNIT VALUES AND TARIFFS

                    All products (1)   Intermediate (2)   Final goods (3)
Output tariff          0.273∗∗∗           0.304∗∗∗           0.245∗∗∗
                       (0.050)            (0.077)            (0.079)
Year FEs                 Yes                Yes                Yes
HS8–country FEs          Yes                Yes                Yes
R2                       .88                .85                .93
Observations           49,109             32,619             11,070

Notes. Regressions of (log) tariff-inclusive unit values on tariffs, HS8–country fixed effects, and year fixed effects. Unit values are computed for each HS8–country pair and the tariffs are at the HS6 level. Column (1) uses all products and columns (2) and (3) report coefficients for the intermediates and final goods, respectively. Regressions are run from 1987 to 1997. Standard errors (in parentheses) clustered at the HS6 level. ∗ 10%, ∗∗ 5%, and ∗∗∗ 1% significance level.

TABLE IIIc
IMPORT EXTENSIVE MARGIN AND TARIFFS

                    All products (1)   Intermediate (2)   Final goods (3)

Panel A: Variety: HS6–country
Output tariff          −0.082∗∗∗          −0.106∗∗∗          −0.049∗
                       (0.012)            (0.014)            (0.026)
R2                       .85                .84                .84
Observations           35,833             20,140             11,838

Panel B: Variety: HS8
Output tariff          −0.015∗∗           −0.023∗∗∗          −0.005
                       (0.007)            (0.009)            (0.014)
R2                       .88                .90                .85
Observations           35,833             20,140             11,838

Panel C: Variety: HS8–country
Output tariff          −0.095∗∗∗          −0.129∗∗∗          −0.042
                       (0.013)            (0.016)            (0.028)
R2                       .87                .86                .86
Observations           35,833             20,140             11,838

Notes. All regressions also include year fixed effects and HS6 fixed effects. The table reports coefficients on tariffs from product-level regressions of (log) number of varieties on output tariffs, HS6 product fixed effects, and year effects. The regressions are run at the HS6–year level and each panel uses an alternative definition of a variety. A variety is defined as an HS6–country pair in Panel A, an HS8 code in Panel B, and an HS8–country pair in Panel C. Within each panel, column (1) pools across all sectors, whereas columns (2) and (3) report coefficients for the intermediate and final goods, respectively. As in the previous tables, tariffs are at the HS6 level and the regressions are run from 1987 to 1997. Standard errors (in parentheses) clustered at the HS6 level. ∗ 10%, ∗∗ 5%, and ∗∗∗ 1% significance level.


results are robust to alternative definitions of a variety, we focus our discussion on the results in Panel A.18 Column (1) estimates equation (2) for all products and shows that tariff declines were associated with an increased number of imported varieties. This result confirms the importance of the new-variety margin during a trade reform, as emphasized in Romer (1994). We rerun regression (2) for the intermediate and final products in columns (2) and (3) of each panel, respectively. Consistent with the evidence in Table II, the relationship between tariff declines and the extensive margin is particularly pronounced for intermediate products. The coefficient on tariffs for the intermediate products in column (2) is more than twice as large as the tariff coefficient for the final goods. Moreover, the results for intermediate products are robust to the alternative definitions of a variety in Panels B and C, whereas the results for final products are more sensitive to the definition of varieties.19

Our results are generally consistent with the evidence in Klenow and Rodriguez-Clare (1997) and Arkolakis et al. (2008), who also find that the range of imported varieties expands as a result of tariff declines in Costa Rica. However, there is one important difference. In India, Table II indicates that new imported intermediate varieties accounted for a sizable share of total imports. In contrast, in Costa Rica, newly imported varieties accounted for a small share of total imports and thus generated relatively small gains from trade (Arkolakis et al. 2008). Thus, the evidence so far suggests that gains from new import varieties, particularly from the intermediate sectors, may be potentially large in the context of the Indian trade liberalization.

In sum, a first look at the import data demonstrates that tariff declines led to increases in import values, reductions in the import prices of existing products, and expansion of new varieties.
These responses were particularly pronounced for imports of intermediate products. Thus, Indian firms may have benefited from the trade reform not only via cheaper imports of existing intermediate inputs but also by gaining access to new intermediate inputs. In the next section, we quantify the overall impact of input tariff reductions on firm-level outcomes.

18. We obtain qualitatively similar results using a Poisson regression, and when balancing the data to account for HS6 codes with no initial imports. Results are available upon request.

19. One explanation for the lack of robust findings for final goods is that NTBs still existed in these HS lines.
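The two-way fixed-effects regressions used throughout this section, such as equation (2), can be sketched with a minimal within estimator: on a balanced product-by-year panel, doubly demeaning both variables and regressing one residual on the other recovers the tariff semielasticity β. The panel below is simulated with made-up fixed effects and an assumed true β of −0.08; it is not the paper's data.

```python
# Sketch of a two-way fixed-effects regression like equation (2):
#   ln(v_ht) = alpha_h + alpha_t + beta * tau_ht
# On a balanced panel the within (doubly-demeaned) estimator recovers beta.
# All parameters below are invented for illustration.

H, T = 4, 5                              # hypothetical products and years
alpha_h = [0.5, 1.0, 1.5, 2.0]           # product fixed effects
alpha_t = [0.0, 0.1, 0.2, 0.3, 0.4]      # year fixed effects
beta = -0.08                             # assumed true semielasticity

# Tariffs need variation not absorbed by the fixed effects, hence the
# product-by-year interaction term in this simulated tariff schedule.
tau = [[80 - 5 * t - 2 * h - 1.5 * h * t for t in range(T)] for h in range(H)]
lnv = [[alpha_h[h] + alpha_t[t] + beta * tau[h][t] for t in range(T)]
       for h in range(H)]

def demean(z):
    """Doubly demean a balanced H x T panel (the within transformation)."""
    row = [sum(z[h]) / T for h in range(H)]
    col = [sum(z[h][t] for h in range(H)) / H for t in range(T)]
    grand = sum(row) / H
    return [[z[h][t] - row[h] - col[t] + grand for t in range(T)]
            for h in range(H)]

x, y = demean(tau), demean(lnv)
num = sum(x[h][t] * y[h][t] for h in range(H) for t in range(T))
den = sum(x[h][t] ** 2 for h in range(H) for t in range(T))
beta_hat = num / den
print(round(beta_hat, 6))  # recovers -0.08 because the data are noiseless
```

The demeaning step absorbs the product and year fixed effects exactly, so only tariff variation within products over time, relative to the common year movement, identifies β.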


III.B. Reduced-Form Evidence

Input Tariffs and Domestic Varieties. In this section, we relate input tariffs to the number of new products introduced in the market by domestic Indian firms. We then examine the relationship between input tariff reductions and other variables that are relevant in endogenous growth models, such as firm sales, total factor productivity, and R&D. To explore the impact of input tariffs on the extensive product margin, we estimate the following equation:

(3)  ln n^q_{it} = α_i + α_t + β τ^{inp}_{qt} + ε_{it},

where n^q_{it} is the number of products manufactured by firm i operating in industry q at time t, and τ^{inp}_{qt} is the input tariff that corresponds to the main industry in which firm i operates. This regression also includes firm fixed effects to control for time-invariant firm characteristics, and year fixed effects to capture unobserved aggregate shocks. The coefficient of interest is β, which captures the semielasticity of firm scope with respect to tariffs on intermediate inputs. Standard errors are clustered at the industry level. In GKPT (2010a), we found virtually no evidence that firms dropped product lines during this period; 53% of firms report product additions during the 1990s, and very few firms dropped any product lines. Thus, the net changes in firm scope during this period can effectively be interpreted as gross product additions.

Table IVa presents the main results in column (1). The coefficient on the input tariff is negative and statistically significant: declines in input tariffs are associated with an increase in the scope of production by domestic firms. The point estimate implies that a 10-percentage-point fall in tariffs results in a 3.2% expansion of a firm's product scope. During the period of our analysis, input tariffs declined on average by 24 percentage points, implying that within-firm product scope expanded 7.7%. Firms increased their product scope on average by 25% between 1989 and 1997, so our estimates imply that declines in input tariffs accounted for 31% of the observed expansion in firms' product scope. In GKPT (2010a), we find that the (net) product extensive margin accounted for 25% of India's manufacturing output growth during our sample. If India's trade liberalization impacted growth only through the increase in product scope, our estimates imply that the lower input tariffs contributed 7.8% (0.25 × 0.31)


TABLE IVa
PRODUCT SCOPE AND INPUT TARIFFS

                      (1)         (2)         (3)         (4)
Input tariff       −0.323∗∗    −0.310∗∗    −0.327∗∗    −0.281∗∗
                   (0.139)     (0.150)     (0.150)     (0.125)
Output tariff                  −0.013      −0.014      −0.010
                               (0.043)     (0.041)     (0.041)
Delicensed                                 −0.032      −0.026
                                           (0.023)     (0.021)
FDI liberalized                                         0.037
                                                       (0.024)
Year effects         Yes         Yes         Yes         Yes
Firm FEs             Yes         Yes         Yes         Yes
R2                   .90         .90         .90         .90
Observations       14,882      14,864      13,435      11,135

Notes. The dependent variable in each regression is (log) number of products manufactured by the firm. The delicensed variable is an indicator variable obtained from Aghion et al. (2008) that switches to one in the year that the industry becomes delicensed. The FDI variable is a continuous variable obtained from Topalova and Khandelwal (2011), with higher values indicating a more liberal FDI policy. As with the tariffs, the licensing and FDI policy variables are lagged. All regressions include firm and year fixed effects and are run from 1989 to 1997. Standard errors (in parentheses) clustered at the industry level. ∗ 10%, ∗∗ 5%, and ∗∗∗ 1% significance level.

to the overall manufacturing growth. This back-of-the-envelope calculation suggests a sizable effect of increased access to imported inputs on manufacturing output growth.

As discussed in Section II.B, the trade liberalization coincided with additional market reforms. In the remaining columns of Table IVa, we control for these additional policy variables. Column (2) introduces output tariffs to control for procompetitive effects associated with the tariff reduction. The coefficient on output tariffs is not statistically significant, whereas the input tariff coefficient hardly changes and remains negative and statistically significant. Although it may appear puzzling that the output tariff declines did not result in, for instance, a rationalization of firm scope, we refer the reader to GKPT (2010a) for explanations of this finding. In column (3), we include a dummy variable for industries delicensed (obtained from Aghion et al. [2008]) during our sample, and the input tariff coefficient remains robust. Finally, column (4) includes a measure of FDI liberalization taken from Topalova and Khandelwal (2011). The coefficient implies that firms in industries with FDI liberalization increased scope, but the coefficient is not statistically significant. The input tariff remains negative and significant, indicating that even after conditioning on other market reforms during this period, input tariff declines led to an expansion of firm product scope.
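The back-of-the-envelope calculation reported earlier in this section can be reproduced directly from the numbers in the text: the −0.323 coefficient from Table IVa, the 24-percentage-point average tariff decline, the 25% observed scope growth, and the 25% extensive-margin share of manufacturing output growth.

```python
# Back-of-the-envelope arithmetic from the text: how much of the observed
# product-scope expansion, and of overall manufacturing growth, do the
# input tariff declines account for? All inputs are figures stated in the
# text; this only chains them together.
coef = -0.323                  # Table IVa, column (1)
tariff_decline = 0.24          # average fall in input tariffs (24 pp)
scope_growth_observed = 0.25   # average firm scope growth, 1989-1997
extensive_margin_share = 0.25  # share of output growth from new products

scope_from_tariffs = -coef * tariff_decline                  # ~7.7%
share_explained = scope_from_tariffs / scope_growth_observed # ~31%
contribution = extensive_margin_share * share_explained      # ~7.8%
print(round(scope_from_tariffs, 3), round(share_explained, 2),
      round(contribution, 3))
```

Chaining the three figures reproduces the 7.7%, 31%, and 7.8% magnitudes quoted in the text (up to rounding).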


In Table IVb, we run a number of robustness checks to examine the sensitivity of our main results to alternative specifications of the main estimating equation, most importantly to controlling for preexisting sector and firm trends. Specifications (1) and (2) of Table IVb introduce NIC2–year and NIC3–year pair fixed effects, respectively, to control for preexisting, sector-specific trends. These controls capture several factors, such as sector-specific technological progress, that may be correlated with input tariff changes. Not only do the input tariff coefficients in each column remain statistically significant, the magnitude of the point estimates hardly changes. This is further evidence that input tariffs are not correlated with potentially omitted variables.

Specifications (3)–(5) control for industry-specific trends by interacting year fixed effects with the prereform (1989–1991) growth in the number of products by industry (3), output growth (4), and TFP growth (5). Specifications (6)–(10) control for a number of preexisting firm trends. Specification (6) reports the coefficient on input tariffs by augmenting equation (3) with year fixed effects interacted with a dummy that indicates whether the firm manufactured multiple products in its initial year. Specification (7) presents more flexible controls by interacting year fixed effects with the number of initial products manufactured by the firm. Specifications (8) and (9) place firms into output and TFP deciles, based on their initial year, and interact the deciles with year dummies. These specifications control for shocks to firms of similar sizes over time. Specification (10) interacts a dummy indicating whether the firm had initial-period positive R&D expenditures with year dummies. The input tariff coefficient is robust to including all these flexible industry and firm controls.
More importantly, the magnitude of the input tariff coefficient is remarkably stable across specifications, which provides further reassurance that the baseline results are not driven by omitted variable bias or preexisting trends. Specification (11) reports the input tariff coefficient from a Poisson specification with the number of products (rather than its log) as the dependent variable. Finally, specification (12) addresses potential concerns about entry and exit by rerunning equation (3) on a set of constant firms that appear in each year of the sample period from 1989 to 1997. As before, the input tariff coefficient remains stable and statistically significant.

The bottom panel of Table IVb reports robustness checks using long differences. The first check (specification (13)) regresses changes in firm scope on changes in input tariffs between 1989 and 1997. The standard error is now larger (p-value: 19%), but

TABLE IVb
REDUCED-FORM ROBUSTNESS CHECKS

Product scope regressed on input tariffs, firm and year fixed effects, plus controls
(1)  NIC2 × year FEs                                  −0.323∗    (0.191)
(2)  NIC3 × year FEs                                  −0.424∗∗   (0.204)
(3)  Prereform industry product growth × year FEs     −0.327∗∗   (0.145)
(4)  Prereform industry output growth × year FEs      −0.315∗∗   (0.141)
(5)  Prereform industry TFP growth × year FEs         −0.336∗∗   (0.141)
(6)  Initial MPF dummy × year FEs                     −0.393∗∗∗  (0.127)
(7)  Initial products × year FEs                      −0.408∗∗∗  (0.128)
(8)  Initial output decile × year FEs                 −0.311∗∗   (0.146)
(9)  Initial TFP decile × year FEs                    −0.321∗    (0.163)
(10) Initial R&D dummy × year FEs                     −0.312∗∗   (0.154)
(11) Poisson model                                    −0.286∗    (0.162)
(12) Constant firms                                   −0.403∗    (0.206)

Long-difference robustness checks
(13) Dependent variable: 1997–1989 change in log product scope
     Input tariffs, 1997–1989    −0.315  (0.237)
     Observations                 696
(14) Dependent variable: 1997–1991 change in log scope − 1991–1989 change in log scope
     Input tariffs, 1997–1991    −0.234
     Input tariffs, 1991–1989     0.251
     Observations                 662

Notes. The dependent variable in each regression is (log) number of products manufactured by the firm. Each row reports the coefficient on the input tariff with the additional controls beyond firm and year fixed effects. Specifications (1)–(12) have approximately 14,000 observations. Specifications (1) and (2) include two-digit NIC–year and three-digit NIC–year fixed effects, respectively. Specifications (3)–(5) include interactions of year fixed effects with prereform (1989–1991) industry product, output, and TFP growth, respectively. Specification (6) interacts the firms' initial multiple-product status with year fixed effects. Specification (7) interacts the firms' initial number of products with year fixed effects. Specifications (8) and (9) include interactions of year fixed effects with deciles of initial firm output and TFP. Specification (10) interacts firms' initial R&D status with year dummies. Specification (11) runs equation (3) using a Poisson regression with product scope (rather than log product scope) as the dependent variable. Specification (12) reruns equation (3) on a constant set of firms. The bottom panel reports long-difference specifications. Specification (13) runs a long-difference regression of the change in (log) scope on the change in input tariffs between 1989 and 1997. Specification (14) runs a long-difference regression that controls for firm-specific trends by regressing the change in log scope between 1991–1997 and 1989–1991 on the equivalent double difference in input tariffs. All regressions include firm and year fixed effects. Standard errors (in parentheses) clustered at the industry level. ∗ 10%, ∗∗ 5%, and ∗∗∗ 1% significance level.


TABLE IVc
INPUT TARIFFS AND OTHER FIRM OUTCOMES

                               Output (1)   TFP (2)    R&D (3)   R&D (4)
Input tariff                   −1.125∗∗    −0.454∗     −1.559    −0.077
                               (0.436)     (0.233)     (1.751)   (1.124)
Input tariff × large firm                                        −1.903∗
                                                                 (1.111)
Year effects                     Yes         Yes         Yes       Yes
Firm FEs                         Yes         Yes         Yes       Yes
R2                               .92         .81         .21       .21
Observations                   14,874      13,714      14,233    14,233

Notes. The dependent variable in column (1) is log output. The dependent variable in column (2) is firm TFP obtained from Topalova and Khandelwal (2011). The dependent variable in columns (3) and (4) is R&D expenditures. Column (4) includes an interaction with a dummy if the firm is above median size. All regressions include firm and year fixed effects and are run from 1989 to 1997. Standard errors (in parentheses) clustered at the industry level. ∗ 10%, ∗∗ 5%, and ∗∗∗ 1% significance level.

the coefficient is remarkably close to the annual regression results in Table IVa and the previous regressions in Table IVb. Specification (14) reports a double-difference specification by regressing (ln n^q_{i,97} − ln n^q_{i,91}) − (ln n^q_{i,91} − ln n^q_{i,89}) on (τ^{inp}_{q,97} − τ^{inp}_{q,91}) − (τ^{inp}_{q,91} − τ^{inp}_{q,89}). This double-difference specification removes firm-specific trends throughout the sample period. Although not statistically significant, the input tariff coefficient is again very close to the previous regressions. The finding that the long-difference specifications do not substantially attenuate the input tariff coefficient suggests that omitted variables are not biasing our main results in Table IVa.

Input Tariffs and Other Firm Outcomes. In Table IVc, we estimate variants of equation (3) that use other firm outcome variables as dependent variables. These variables (firm sales, productivity, and R&D) were chosen based on their relevance to the mechanisms emphasized in endogenous growth models. We find that declines in input tariffs were associated with increased firm sales (column (1)) and higher firm productivity (column (2)).20 This

20. We obtain TFP for our sample of firms from Topalova and Khandelwal (2011). We should emphasize that the interpretation of the TFP findings is difficult in our setting for reasons discussed in Erdem and Tybout (2003). The presence of multiproduct firms further complicates the interpretation of TFP obtained from the methodology of Olley and Pakes (1996; see De Loecker [2007]). We therefore view these results simply as a robustness check that allows us to compare our findings to those of the existing literature.


evidence is consistent with predictions of theoretical papers that have emphasized the importance of intermediate inputs for productivity growth (e.g., Ethier [1979, 1982], Romer [1987, 1990], Markusen [1989], Grossman and Helpman [1991], and Rivera-Batiz and Romer [1991]). It is also in line with recent empirical studies that find imports of intermediates or declines in input tariffs to be associated with sizable measured productivity gains (see Amiti and Konings [2007], Kasahara and Rodrigue [2008], Halpern, Koren, and Szeidl [2009], and Topalova and Khandelwal [2011]). Finally, we find that lower input tariffs are associated with increased R&D expenditures (column (3)), although the coefficient is imprecisely estimated.

The imprecision might in part reflect heterogeneity in the R&D response across firms. In column (4), we allow the effect of input tariffs to differ across firms that are above and below the median value of initial sales. The coefficient on the interaction between input tariffs and the size indicator is negative and statistically significant. Thus, lower input tariffs are associated with increased R&D participation, but only in initially larger firms. Overall, the above results provide further support for the effects emphasized in the endogenous growth literature.

Our earlier findings in GKPT (2010a) indicate no systematic relationship between India's liberalization of output tariffs and domestic product scope. In sharp contrast, here we find strong and robust evidence that the reductions of input tariffs were associated with an increase in the range of products manufactured by Indian firms. Moreover, we also observe that lower input tariffs are associated with an increase in firm output, total factor productivity, and R&D expenditure among (initially) larger firms.

IV. MECHANISMS

The results presented in the preceding section quantify the overall impact of access to imported inputs on firm scope and other outcomes. A limitation of this analysis is that it cannot uncover the mechanisms through which lower input tariffs influence product scope. In particular, it does not tell us whether the effects operate through lower prices for existing imported intermediate products or through increases in the variety of available inputs. This section explores and quantifies the relative importance of the price and variety channels.


IV.A. Theoretical Framework

We first provide the theoretical foundation for understanding the mechanisms through which imported inputs lead to growth in domestic varieties. This requires introducing functional form assumptions for the production function for product q in equation (1). The functional forms we choose are motivated by the nature of our data and, importantly, yield a specification that is easy to implement empirically. We start by specifying a Cobb–Douglas production function,

(4)    Y_q = A L^{\alpha_{Lq}} S^{\alpha_{Sq}} \prod_{i=1}^{I} X_i^{\alpha_{iq}},

where \alpha_{Lq} + \alpha_{Sq} + \sum_{i=1}^{I} \alpha_{iq} = 1. The production of the final good requires a fixed cost F_q. The minimum cost of manufacturing one unit of output is given by

(5)    C_q = A^{-1} \alpha_{Lq}^{-\alpha_{Lq}} \alpha_{Sq}^{-\alpha_{Sq}} P_L^{\alpha_{Lq}} P_S^{\alpha_{Sq}} \prod_{i=1}^{I} \left( P_i / \alpha_{iq} \right)^{\alpha_{iq}},

where P_k denotes the price index associated with input k = L, S, 1, \ldots, I. We assume that each input sector i has a domestic and an imported component (e.g., Indian and imported steel) that are combined according to the CES aggregator:^{21}

(6)    X_i = \left( X_{iD}^{(\gamma_i - 1)/\gamma_i} + X_{iF}^{(\gamma_i - 1)/\gamma_i} \right)^{\gamma_i/(\gamma_i - 1)},

where X_{iF} and X_{iD} denote the foreign and domestic inputs, and \gamma_i is the elasticity of substitution between the two input bundles. The overall price index for input industry i is a weighted average of the price indices for the domestic and foreign input bundles, \Pi_{iD} and \Pi_{iF}:

(7)    P_i = \Pi_{iD}^{\omega_{iD}} \Pi_{iF}^{\omega_{iF}}.
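As a sanity check on the cost function, the sketch below verifies numerically that the closed-form unit cost in equation (5) is attained by the Cobb–Douglas cost-minimizing input demands, treating the input price indices P_k as given; all parameter values are invented for illustration.

```python
import numpy as np

# Hypothetical parameters for one product q (illustrative only, not the
# paper's data). Shares must sum to one, as required below equation (4).
A = 2.0
alpha = {"L": 0.3, "S": 0.2, "X1": 0.35, "X2": 0.15}
P = {"L": 1.5, "S": 0.8, "X1": 2.0, "X2": 1.2}

# Unit cost from equation (5): C_q = A^{-1} * prod_k (P_k / alpha_k)^{alpha_k}.
C = (1 / A) * np.prod([(P[k] / alpha[k]) ** alpha[k] for k in alpha])

# Cost-minimizing input demands for one unit of output: x_k = alpha_k * C / P_k.
x = {k: alpha[k] * C / P[k] for k in alpha}

# Substituting the demands back into the production function (4) recovers Y = 1.
Y = A * np.prod([x[k] ** alpha[k] for k in alpha])
print(round(Y, 10))  # -> 1.0
```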

(7)

The weights \{\omega_{iD}, \omega_{iF}\} are the Sato–Vartia log-ideal weights,

(8)    \omega_{iB} = \frac{(s_{iB} - s'_{iB}) / (\ln s_{iB} - \ln s'_{iB})}{\sum_{B=D,F} (s_{iB} - s'_{iB}) / (\ln s_{iB} - \ln s'_{iB})} \quad \text{and} \quad s_{iB} = \frac{\Pi_{iB} X_{iB}}{\sum_{B=D,F} \Pi_{iB} X_{iB}}, \quad B = D, F,

21. Halpern, Koren, and Szeidl (2009) use a similar production structure.


QUARTERLY JOURNAL OF ECONOMICS

where a prime (′) denotes the value of a variable in the previous period. We assume that the imported input industry X_{iF} is itself a CES aggregator of imported varieties (e.g., Japanese and German steel):

(9)    X_{iF} = \left( \sum_{v \in I_{iF}} a_{iv}\, x_{iv}^{(\sigma_i - 1)/\sigma_i} \right)^{\sigma_i/(\sigma_i - 1)}, \quad \sigma_i > 1,

where \sigma_i is the industry-specific elasticity of substitution, a_{iv} is the quality parameter for variety v, and I_{iF} is the set of available foreign varieties in industry i. The minimum cost function associated with purchasing the basket of foreign varieties in equation (9) is given by

(10)    c(p_{iv}, a_{iv}, I_{iF}) = \left( \sum_{v \in I_{iF}} a_{iv}^{\sigma_i}\, p_{iv}^{1 - \sigma_i} \right)^{1/(1 - \sigma_i)}.

Following Feenstra (1994) and Broda and Weinstein (2006), the price index over a constant set of imported varieties is the conventional price index, P^{conv}_{iF}:

(11)    P^{conv}_{iF} = \frac{c(p_{iv}, a_{iv}, \bar{I}_{iF})}{c(p'_{iv}, a_{iv}, \bar{I}_{iF})} = \prod_{v \in \bar{I}_{iF}} \left( \frac{p_{iv}}{p'_{iv}} \right)^{w_{iv}},

where \bar{I}_{iF} = I_{iF} \cap I'_{iF} is the set of common imported varieties between the current and previous period. The weights in equation (11) are again the Sato–Vartia log-ideal weights:

(12)    w_{iv} = \frac{(s_{iv} - s'_{iv}) / (\ln s_{iv} - \ln s'_{iv})}{\sum_{v \in \bar{I}_{iF}} (s_{iv} - s'_{iv}) / (\ln s_{iv} - \ln s'_{iv})} \quad \text{and} \quad s_{iv} = \frac{p_{iv} x_{iv}}{\sum_{v \in \bar{I}_{iF}} p_{iv} x_{iv}}.
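To fix ideas, the conventional index in (11) with the Sato–Vartia weights in (12) can be computed directly from price and quantity data on the common varieties; the variety names, prices, and quantities below are invented for illustration.

```python
import math

# Invented prices and quantities for two varieties common to both periods
# (the primed variables in the text correspond to *_prev here).
p_prev = {"steel_DE": 10.0, "steel_JP": 8.0}
x_prev = {"steel_DE": 5.0, "steel_JP": 3.0}
p_curr = {"steel_DE": 9.0, "steel_JP": 8.5}
x_curr = {"steel_DE": 6.0, "steel_JP": 4.0}

def shares(p, x):
    # expenditure shares s_iv within the common set, as in equation (12)
    total = sum(p[v] * x[v] for v in p)
    return {v: p[v] * x[v] / total for v in p}

s_prev, s_curr = shares(p_prev, x_prev), shares(p_curr, x_curr)

def logmean(a, b):
    # logarithmic mean, the building block of the Sato-Vartia weights
    return a if math.isclose(a, b) else (a - b) / (math.log(a) - math.log(b))

lm = {v: logmean(s_curr[v], s_prev[v]) for v in s_curr}
w = {v: lm[v] / sum(lm.values()) for v in lm}          # equation (12)

# Conventional index (11): weighted geometric mean of the price ratios.
P_conv = math.prod((p_curr[v] / p_prev[v]) ** w[v] for v in w)
print(round(P_conv, 4))  # below one: common-variety prices fell on net
```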

Feenstra (1994) shows that the price index of these foreign varieties in equation (11) can be modified to account for the role of new imported varieties as long as there is some overlap in the varieties available between periods (\bar{I}_{iF} \neq \emptyset). The exact price index adjusted for new imported varieties is

(13)    \Pi_{iF} = P^{conv}_{iF}\, \Phi_{iF}.

Equation (13) states that the exact price index from purchasing the basket of imported varieties in equation (9) is the conventional


price index multiplied by a variety index, \Phi_{iF}, that captures the role of new and disappearing varieties:

(14)    \Phi_{iF} = \left( \frac{\lambda_{iF}}{\lambda'_{iF}} \right)^{1/(\sigma_i - 1)},

with

(15)    \lambda_{iF} = \frac{\sum_{v \in \bar{I}_{iF}} p_{iv} x_{iv}}{\sum_{v \in I_{iF}} p_{iv} x_{iv}} \quad \text{and} \quad \lambda'_{iF} = \frac{\sum_{v \in \bar{I}_{iF}} p'_{iv} x'_{iv}}{\sum_{v \in I'_{iF}} p'_{iv} x'_{iv}}.
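The variety correction in (14) and (15) requires only expenditure data and an elasticity estimate; a minimal sketch with invented expenditures and an assumed elasticity sigma = 5 follows.

```python
# Hypothetical expenditure data. Variety "steel_US" is new in the current
# period; "steel_UK" disappeared after the previous period.
exp_prev = {"steel_DE": 50.0, "steel_JP": 24.0, "steel_UK": 10.0}
exp_curr = {"steel_DE": 54.0, "steel_JP": 34.0, "steel_US": 22.0}

common = exp_prev.keys() & exp_curr.keys()

# Equation (15): expenditure on common varieties relative to all varieties
# available in each period.
lam_curr = sum(exp_curr[v] for v in common) / sum(exp_curr.values())
lam_prev = sum(exp_prev[v] for v in common) / sum(exp_prev.values())

sigma = 5.0  # hypothetical elasticity of substitution
# Equation (14): the variety index multiplying the conventional index in (13).
Phi = (lam_curr / lam_prev) ** (1.0 / (sigma - 1.0))
print(round(Phi, 4))  # -> 0.9762
```

Here Phi is below one because the expenditure captured by the new variety outweighs that of the disappearing one, so the exact index falls relative to the conventional index.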

As has been noted in the literature, \Phi_{iF} has an intuitive interpretation. Suppose there are no disappearing varieties (Table II), so that the denominator of (14) is one; then \Phi_{iF} measures the expenditure on the varieties that are available in both periods relative to the expenditure on the set of varieties available in the current period. The more important the new varieties are (i.e., the higher their expenditure share), the lower will be \Phi_{iF}, and the smaller the exact price index will be relative to the conventional index. Equation (14) also shows that \Phi_{iF} depends on the substitutability of the foreign varieties, captured by the elasticity of substitution \sigma_i. The more substitutable the varieties are, the lower is the term 1/(\sigma_i - 1) and the smaller is the difference between the exact and conventional price indices. In the limiting case of an infinite elasticity of substitution, the variety term becomes unity, indicating that changes in the available varieties have no effect on the price index. Substituting equation (13) into equation (7) indicates that the overall price index for input industry i is P_i = \Pi_{iD}^{\omega_{iD}} (P^{conv}_{iF} \Phi_{iF})^{\omega_{iF}}. Substituting this expression back into the minimum cost function in equation (5) and taking logs yields

(16)    \ln C_q = \left[ \sum_{i=1}^{I} \alpha_{iq} \omega_{iF} \ln P^{conv}_{iF} + \alpha_{Lq} \ln P_L + \alpha_{Sq} \ln P_S \right] + \left[ \sum_{i=1}^{I} \alpha_{iq} \omega_{iF} \ln \Phi_{iF} \right] + v,

where v \equiv \sum_{i=1}^{I} \alpha_{iq} \omega_{iD} \ln \Pi_{iD} + \ln\!\left( \alpha_{Lq}^{-\alpha_{Lq}} \alpha_{Sq}^{-\alpha_{Sq}} \prod_{i=1}^{I} \alpha_{iq}^{-\alpha_{iq}} \right) - \ln A. The expression in equation (16) illustrates the channels through which changes in the minimum cost of production affect the set of products manufactured by domestic firms. Equation (16) can be expressed in terms of observable data (the terms in the


first two brackets) and the unobservable component captured by v. The first bracket captures the overall conventional price index for imported inputs (P^{conv}_{iF}), labor (P_L), and nontradables (P_S):

(17)    \ln P^{inp,conv}_q \equiv \sum_{i=1}^{I} \alpha_{iq} \omega_{iF} \ln P^{conv}_{iF} + \alpha_{Lq} \ln P_L + \alpha_{Sq} \ln P_S.

The second bracket in equation (16) captures the importance of new imported inputs:

(18)    \ln \Phi^{inp}_{qF} \equiv \sum_{i=1}^{I} \alpha_{iq} \omega_{iF} \ln \Phi_{iF}.

As discussed above, the term in (18) adjusts the price index to reflect new (or disappearing) imported varieties available to firms; a lower value indicates larger gains from variety. We use this structure to guide our analysis of the mechanisms driving the link between imported input use and domestic product scope. As we discuss in Section III, lower tariffs on imported inputs will affect product innovation if they lower the variable cost of producing the product below the fixed cost of introducing a product. Our approach relates the change (between 1989 and 1997) in firms' product scope to the observable input price indices [equations (17) and (18)] in the firms' minimum cost function. Although equation (13) suggests that reductions in the price and variety indices should have an equal effect on product scope, there are additional factors, not explicitly modeled above, that may break this equality. For example, the technological complementarity between varieties within the firm or within product lines of a firm could be much stronger than that implied by the index we estimate at the level of aggregation used in our empirical analysis. In this case, new varieties would be more important to the firm than suggested by the variety index, which would make the introduction of new domestic products more responsive to the estimated variety index. We therefore allow the impact of the two input indices to vary in the following specification:

(19)    \Delta \ln n_f = \alpha + \beta_1 \ln P^{inp,conv}_q + \beta_2 \ln \Phi^{inp}_{qF} + \varepsilon_f.

The theoretical framework suggests that the coefficients on both input price components should be negative.^{22}

22. Note that (19) is a change-on-change regression because both P^{inp,conv}_q and \Phi^{inp}_{qF} are price indices.
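Given industry-level inputs, the two observable components in (17) and (18) are straightforward weighted sums of logs. The sketch below uses invented Cobb–Douglas shares, import weights, and index values for a single output product q.

```python
import math

# Hypothetical data for one output product q: Cobb-Douglas shares alpha,
# import weights omega_F, conventional import price indices, variety
# indices, and the wage and nontradable indices. All values are invented.
alpha = {"inp1": 0.25, "inp2": 0.25, "L": 0.30, "S": 0.20}
omega_F = {"inp1": 0.4, "inp2": 0.1}      # import share within each input industry
P_conv = {"inp1": 0.92, "inp2": 1.05}     # conventional import price indices
Phi = {"inp1": 0.88, "inp2": 0.97}        # variety indices
P_L, P_S = 1.30, 1.10                     # wage and nontradable price indices

# Equation (17): conventional component of the input price index.
ln_P_conv = (sum(alpha[i] * omega_F[i] * math.log(P_conv[i]) for i in omega_F)
             + alpha["L"] * math.log(P_L) + alpha["S"] * math.log(P_S))

# Equation (18): variety component of the input price index.
ln_Phi = sum(alpha[i] * omega_F[i] * math.log(Phi[i]) for i in omega_F)

# Here the conventional component is positive (wages rose) while the
# variety component is negative (new varieties lowered the exact index).
print(round(ln_P_conv, 4), round(ln_Phi, 4))
```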


IV.B. Identification Strategy

The error term in (19) captures unobservable factors that might influence changes in firm scope. These factors include the unobserved components in v as well as potential demand shocks. Specification (19) clearly illustrates the endogeneity issues that arise in estimating how imported inputs affect firm scope. For instance, suppose firms expand the set of domestic varieties in response to lower price and variety indices for imported inputs. The expansion of domestic varieties will affect the exact price index of domestic inputs (contained in the unobserved ε). This domestic variety expansion will further drive down (depending on parameters) the minimum cost of production, thereby increasing the likelihood of further domestic variety expansion. This feedback between the foreign and domestic price indices creates a correlation between the error term and the observable input price indices in (19); in the absence of an exogenous shock to the input indices, it is difficult to separate cause from effect. Alternatively, suppose that firms introduce new domestic varieties in response to demand shocks, and manufacturing these new varieties requires more imported inputs. The imported and domestic input indices will both adjust in response to the demand shock, further influencing the minimum cost of production. This reverse causality concern is precisely the econometric complication that has prevented previous research from identifying the impact of imported inputs on domestic variety growth. Equation (19) therefore highlights the importance of the policy change (i.e., the tariff liberalization) that we exploit. Section II established that the declines in India's tariffs were plausibly unanticipated and uncorrelated with firm and industry characteristics prior to the reform, so tariff changes are a natural instrument for identifying the channels. The exogenous reform allows us to establish a causal chain of the following events.
A sharp and unanticipated decline in tariffs led to lower prices of existing inputs (as seen in Table IIIb), and hence a lower conventional price index for imports. Tariff declines also resulted in increased imported varieties (Table IIIc); this finding is consistent with models with fixed costs of exporting where lower variable exporting costs increase variable profits and make it more likely that the returns to exporting exceed the fixed cost of entering the foreign market. Thus, changes in tariffs will be correlated with the input price and variety indices in equation (19), satisfying a necessary condition for the IV strategy.


Although the price index of domestic inputs changes as firms introduce new domestic varieties, this phenomenon is an indirect effect of the trade reform affecting imported inputs. This point reflects our main identification assumption: input tariffs affect the price index of domestic inputs and TFP only through their impact on imported input prices and varieties, which we capture through the right-hand-side variables in (19). That is, there is no direct effect of changes in input tariffs on the unobserved components of (19). Perhaps the most controversial component of this identification strategy is that the unobservable components in (19) include total factor productivity, because there is evidence that trade liberalizations lead to productivity improvements. However, most of this evidence pertains to productivity improvements that result from reallocation effects associated with output tariff liberalization (e.g., Pavcnik [2002] and Melitz [2003]); these findings are not pertinent to our analysis because we focus on changes within firms over time,23 which we argue are the result of input tariff liberalization. More relevant to our study are findings from recent empirical studies that report within-firm (measured) productivity improvements following trade reforms.24 The three prevailing arguments for why trade reforms affect within-firm measured productivity are (a) product rationalization, (b) improved access to imported inputs, and (c) elimination of x-inefficiencies through managerial restructuring. From Table IVa (see also GKPT [2010a]), there is no evidence that Indian firms dropped relatively unproductive product lines to improve measured TFP; this rules out point (a) in the Indian context. The input channel (argument (b)) is precisely the focus of our analysis: the trade reform affects productivity through the intermediate input channels in (19), which are captured by the observable part of this equation.
Elimination of x-inefficiency is a plausible argument, but it is important to note that our policy instruments are input tariffs. One would expect elimination of x-inefficiency to be driven by procompetitive output tariffs, rather than changes in input tariffs.25 Hence, our identification

23. Recall that Prowess contains relatively large firms for which entry and exit are not important margins of adjustment. Moreover, Sivadasan (2009) finds very little support for the reallocation mechanism in the context of India's market reforms.

24. See Amiti and Konings (2007), Halpern, Koren, and Szeidl (2009), Sivadasan (2009), and Topalova and Khandelwal (2011). For related theoretical work, see Bernard, Redding, and Schott (2006), Nocke and Yeaple (2006), and Eckel and Neary (2009).

25. We can control for this channel by controlling for changes in output tariffs in equation (19).


assumption is supported by existing theoretical and empirical research. Because equation (19) contains two endogenous variables, we need a second instrument to identify the coefficients. Our second instrument is motivated by the insights of Helpman, Melitz, and Rubinstein (2008) and is based on the idea that the potential for exporting to India following the liberalization may be higher for those countries with "stronger ties" or proximity to India.26 Tariff declines lower the conventional price index and the variety index, but because India sets a common tariff for all countries, tariff declines alone cannot explain which countries are more likely to start exporting products to India after the reform. In other words, tariffs alone are not sufficient as instruments for the increase in varieties, defined as export country/product pairs. Our second instrument, based on a common language between India and its potential trading partners in a given industry, attempts to explain, for a given decline in tariffs, which industries experience larger growth in the number of new countries that begin exporting to India (i.e., new imported varieties). The instrument is constructed as follows. We first identify the set of countries that speak English (English is an official language of India). These countries plausibly face a lower fixed cost of exporting to India (Helpman, Melitz, and Rubinstein 2008).27 Next, we identify the set of countries with a revealed comparative advantage (RCA) in each HS4 industry. Countries with an RCA are more likely to respond to the trade liberalization than countries without one. We identify countries' RCA using Comtrade data, which report countries' HS4-level exports to the world (excluding India) in 1989 (prior to India's reform). We then take the intersection of these two sets to identify, for each HS4 industry, the set of English-speaking and RCA countries.
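The country-selection step just described, together with a GDP-weighted aggregation into an industry-level proximity measure, can be sketched as follows. The country names, GDPs, export values, and the particular normalization of the GDP weights are all invented assumptions for illustration, not the paper's exact data or formula.

```python
# Hypothetical sketch of the proximity measure for one HS4 industry.
english = {"UK", "USA", "AUS", "SGP"}
gdp = {"UK": 1.2, "USA": 6.0, "AUS": 0.4, "SGP": 0.1, "DEU": 1.8, "JPN": 4.0}

# World (excluding India) exports in 1989: this industry and all industries.
exports_hs4 = {"UK": 2.0, "USA": 1.0, "AUS": 0.0, "SGP": 0.5, "DEU": 4.0, "JPN": 3.0}
exports_all = {"UK": 50.0, "USA": 100.0, "AUS": 10.0, "SGP": 5.0, "DEU": 60.0, "JPN": 80.0}

world_hs4 = sum(exports_hs4.values())
world_all = sum(exports_all.values())

# Revealed comparative advantage: a country's export share in this industry
# relative to the industry's share of world trade (RCA > 1).
def has_rca(c):
    return (exports_hs4[c] / exports_all[c]) / (world_hs4 / world_all) > 1.0

# Intersection: English-speaking countries with an RCA in this industry,
# aggregated with (illustratively normalized) GDP weights.
rca_english = {c for c in gdp if c in english and has_rca(c)}
proximity = sum(gdp[c] for c in rca_english) / sum(gdp.values())
print(sorted(rca_english), round(proximity, 3))  # -> ['SGP', 'UK'] 0.096
```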
Our proximity measure is a GDP-weighted average over these countries, with the idea that industries with a higher average are likely to experience a larger increase in the extensive margin following trade liberalization. The proximity measure therefore uses country-specific differences in fixed costs of exporting to India (captured by language),

26. We are grateful to a referee for suggesting the idea of this instrumentation strategy.

27. Other possible fixed cost proxies might include common religion, border, and colonial origin. Common religion and border are not very good fixed cost proxies in the Indian context, and a colonial origin dummy is collinear with the English language dummy. Distance is not a good proxy because it is more likely to capture variable costs than fixed costs.


combined with information on RCA, to construct a proxy for fixed costs that varies across industries. We then pass this variable through the input-output matrix and use the concordances described above to obtain an NIC-level measure of the language proximity of potential trading partners to India. This industry-specific variable therefore reflects the lower fixed cost of exporting intermediates to India. Finally, we interact this measure of proximity of potential trading partners in a given NIC code with the change in input tariffs. This interaction serves as our second instrument.

IV.C. Empirical Implementation

We use the formulas from the theoretical model to guide our empirical implementation. We begin by constructing the import indices, P^{conv}_{iF} and \Phi_{iF}. We calculate these indices from India's import data according to equations (11) and (14) at the HS4 level of aggregation. We chose this level of aggregation because, although the method proposed by Feenstra (1994) and Broda and Weinstein (2006) is designed to quantify the gains from new varieties within existing codes, it is unable to quantify the introduction of entirely new codes.28 We obtain estimates of the elasticity of substitution \sigma_i from Broda, Greenfield, and Weinstein (2006), who estimate India's elasticities of substitution at the HS3 level. Table V reports \Phi_{iF} computed between 1989 and 1997.29 Row 1 reports the mean of each component across all HS4 codes. The mean variety index between 1989 and 1997 is 0.899, implying that the exact import price index adjusted for variety growth fell about 10% faster than the conventional import price index. There is considerable heterogeneity in the impact of variety growth across HS4 price indices (for examples of HS4 codes, see GKPT [2010b]). Column (3) aggregates across all HS4 codes to compute the overall import price index. Accounting for the introduction of new varieties lowers the conventional import price index by

28. This is because the index decomposition relies on a set of overlapping varieties across time periods. Between 1989 and 1997, the Indian import data indicate that the number of imported HS6 codes increased from 2,958 to 4,115, which means that computing indices at the HS6 level would ignore this substantial increase in new products. We therefore chose to compute the indices at the HS4 level (although we are still unable to compute indices for the 220 HS4 codes, out of 1,145, that first appear between 1989 and 1997).

29. For HS4 codes that enter the import data after 1989, we assign a variety index of one. This is a conservative estimate of the gains from variety. For HS4 codes with missing price and variety indices in 1997 (for instance, because there is no overlap in varieties or units for prices are missing), we assign average values of coarser HS codes.


TABLE V
IMPORT VARIETY INDICES

                          Variety index
                        Mean     Median    Overall
                         (1)       (2)       (3)
All sectors             0.899     0.986     0.688
Intermediate sectors    0.881     0.954     0.624
Final sectors           0.904     1.000     0.850

Notes. Table reports the variety index computed at the HS4 level using elasticities of substitution from Broda, Greenfield, and Weinstein (2006) for India. The indices use HS6-country pairs as the definition of a variety. Columns (1) and (2) report the mean and median variety index across HS4 groups. Column (3) aggregates the HS4 indices to the overall economy level using equation (13) in Broda and Weinstein (2006). The first row reports the variety index over all imported sectors. The second and third rows compute the indices for the intermediate and final sectors. The numbers are computed using data between 1989 and 1997.

31% over nine years, or by 3.9% per year. This contribution of the extensive margin to the import price index is substantially larger than estimates obtained for Costa Rica (Arkolakis et al. 2008). It is also larger than the estimates for the United States, where aggregate import prices are on average 1.2% lower per year due to new imported varieties (Broda and Weinstein 2006). This large contribution of the extensive margin in India reaffirms the evidence from the raw data in Section III and reflects the restrictive nature of Indian trade policy prior to the 1991 liberalization. The second and third rows of Table V report the price indices computed separately for the HS4 codes classified as intermediate and final goods, respectively. Consistent with the import decompositions in Table II and the import variety regressions in Table IIIc, we observe that new variety growth was more substantial in the intermediate sectors than in the final goods sectors. The mean variety index for the intermediate sectors was 0.881 between 1989 and 1997, compared to 0.904 for the final goods sectors. The difference in the overall aggregate price index is even starker: variety growth deflated the conventional price index by 38% for intermediate sectors, compared to 15% for final sectors. This figure implies that the import price index for intermediates is on average 4.75% lower per year due to new varieties. Table V clearly highlights the gains from new imported varieties, particularly for intermediate inputs.30

Having established that variety growth has a substantial impact on the import price index, and that this effect is particularly pronounced in the intermediate goods sector, we next turn to quantifying the relative importance of the price and variety margins in the expansion of domestic product scope. We construct the two components of the price index from (17) and (18) that capture the price and variety channels. This requires several pieces of information in addition to the conventional import price and import variety indices discussed above. We calculate the nominal wage index (P_L) from the ASI by taking the ratio of the total industry wage bill between 1997 and 1989. We use the wholesale price index (WPI) for the nontradable price index (P_S).31 Finally, we need the two sets of weights: the Cobb–Douglas shares, \alpha_{iq}, and the share of foreign imports, \omega_{iF}. India's IO matrix provides estimates of \alpha_{iq}. We obtain \omega_{iF} using equation (8) from information on the share of imports in total domestic consumption for each sector in India's IO matrix. We collapse the import indices to the level of aggregation of India's IO matrix and combine them with the additional variables described above to construct the indices in (17) and (18). We then map these indices to the industry-level NIC codes associated with the main product a firm produced prior to the reform.

IV.D. Results

We begin by reporting the OLS estimates of equation (19) in Table VIa. Table VIa offers a preliminary lens into the mechanisms driving the reduced-form results in Section III. Columns (1) and (2) estimate equation (19) with the conventional input price index and the variety index separately. The negative coefficient on the conventional input price index in column (1) suggests that lower prices of existing inputs are associated with higher product scope,

30. As discussed earlier, Prowess does not contain reliable product-level information on imported inputs used by a firm. We therefore cannot create a reliable measure of the actual number of imported inputs used by a firm. Of course, because the import data used in our analysis are a census of all imported varieties, it is clear from the previous tables that firm access to new inputs expanded during this period. The ASI collects the number of inputs (imported and total) at the plant level, but this information is only available after the reform period, so we cannot look at changes in firm input use. Moreover, because the ASI is a cross-sectional database, we cannot directly observe changes in inputs at a level more disaggregated than the industry. Nevertheless, we conducted one robustness check using the 1999 ASI data and regressed the industry-level average number of inputs per plant on changes in input tariffs between 1989 and 1997. We observe a statistically significant and negative relationship for both imported and total inputs. This is consistent with the trade reform enabling firms to expand their range of inputs.

31. A separate price index for electricity is available, so we separate the nontradable inputs into electricity and other inputs (e.g., warehousing, communication, water, gas) for which we do not have detailed price indices (and assign the WPI).


TABLE VIa
PRODUCT SCOPE AND CHANNELS: OLS

                                 (1)         (2)         (3)
Conventional price index       −0.156                  −0.124
                               (0.121)                 (0.113)
Variety index                              −5.97∗∗     −5.70∗∗
                                           (2.55)      (2.41)
R²                              .002        .009        .010
Observations                    696         696         696

Notes. OLS regressions of firm scope on the imported input price indices. Column (1) includes the conventional index, column (2) includes the variety index, and column (3) includes both indices. The regressions are run for the years 1989 and 1997. Standard errors (in parentheses) are clustered at the industry level. ∗ 10%, ∗∗ 5%, and ∗∗∗ 1% significance level.

although the coefficient is not statistically significant. The coefficient on the input variety index in column (2) is negative and statistically significant, suggesting that an increase in input variety (captured by a lower index number) is associated with an expansion of firm scope. This finding continues to hold in column (3), when we estimate equation (19) with both indices as independent variables. Thus, the OLS results indicate that an increase in input variety is correlated with firm scope expansion. The theoretical section showed that the import indices may be correlated with the error term of the estimating equation. This would bias the OLS coefficients in Table VIa. We therefore turn to the IV results next. Columns (1) and (2) of Table VIb report the coefficients from the first-stage regressions. Column (1) reports first-stage results with the conventional input price index as the dependent variable. As expected, a decline in tariffs leads to a decline in the conventional input price index. The coefficient on the interaction of the input tariff with language proximity to India is not significant, indicating no differential decline in the conventional input price index across sectors that vary in their language proximity to India. Column (2) reports first-stage results for the input variety index. Lower input tariffs result in more imported input varieties (i.e., a decline in the variety component), particularly in industries where countries with an RCA share a language with India (i.e., a higher value of the proximity variable). This is consistent with the interpretation that industries with closer language proximity to India experience a larger increase in varieties for a given decline in input tariffs.
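The endogeneity problem discussed above, and the way an exogenous instrument addresses it, can be illustrated with a stylized simulation. The data-generating process and all numbers below are invented, and the setup is deliberately simplified to a single endogenous regressor.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000
beta = -14.0                                  # "true" variety-index effect (invented)

z = rng.normal(size=n)                        # exogenous tariff-change instrument
d = rng.normal(size=n)                        # unobserved demand shock
x = 0.5 * z - 0.3 * d + rng.normal(size=n)    # variety index: moved by both
y = beta * x + 5.0 * d + rng.normal(size=n)   # scope change: d also enters the error

def slope(a, b):
    # univariate regression coefficient of b on a
    return np.cov(a, b, bias=True)[0, 1] / np.var(a)

b_ols = slope(x, y)        # biased: x is correlated with the omitted shock d
x_hat = slope(z, x) * z    # first stage: project x on the instrument
b_iv = slope(x_hat, y)     # second stage: consistent under the exclusion restriction

print(b_ols, b_iv)         # OLS drifts away from -14; IV stays close to it
```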

TABLE VIb
PRODUCT SCOPE AND CHANNELS: INSTRUMENTAL VARIABLES

1st stage regressions:

                                          {Conv. price index}q   {Variety index}q
                                                  (1)                  (2)
{Input tariff}q,97−89                           1.340∗∗∗             −0.007
                                                (0.476)              (0.020)
{Input tariff}q,97−89 × {proximity}q           −0.707                 0.121∗
                                                (1.618)              (0.065)
Observations                                      696                  696

2nd stage regressions (dependent variable: {Firm scope}f):

                                  (3)        (4)         (5)            (6)             (7)
{Conventional price index}q     −0.240                  −0.033         −0.047          −0.493∗
                                (0.211)                 (0.299)        (0.245)         (0.271)
{Variety index}q                           −14.24∗     −13.40         −14.88∗         −14.80∗∗∗
                                           (7.56)      (10.46)        (8.97)          (5.00)
F-test 1st stage instruments   [5.3,      [3.1,       [5.3, p = .01;  [33.4, p = .00;  [21.4, p = .00;
                               p = .01]   p = .05]     3.1, p = .05]   5.5, p = .01]    12.8, p = .00]
Observations                     696        696          696            696             696

Notes. Table reports IV regressions of firm scope on the imported input price indices. The instruments are changes in input tariffs and changes in input tariffs interacted with the proximity measure described in the text. Columns (1) and (2) report the first-stage regressions for the conventional price and variety indices, respectively. Columns (3)–(7) report the second-stage regressions; column (6) uses the 1998/1999 input-output matrix, and column (7) includes third-order polynomials of the instruments and is estimated using a continuously updated GMM estimator. The regressions are run for the years 1989 and 1997. Standard errors (in parentheses) are clustered at the industry level. ∗ 10%, ∗∗ 5%, and ∗∗∗ 1% significance level.



The remaining columns of Table VIb report IV estimates of equation (19). The first-stage F-statistics on the excluded instruments are reported at the bottom of each column. Column (3) reports the results using only the conventional input price index; this is the IV version of column (1) in Table VIa. As with OLS, the result is not significant, but the sign of the coefficient suggests that lower prices of existing inputs are associated with increases in firm scope. Column (4) presents the IV result for the input variety index. The coefficient on the variety index is negative and significant. Column (5) presents the results when equation (19) is estimated with IV and both indices are included; this equation is just identified with two instruments and two endogenous regressors. The coefficient on the variety index is not statistically significant at conventional levels (p-value 20%), which is not surprising given the well-known problems associated with the efficiency of IV estimators. However, the point estimates are very close to the IV results in column (4), which do not condition on the conventional input price index. The results in columns (4) and (5) suggest that more imported variety (i.e., a lower variety index) is associated with an expansion in product scope. Note that the IV estimates of the variety effect in columns (4) and (5) are lower (higher in magnitude) than the OLS estimates. A priori, it is difficult to sign the bias of the OLS estimates. As noted earlier, the error term in equation (19) contains the (unobserved) price index of domestic inputs (\Pi_{iD}) as well as unobserved demand shocks. If the correlation between the error term and \Phi^{inp}_{qF} is positive, the OLS estimates are biased downward (i.e., too negative). If the correlation is negative, the OLS estimates are biased upward (i.e., not negative enough). In order to understand why the bias is ambiguous, suppose there is an increase in (unobserved) demand. The demand shock will likely raise the demand for foreign inputs, resulting in a lower \Phi^{inp}_{qF}. The shock may also induce domestic input suppliers to manufacture new varieties, which will put downward pressure on \Pi_{iD} because more varieties lower the price index. This effect suggests a positive correlation between \Phi^{inp}_{qF} and the error term in (19). However, the demand shock will also induce an increase in the prices of existing domestic inputs, causing \Pi_{iD} to increase. If the price increase of existing domestic inputs outweighs the downward pressure on \Pi_{iD} due to new varieties, there will be an overall negative correlation between \Phi^{inp}_{qF} and the error term in (19). Thus, the potential bias of


the OLS estimates is, a priori, ambiguous. The IV coefficients on the variety index are lower than the OLS estimates, suggesting that the negative correlation dominates. We estimate additional variants of equation (19). Our analysis so far has relied on the 1993–1994 IO table for India. This IO table likely reflects India's production technology across industries at the start of the reform period. At that time, industries may not have relied heavily on inputs of machinery that were subject to high tariffs. Such an IO matrix may thus provide a noisier measure of the potential to benefit from trade in inputs. As a robustness check, we reconstructed the conventional and variety input price indices using India's 1998–1999 IO matrix. In column (6) of Table VIb, we report IV results based on these measures. We find that the point estimates are similar to those in column (5) but are, not surprisingly, more precisely estimated. The response of the extensive margin to tariffs is likely to be nonlinear, because India's strongest or weakest trading partners are less likely to be affected by changes in tariffs. In column (7) we therefore use a third-order polynomial expansion of input tariffs and language proximity as instruments for the conventional and variety input price indices and estimate equation (19) with a continuously updated GMM estimator. This estimator is more efficient than the two-stage least squares estimator and is also less prone to potential problems with weak instruments when there are multiple instruments (see Stock, Wright, and Yogo [2002] and Baum, Schaffer, and Stillman [2007]). We again find that lower input variety is associated with expanded product scope, the magnitudes of the coefficients are similar to those in previous columns, and the first-stage F-statistics improve. Finally, we reestimate equation (19), controlling for changes in output tariffs.
This specification directly controls for the possibility that trade liberalization affected TFP of domestic firms through declines in output tariffs. These regressions (available upon request) yield coefficients very similar to those reported in columns (4)–(7), suggesting that our assumption that input tariffs affect firms’ product scope only through the conventional input price and variety index is valid. Overall, the analysis suggests that the increase in imported variety enabled Indian firms to expand their product scope. The magnitudes of the coefficients on the imported variety index in columns (3)–(7) are also economically significant and consistent

IMPORTED INPUTS AND PRODUCT GROWTH


with the reduced-form results in Section III. Consider the coefficient in column (5). The coefficient implies that a 1% decline in the variety index leads to a 13.4% increase in firm scope. This elasticity is large, but it is important to note that the input variety index has been weighted by import shares (see equation (18)), so the import-share-weighted variety indices are orders of magnitude smaller than the numbers in Table V. During the period of our analysis, input tariffs declined on average by 24 percentage points, and from column (2), the decline in input tariffs led to a 0.25% decline in the input variety index on average. The IV point estimate therefore implies a 3.4% increase in scope for the average firm due to the increased availability of imported varieties.

Although our theoretical framework suggests that reductions in the price and variety components should have an equal effect on product scope (see equation (13)), our empirical results suggest a much higher elasticity with respect to the variety index. As we noted earlier, one possible explanation for this finding is that the technological complementarity within product lines in a firm is much stronger than the complementarity we capture through the variety index, which we construct at a more aggregate level. Suppose, for example, that the production of a particular product required the use of particular inputs in fixed proportions. The firm might adjust product lines in response to tariff changes, but there would be no substitution of inputs within product lines. This would make the effect of new varieties very strong: the availability of new inputs would enable the firm to produce entirely new products. Although we cannot provide direct evidence on this hypothesis given the level of aggregation in the available data, the large elasticity of product scope with respect to the variety index is highly suggestive.
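The back-of-the-envelope calculation in the paragraph above can be reproduced directly (the numbers are those quoted in the text):

```python
# Numbers quoted in the text: the IV elasticity of firm scope with
# respect to the variety index (column (5)), and the average decline in
# the import-share-weighted variety index implied by the tariff cuts
# (column (2)).
elasticity = 13.4              # % scope increase per 1% variety-index decline
variety_decline_pct = 0.25     # average % decline in the variety index

scope_increase_pct = elasticity * variety_decline_pct
print(round(scope_increase_pct, 1))  # 3.4, i.e., a 3.4% scope increase
```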
To conclude, the results in Tables VIa and VIb provide insight into the mechanisms generating the reduced-form results we presented earlier.32 Given that new product additions accounted for about 25% of growth in Indian manufacturing output during our sample, the results suggest that the availability of new imported intermediates played an important role in the growth of Indian manufacturing in the 1990s.

32. We estimated equation (19) for output, TFP, and R&D activity. The analysis confirms that access to more varieties resulted in higher firm output and R&D activity, but lower TFP (although these results are not statistically significant). The counterintuitive sign on TFP could in part reflect the difficulties associated with measuring TFP noted in the introduction.


QUARTERLY JOURNAL OF ECONOMICS

V. CONCLUSIONS

After decades of import-substitution policies, Indian firms responded to the 1991 trade liberalization by increasing their imports of inputs. Importantly, two-thirds of the intermediate import growth occurred in products that had not been imported prior to the reforms. During the same period, India also experienced an explosion in the number of products manufactured by Indian firms. In this paper, we use a unique firm-level database that spans the period of India’s trade liberalization to demonstrate that the expansion in domestic product scope can be explained in large part by firms’ increased access to new imported intermediate varieties. Hence, our analysis points to an additional benefit of trade liberalization: lower tariffs increase the availability of new imported inputs, which in turn enable the production of new outputs. Local consumers gain from an increase in domestic variety (on top of the increased number of imported consumer goods).

Our approach relies on detailed product-level information on all Indian imports to measure the input price and variety changes. Because similar data are readily available for many countries, our approach can in principle be used by other researchers interested in the consequences of trade and imported inputs. Additionally, disaggregated data on the use of imported intermediates at the firm level may be available for some countries. However, we believe that relying on aggregate, product-specific import data rather than firm-level data on input use offers a few advantages. First, because the data on product imports are a census, we can say with confidence that the varieties classified as “new” were not available anywhere in India prior to the reform: their total imports were zero.
Second, firms frequently access imported inputs through intermediary channels rather than direct imports; hence, it is possible that a firm that reports zero imported intermediates is in fact using imported intermediates that have been purchased through a domestic intermediary. This implies that there are advantages to using products or sectors as the appropriate units of aggregation. Third, the level of aggregation we use in this study allows us to take advantage of the tariff reforms in our identification strategy. Nevertheless, firm-level data with detailed information on imported inputs by firm may strengthen our understanding of the mechanisms that we highlight. More detailed data would enable us, for example, to study the determinants and consequences


of differential adoption of imported inputs by Indian firms, although such a study would need to address the endogeneity of this differential adoption of imported inputs by firms—the trade policy changes we exploit as a source of identification do not vary by firm.

Our findings relate to growth models that highlight the importance of access to new imported inputs for economic growth and to recent cross-country evidence that lower tariffs on intermediate inputs are associated with income growth (Estevadeordal and Taylor 2008). Our firm-level analysis offers insights into the microeconomic mechanisms underlying growth by focusing on one particular channel, access to imported intermediates, and one particular margin of firm adjustment, product scope. Although we do not concentrate on aggregate growth, the fact that the creation of new domestic products accounted for nearly 25% of total Indian manufacturing output growth during our sample period suggests that the implications of access to new imported intermediate products for growth are potentially important. In future work we plan to further explore the contribution of these new products to TFP by exploiting product-level information on prices and sales available in our data. This will allow us ultimately to provide a direct estimate of the dynamic gains from trade.

REFERENCES

Aghion, Philippe, Robin Burgess, Stephen Redding, and Fabrizio Zilibotti, “The Unequal Effects of Liberalization: Evidence from Dismantling the License Raj in India,” American Economic Review, 98 (2008), 1397–1412.
Amiti, Mary, and Jozef Konings, “Trade Liberalization, Intermediate Inputs and Productivity,” American Economic Review, 97 (2007), 1611–1638.
Arkolakis, Costas, Svetlana Demidova, Peter J. Klenow, and Andres Rodriguez-Clare, “Endogenous Variety and the Gains from Trade,” American Economic Review Papers and Proceedings, 98 (2008), 444–450.
Baum, Christopher F., Mark E. Schaffer, and Steven Stillman, “Enhanced Routines for Instrumental Variables/GMM Estimation and Testing,” Stata Journal, 7 (2007), 465–506.
Bernard, Andrew B., Stephen J. Redding, and Peter K. Schott, “Multi-product Firms and Trade Liberalization,” NBER Working Paper No. w12782, 2006.
——, “Multi-product Firms and Product Switching,” American Economic Review, 100 (2010), 70–97.
Brambilla, Irene, “Multinationals, Technology, and the Introduction of Varieties of Goods,” NBER Working Paper No. w12217, 2006.
Broda, Christian, Joshua Greenfield, and David E. Weinstein, “From Groundnuts to Globalization: A Structural Estimate of Trade and Growth,” NBER Working Paper No. w12512, 2006.
Broda, Christian, and David E. Weinstein, “Globalization and the Gains from Variety,” Quarterly Journal of Economics, 121 (2006), 541–585.
Chopra, Ajai, Charles Collyns, Richard Hemming, Karen Elizabeth Parker, Woosik Chu, and Oliver Fratzscher, “India: Economic Reform and Growth,” IMF Occasional Paper No. 134, 1995.


Debroy, Bibek, and A. T. Santhanam, “Matching Trade Codes with Industrial Codes,” Foreign Trade Bulletin (New Delhi: Indian Institute of Foreign Trade, 1993).
De Loecker, Jan, “Product Differentiation, Multi-product Firms and Estimating the Impact of Trade Liberalization on Productivity,” Princeton University, Mimeo, 2007.
Eckel, Carsten, and J. Peter Neary, “Multi-product Firms and Flexible Manufacturing in the Global Economy,” Review of Economic Studies, forthcoming, 2009.
Erdem, Erkan, and James R. Tybout, “Trade Policy and Industrial Sector Responses: Using Evolutionary Models to Interpret the Evidence,” Brookings Trade Forum (2003), 1–43.
Estevadeordal, Antoni, and Alan M. Taylor, “Is the Washington Consensus Dead? Growth, Openness, and the Great Liberalization, 1970s–2000s,” NBER Working Paper No. w14246, 2008.
Ethier, Wilfred J., “Internationally Decreasing Costs and World Trade,” Journal of International Economics, 9 (1979), 1–24.
——, “National and International Returns to Scale in the Modern Theory of International Trade,” American Economic Review, 72 (1982), 389–405.
Feenstra, Robert C., “New Product Varieties and the Measurement of International Prices,” American Economic Review, 84 (1994), 157–177.
Feenstra, Robert C., Dorsati Madani, Tzu-Han Yang, and Chi-Yuan Liang, “Testing Endogenous Growth in South Korea and Taiwan,” Journal of Development Economics, 60 (1999), 317–341.
Feenstra, Robert C., James R. Markusen, and William Zeile, “Accounting for Growth with New Inputs: Theory and Evidence,” American Economic Review, 82 (1992), 415–421.
Gang, Ira N., and Mihir Pandey, “Trade Protection in India: Economics vs. Politics?” University of Maryland Working Paper No. 27, December 1996.
Goldberg, Pinelopi K., Amit K. Khandelwal, Nina Pavcnik, and Petia Topalova, “Multi-product Firms and Product Turnover in the Developing World: Evidence from India,” Review of Economics and Statistics, forthcoming, 2010a.
——, “Trade Liberalization and New Imported Inputs,” American Economic Review, 99 (2010b), 494–500.
Goyal, S. K., “Political Economy of India’s Economic Reforms,” Institute for Studies in Industrial Development (ISID) Working Paper, 1996.
Grossman, Gene M., and Elhanan Helpman, Innovation and Growth in the Global Economy (Cambridge, MA: MIT Press, 1991).
Halpern, Lazlo, Miklos Koren, and Adam Szeidl, “Imports and Productivity,” Federal Reserve Bank of New York, Mimeo, 2009.
Hasan, Rana, Devashish Mitra, and K. V. Ramaswamy, “Trade Reforms, Labor Regulations and Labor Market Elasticities: Empirical Evidence from India,” Review of Economics and Statistics, 89 (2007), 466–481.
Helpman, Elhanan, Marc J. Melitz, and Yona Rubinstein, “Estimating Trade Flows: Trading Partners and Trading Volumes,” Quarterly Journal of Economics, 123 (2008), 441–487.
Kasahara, Hiroyuki, and Joel Rodrigue, “Does the Use of Imported Intermediates Increase Productivity?” Journal of Development Economics, 87 (2008), 106–118.
Klenow, Peter J., and Andres Rodriguez-Clare, “Quantifying Variety Gains from Trade Liberalization,” Penn State University, Mimeo, 1997.
Kochhar, Kalpana, Utsav Kumar, Raghuram Rajan, Arvind Subramanian, and Ioannis Tokatlidis, “India’s Pattern of Development: What Happened, What Follows,” Journal of Monetary Economics, 53 (2006), 981–1019.
Markusen, James R., “Trade in Producer Services and in Other Specialized Intermediate Inputs,” American Economic Review, 79 (1989), 85–95.
Melitz, Marc J., “The Impact of Trade on Intra-industry Reallocations and Aggregate Industry Productivity,” Econometrica, 71 (2003), 1695–1725.
Mohan, Rakesh, “The Growth Record of the Indian Economy, 1950–2008: A Story of Sustained Savings and Investment,” Reserve Bank of India Speech, 2008.


Muendler, Marc-Andreas, “Trade, Technology, and Productivity: A Study of Brazilian Manufacturers, 1986–1998,” University of California at San Diego, Mimeo, 2004.
Navarro, Lucas, “Plant Level Evidence on Product Mix Changes in Chilean Manufacturing,” University of London, Mimeo, 2008.
Nocke, Volker, and Stephen R. Yeaple, “Globalization and Endogenous Firm Scope,” NBER Working Paper No. w12322, 2006.
Nouroz, H., Protection in Indian Manufacturing: An Empirical Study (Delhi: MacMillan India Ltd., 2001).
Olley, Steven G., and Ariel Pakes, “The Dynamics of Productivity in the Telecommunications Equipment Industry,” Econometrica, 64 (1996), 1263–1298.
Pavcnik, Nina, “Trade Liberalization, Exit, and Productivity Improvements: Evidence from Chilean Plants,” Review of Economic Studies, 69 (2002), 245–276.
Rivera-Batiz, Luis, and Paul M. Romer, “Economic Integration and Endogenous Growth,” Quarterly Journal of Economics, 106 (1991), 531–555.
Romer, Paul M., “Growth Based on Increasing Returns Due to Specialization,” American Economic Review, 77 (1987), 56–62.
——, “Endogenous Technological Change,” Journal of Political Economy, 98 (1990), 71–102.
——, “New Goods, Old Theory, and the Welfare Costs of Trade Restrictions,” Journal of Development Economics, 43 (1994), 5–38.
Sivadasan, Jagadeesh, “Barriers to Competition and Productivity: Evidence from India,” B.E. Journal of Economic Analysis & Policy, 9 (2009), 42.
Stock, James H., Jonathan H. Wright, and Motohiro Yogo, “A Survey of Weak Instruments and Weak Identification in Generalized Method of Moments,” Journal of Business & Economic Statistics, 20 (2002), 518–529.
Topalova, Petia, and A. K. Khandelwal, “Trade Liberalization and Firm Productivity: The Case of India,” Review of Economics and Statistics, forthcoming, 2011.

STOCK-BASED COMPENSATION AND CEO (DIS)INCENTIVES∗

EFRAIM BENMELECH
EUGENE KANDEL
PIETRO VERONESI

The use of stock-based compensation as a solution to agency problems between shareholders and managers has increased dramatically since the early 1990s. We show that in a dynamic rational expectations model with asymmetric information, stock-based compensation not only induces managers to exert costly effort, but also induces them to conceal bad news about future growth options and to choose suboptimal investment policies to support the pretense. This leads to a severe overvaluation and a subsequent crash in the stock price. Our model produces many predictions that are consistent with the empirical evidence and are relevant to understanding the current crisis.

I. INTRODUCTION

Although a large theoretical literature views stock-based compensation as a solution to an agency problem between shareholders and managers, a growing body of empirical evidence shows that it may also lead to earnings management, misreporting, and outright fraudulent behavior. Does stock-based compensation amplify the tension between the incentives of managers and shareholders instead of aligning them? The ongoing global financial crisis has brought forth renewed concerns about the adverse incentives that stock-based compensation may encourage. Many managers of the recently troubled financial institutions were among the highest-paid executives in the United States, with huge equity-based personal profits realized when their firms’ stock prices were high. Although the subsequent sharp decline of their firms’ stock prices may have been due to exogenous systemic shocks to the economy, it is an important

∗ We would like to thank Mary Barth, Joseph Beilin, Dan Bernhardt, Jennifer Carpenter, Alex Edmans, Xavier Gabaix, Dirk Hackbarth, Zhiguo He, Elhanan Helpman (the editor), Ilan Guttman, Alex Joffe, Ohad Kadan, Simi Kedia, Ilan Kremer, Holger Muller, Thomas Philippon, Andrei Shleifer, Lucian Taylor, and three anonymous referees, as well as seminar participants at the 2009 NBER Summer Institute Asset Pricing meetings, the 2008 Western Finance Association meetings in Waikoloa, the 2006 ESSFM Conference in Gerzensee, the 2006 Finance and Accounting Conference in Atlanta, Hebrew University, IDC, Michigan State, NYU Stern, Oxford, Stanford, Tel Aviv, the University of Chicago, the Chicago Fed, the University of Illinois at Urbana–Champaign, and Washington University for helpful comments and suggestions. Kandel thanks the Krueger Center for Finance Research at the Hebrew University for financial support.
© 2010 by the President and Fellows of Harvard College and the Massachusetts Institute of Technology. The Quarterly Journal of Economics, November 2010



open question whether the extent of their stock-based compensation may have induced CEOs to willingly drive prices up in full awareness of the impending crash. Indeed, similar concerns about these possible perverse effects of stock-based compensations on CEOs’ behavior were raised after the burst of the dot-com bubble. As governments across the globe are preparing a new wave of sweeping regulation, it is important to study the incentives induced by stock-based compensation, as well as the trade-offs involved in any decision that may affect the stock components in executives’ compensation packages.1 In this paper, we show formally that although stock-based compensation induces managers to exert costly effort to increase their firms’ investment opportunities, it also supplies incentives for suboptimal investment policies designed to hide bad news about the firm’s long-term growth. We analyze a dynamic rational expectations equilibrium model and identify conditions under which stock-based executive compensation leads to misreporting, suboptimal investment, run-up, and a subsequent sharp decline in equity prices. More specifically, we study a hidden-action model of a firm that is run by a CEO whose compensation is stock-based. The firm initially experiences high growth in investment opportunities and the CEO must invest intensively to exploit the growth options. The key feature of our model is that at a random point in time the growth of the firm’s investment opportunities slows down. The CEO is able to postpone the expected time of this decline by exercising costly effort. But when the investment opportunities growth does inevitably slow down, the investment policy of the firm should change appropriately. We assume that whereas the CEO privately observes the slowdown in the growth rate, shareholders are oblivious to it. Moreover, they do not observe investments, but base their valuation only on dividend payouts. 
When investment opportunities decline, the CEO has two options: revealing the decline in investment opportunities to shareholders, or behaving as if nothing had happened. Revealing the decline to shareholders leads to an immediate decline in the stock price. If the CEO chooses not to report the change in the business environment of the firm, the stock price does not fall, as the outside

1. For instance, in January 2009 the U.S. government imposed further restrictions on the non-performance-related component of compensation packages. In light of our results, it seems that the administration is moving in the wrong direction.


investors have no way of deducing this event, and equity becomes overvalued. To maintain the pretense over time, the CEO must design a suboptimal investment strategy: we assume that as long as the reported dividends over time are consistent with the high growth rate, the CEO keeps his or her job. Any deviation in dividends that follows a suboptimal investment policy leads to the CEO’s dismissal. We show that when a CEO’s compensation is based on stock, and the range of possible growth rates is large, there exists a pooling Nash equilibrium for most parameter values. In this equilibrium, the CEO of a firm that experienced a decline in the growth rate of investment opportunities follows a suboptimal investment policy designed to maintain the pretense that investment opportunities are still strong. We solve for the dynamic pooling equilibrium in closed form and fully characterize the CEO’s investment strategy. In particular, because the CEO is interested in keeping a high growth profile for as long as possible, he or she initially invests in negative-NPV projects as stores of cash, and later on foregoes positive-NPV projects in order to meet rapidly growing demand for dividends. In both cases, the CEO destroys value. Because this strategy cannot be kept forever, at some point the firm experiences a cash shortfall, the true state is revealed, and the stock price sharply declines as the firm needs to recapitalize. Our model highlights the tension that stock-based compensation creates. Although the common wisdom of hidden-action models is to align the manager’s objectives with those of investors by tying his or her compensation to the stock price, we show that stock-based compensation may lead the manager to invest suboptimally and destroy value to conceal bad news about future growth.
The trade-off is made apparent by the fact that for most reasonable parameter values, and especially for medium- to high-growth companies, stock-based compensation indeed induces an equilibrium with high effort but also leads to a suboptimal investment strategy. That is, the cost of inducing high managerial effort ex ante comes from the suboptimal investment policy after the slowdown in investment opportunities, which eventually leads to undercapitalization and a stock price crash. Although our analysis focuses on a linear stock-based compensation contract, we also consider alternative compensation schemes widely used in the industry. We analyze (i) flat wage contracts, (ii) deferred compensation, (iii) option-based compensation, (iv) bonuses, and (v) clawback clauses. We discuss the pros


and cons of each of these contracts and show that these commonly used compensation schemes are not necessarily efficient, either ex ante in inducing managerial effort or ex post in forcing the manager to reveal the true state of the firm. We then propose and analyze an optimal managerial compensation contract. We show that this double incentive problem (i.e., inducing high effort and revelation) can often be overcome by a firm-specific compensation scheme characterized by a combination of stock-based compensation and a bonus awarded to the CEO upon revelation of the bad news about long-term growth. Indeed, although stock-based compensation is necessary to induce the manager to exert costly effort and increase investment opportunities, it also implicitly punishes the CEO for truth telling, as the stock price will sharply decline. The contingent bonus, possibly in the form of a golden parachute or a generous severance package, is then necessary to compensate for the loss in the CEO’s stock holdings. However, we also show that a bonus contract alone would not work ex ante, because although it induces truth telling, it also provides an incentive not to exert effort, as this behavior anticipates the time of the bonus payment. The stock-based component ensures high effort. An important implication of the model is that different types of firms need to include different levels of stock in the compensation package. Specifically, we find that the compensation packages of CEOs of growth firms, that is, firms with high investment opportunities growth and high return on capital, should have little stock-price sensitivity. Indeed, a calibration of the model shows that for most firms the stock-based compensation component should never be above 40% of total CEO compensation in order to induce truth revelation and optimal investments.
Similarly, for most firms with medium-high return on investment, the stock-based compensation component should be strictly positive, to induce high effort. These results suggest that policymakers and firms’ boards of directors should be careful both with an outright ban of stock-based compensation and with too much reliance on it. Our model’s predictions are consistent with the empirical evidence documenting that stock-based executive compensation is associated with earnings management, misreporting and restatements of financial reports, and outright fraudulent accounting (e.g., Healy [1985], Beneish [1999], Ke [2005], Bergstresser and Philippon [2006], Burns and Kedia [2006], Johnson, Ryan, and


Tian [2009], and Kedia and Philippon [2010]). In fact, our model’s predictions go beyond the issue of earnings manipulation and restatements, as we focus on the entire investment behavior of the firm over the long haul. Similarly, our model’s predictions are consistent with the survey results of Graham, Harvey, and Rajgopal (2005), according to which most managers state that they would forego a positive-NPV project if it caused them to miss an earnings target, with high-tech firms much more likely to do so. High-tech firms are also much more likely to cut R&D and other discretionary spending to meet a target. On the same note, our model’s predictions are also consistent with Skinner and Sloan (2002), who show that the decline in firm value following a failure to meet analysts’ forecasts is more pronounced in high-growth firms. Our paper is related to the literature on managerial “short-termism” and myopic corporate behavior (e.g., Stein [1989], Bebchuk and Stole [1993], Jensen [2005], and Aghion and Stein [2008]). In terms of assumptions, our paper bears some similarities to Miller and Rock (1985), who study the effects of dividend announcements on the value of firms. Similarly to Inderst and Mueller (2006) and Eisfeldt and Rampini (2008), we also assume that the CEO has a significant informational advantage over investors, but differently from them, we focus on investors’ beliefs about future growth rates and their effect on managers’ incentives. Our paper is also related to Bolton, Scheinkman, and Xiong (2006), Goldman and Slezak (2006), and Kumar and Langberg (2009), but it differs from them in that we emphasize the importance of firms’ long-term growth options, which have a strong “multiplier” impact on the stock price and thus on CEO incentives to hide any worsening of investment opportunity growth.
Finally, our paper is also related to the recent literature on dynamic contracting under asymmetric information (e.g., Quadrini [2004], Clementi and Hopenhayn [2006], and DeMarzo and Fishman [2007]). These papers focus on the properties of the optimal contract that induces full revelation, such that there is no information asymmetry in equilibrium. Although we also find the contract that induces full revelation and the first best, the main focus of our paper is the properties of the dynamic pooling equilibrium in which the manager does not reveal the true state, which we believe to be widespread. This analysis is complicated by the feedback effect that the equilibrium price dynamics exerts on


the CEO’s compensation and thus on the CEO’s optimal intertemporal investment strategy, which in turn affects the equilibrium price dynamics itself through shareholders’ beliefs. The solution of this fixed point problem is absent in other dynamic contracting models, but is at the heart of our paper.

We organize the paper as follows. Section II presents the model setup. Section III presents the disincentives that stock-based compensation creates when there is information asymmetry. Section IV considers alternative compensation schemes. Section V provides the quantitative implications of our model. We discuss the broader implications of our results in Section VI.

II. THE MODEL

We consider a firm run by a manager who (a) chooses an unobservable effort level that affects the growth opportunities of the firm; (b) privately observes the realization of the growth opportunities and decides whether to report them to the public; and (c) chooses the investment strategy for the firm that is consistent with his or her public announcement. Our analysis focuses on the manager’s trade-off between incentives to exert costly effort to maintain high growth of investment opportunities, and incentives to reveal to shareholders when investment opportunities growth slows down. We start by defining the firm’s investment opportunities, which are described by the following production technology: given the stock of capital Kt, the firm’s operating profit (output) Yt is

    Yt = zKt if Kt ≤ Jt,
    Yt = zJt if Kt > Jt,                                        (1)

where z is the rate of return on capital and Jt defines an upper bound on the amount of deployable productive capital that depends on the technology, operating costs, demand, and so on. The Leontief technology specification (1) implies constant returns to scale up to the upper bound Jt, and then zero return for Kt > Jt.
This simple specification of a decreasing-returns-to-scale technology allows us to conveniently model the evolution of the growth rate in profitable investment opportunities, which serves as the driving force of our model. The existing stock of capital depreciates at a rate of δ.
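A minimal sketch of the Leontief technology in equation (1), together with the capital depreciation just described (function names and parameter values are our own illustrative choices, not the paper's calibration):

```python
def operating_profit(K, J, z=0.12):
    """Eq. (1): constant returns to scale up to the capacity bound J,
    zero marginal return on capital above it. z is illustrative."""
    return z * min(K, J)

def depreciate(K, investment, delta=0.05):
    """Capital depreciates at rate delta and is replenished by investment."""
    return (1.0 - delta) * K + investment

# Below the bound, output is proportional to capital...
print(operating_profit(K=1.0, J=2.0))   # 0.12
# ...above the bound, extra capital earns nothing.
print(operating_profit(K=3.0, J=2.0))   # 0.24
```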


[Figure I: Growth in Investment Opportunities. The figure plots the output (earnings) profile Yt as a function of capital Kt for three time periods, t = 0, t = 2, and t = 4.]

We assume that the upper bound Jt in (1) grows according to

    dJt/dt = g̃ Jt,                                        (2)

where g̃ is a stochastic variable described below. The combination of (1) and (2) yields growing investment opportunities for the firm. Because the technology displays constant returns to scale up to Jt, it is optimal to keep the capital at the level Jt if these investments are profitable, which we assume throughout. Figure I illustrates investment opportunities growth.

We set time t = 0 to be the point when shareholders know the firm’s capital K0, as well as the current growth rate of investment opportunities, g̃ = G. One can think of t = 0 as the time of the firm’s initial public offering (IPO) or seasoned equity offering (SEO) or of a major corporate event, such as a reorganization, that has elicited much information about the state of the firm. This is mostly a technical simplifying assumption, as we believe that all the insights would remain if the market had a system of beliefs over the initial capital and the growth rate.

Firms tend to experience infrequent changes in their growth rates. We are interested in declines of the growth rate, as these are the times when the manager faces a hard decision as to whether to reveal the bad news to the public. Any firm may experience such a decline; thus our analysis applies to a wide variety of scenarios. We


model the stochastic decline in investment opportunities growth as a discrete shift from the high-growth regime, g̃ = G, to a low-growth regime, g̃ = g (< G), that occurs at a random time τ*, where τ* is exponentially distributed with parameter λ. Formally,

(3)    g̃ = G for t < τ*;  g̃ = g for t ≥ τ*,   where f(τ*) = λ e^{−λτ*}.

We assume that the manager's actions affect the time at which the decline occurs. After all, CEOs must actively search for investment opportunities and monitor markets and internal developments, all of which require time and effort. In our model, higher effort translates into a smaller probability of shifting to lower growth. More specifically, the manager can choose to exert high or low effort, e^H > e^L. Choosing higher effort increases the expected time τ* at which the investment opportunities growth declines. Formally,

λ^H ≡ λ(e^H) < λ(e^L) ≡ λ^L  ⟺  E[τ*|e^H] > E[τ*|e^L].

The cost of high effort is positive, whereas the cost of low effort is normalized to zero:

c(e) ∈ {c_H, c_L},   s.t. c_H > c_L = 0.
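The switching process in (3) and the effort-dependent intensities are straightforward to simulate. The following sketch uses illustrative parameter values (assumed here, not the paper's Table I calibration) to draw τ* and confirm that higher effort lengthens the expected time to the growth slowdown:

```python
import random

# Hypothetical parameter values for illustration (not from the paper's Table I).
G, g = 0.10, 0.02          # high and low growth of investment opportunities
lam_H, lam_L = 0.05, 0.20  # switch intensities under high and low effort

random.seed(0)

def draw_switch_time(lam):
    """Draw tau* ~ Exponential(lam), per f(tau*) = lam * exp(-lam * tau*)."""
    return random.expovariate(lam)

def growth_rate(t, tau_star):
    """g~ = G before tau*, g afterward (equation (3))."""
    return G if t < tau_star else g

# High effort (lower intensity) lengthens the expected time before the slowdown.
n = 100_000
mean_H = sum(draw_switch_time(lam_H) for _ in range(n)) / n
mean_L = sum(draw_switch_time(lam_L) for _ in range(n)) / n
assert abs(mean_H - 1 / lam_H) < 0.5 and abs(mean_L - 1 / lam_L) < 0.1
assert mean_H > mean_L   # E[tau*|e_H] > E[tau*|e_L]
```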

To keep the analysis simple, we assume linear preferences of the manager,

(4)    U_t = E_t[ ∫_t^T e^{−β(u−t)} w_u [1 − c(e)] du ],

where wu is the periodic wage of the CEO, and β is his or her discount rate.2 We specify a cost of effort in a multiplicative fashion, which allows us to preserve scale invariance. Economically, this assumption implies complementarity between the wage and “leisure” [1 − c(e)], a relatively standard assumption in macroeconomics. That is, effort is costly exactly because it does not allow the CEO to enjoy his or her pay wt as much as possible.3 In (4), T is the time at which the manager leaves the firm, possibly T = ∞.4 However, the departing date T may occur 2. Our results also hold with risk-averse managers, although analytical tractability is lost. 3. See Edmans, Gabaix, and Landier (2009). 4. T = ∞ also corresponds to the case in which there is a constant probability that the manager leaves the firm or dies, whose intensity is then included in the discount rate β, as in Blanchard (1985).

STOCK-BASED COMPENSATION AND CEO (DIS)INCENTIVES

1777

earlier, as the manager may be fired if the shareholders learn that he or she has followed a suboptimal investment strategy. We must make several technical assumptions to keep the model from diverging, degenerating, or becoming intractable. First, we assume for tractability that the manager's decisions are firm-specific, and thus do not affect the systematic risk of the stock and its cost of capital, which we denote r. Then we must assume that

(5)    z > r + δ;

that is, the return on capital z is sufficiently high to compensate for the cost of capital r and depreciation δ. This assumption implies that it is economically optimal for investors to provide capital to the company and invest up to its fullest potential, as determined by the Leontief technology described in (1). To ensure a finite value of the firm's stock price, we must assume that r > G − λ^H and r > g. We also set β > G to ensure that the total utility of the manager is finite. We also assume that β ≥ r; that is, the manager has a higher discount rate than fully diversified investors.^5 Although we assume that the market does not observe the investments and the capital stock, there is a limit to what the firm can conceal. We model this by assuming that to remain productive, the firm must maintain a minimum level of capital, K_t ≥ K̲_t, where K̲_t is exogenously specified and for simplicity depends on the optimal size of the firm:

(6)    K_t ≥ K̲_t = ξ J_t   for 0 ≤ ξ < 1,

where J_t is defined in (2). This is a purely technical assumption, and ξ is a free parameter. Finally, we assume for simplicity that the firm does not retain earnings; thus the dividend rate equals its operating profit, Y_t, derived from its stock of capital, K_t, less the investment it chooses to make, I_t. Given the technology in (1), the dividend rate is

(7)    D_t = z min(K_t, J_t) − I_t.

5. It is intuitive that the discount rate of an individual, β, is higher than the discount rate of shareholders: for instance, a manager may be less diversified than the market, or β may reflect some probability of leaving the firm early or death, or simply a shorter horizon than the market itself.


II.A. Investments and Stock Prices in the First Best

To build intuition, it is useful first to derive the optimal investment and the stock price dynamics under the first best. To maximize the firm value, the manager must invest to its fullest potential, that is, keep K_t = J_t for all t. We solve for the investment rate I_t that ensures that this constraint is satisfied, and we obtain the following:

PROPOSITION 1. The first-best optimal investment policy given λ is

(8)    I_t = (G + δ) e^{Gt}  for t < τ*;   I_t = (g + δ) e^{Gτ* + g(t−τ*)}  for t ≥ τ*.

The dividend stream of a firm that fully invests is given by

(9)    D_t = z K_t − I_t = D_t^G = (z − G − δ) e^{Gt}  for t < τ*;   D_t = D_t^g = (z − g − δ) e^{Gτ* + g(t−τ*)}  for t ≥ τ*.
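A quick numerical check of the dividend path in (9), with assumed illustrative parameters and the normalization K_0 = J_0 = 1: at the switch τ*, the slowdown frees cash from investment, so the payout jumps up by (G − g)e^{Gτ*}.

```python
import math

# Illustrative parameters (assumed, not the paper's Table I): z > r + delta, G > g.
z, delta, G, g = 0.18, 0.05, 0.10, 0.02
tau = 10.0  # an assumed switch time

def dividend_first_best(t):
    """First-best dividend rate, equation (9), with K_0 = J_0 = 1."""
    if t < tau:
        return (z - G - delta) * math.exp(G * t)
    return (z - g - delta) * math.exp(G * tau + g * (t - tau))

# The slowdown forces a cut in investment, so the payout jumps up at tau*.
jump = dividend_first_best(tau) - (z - G - delta) * math.exp(G * tau)
assert abs(jump - (G - g) * math.exp(G * tau)) < 1e-9
assert jump > 0
```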

The top panel of Figure II plots the dynamics of the optimal dividend path for a firm with high growth in investment opportunities until τ*, and low growth afterward. As the figure shows, the slowdown in investment opportunities requires a decline in the investment rate, which initially increases the dividend payout rate: D_τ*^g − D_τ*^G = (G − g) e^{Gτ*}. Given the above assumptions, the dividend rate is always positive. Moreover, from (9), the dividend growth rate equals the growth rate of investment opportunities, g̃. Given the dividend profile, the price of the stock follows:

PROPOSITION 2. Given λ, under symmetric information the value of the firm is

(10)    P_fi,t^after = ∫_t^∞ e^{−r(s−t)} D_s^g ds = ((z − g − δ)/(r − g)) e^{Gτ* + g(t−τ*)}   for t ≥ τ*,

(11)    P_fi,t^before = E_t[ ∫_t^{τ*} e^{−r(s−t)} D_s^G ds + e^{−r(τ*−t)} P_fi,τ*^after ] = e^{Gt} A_λ^fi   for t < τ*,


[Figure II about here. Top panel: dividend path under perfect information (Time vs. Dividend), with high-growth and low-growth segments around τ*. Bottom panel: the corresponding price path (Time vs. Price).]

FIGURE II
A Dividend Path (Top Panel) and a Price Path (Bottom Panel) under Perfect Information
Parameter values are in Table I.

where

(12)    A_λ^fi = (z − G − δ)/(r + λ − G) + λ (z − g − δ)/((r − g)(r + λ − G)).

In the pricing functions (10) and (11) the subscript "fi" stands for "full information" and the superscripts "after" and "before" are for t ≥ τ* and t < τ*, respectively. Under full information, the share price drops at time τ* by

P_fi,τ*^after − P_fi,τ*^before = −e^{Gτ*} (z − r − δ)(G − g)/((r − g)(r + λ − G)),


which increases in magnitude with τ*. The bottom panel of Figure II plots the price path in the benchmark case corresponding to the dividend path in the top panel. We finally show that, all else equal, shareholders prefer the manager to exert high effort, as the choice of e^H maximizes firm value.

COROLLARY 1. The firm value under e^H is always higher than under e^L; that is,

(13)    P_fi,t^before(λ^H) > P_fi,t^before(λ^L).
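The pricing objects above are easy to verify numerically. The sketch below (assumed parameters satisfying r > G − λ and z > r + δ; not the paper's calibration) checks the closed-form price drop at τ* and Corollary 1:

```python
import math

# Illustrative parameters (assumed): r > G - lam_H, r > g, z > r + delta.
z, delta, r, G, g = 0.18, 0.05, 0.08, 0.10, 0.02
lam_H, lam_L = 0.05, 0.20
tau = 10.0   # an assumed switch time

def A_fi(lam):   # equation (12)
    return (z - G - delta)/(r + lam - G) + lam*(z - g - delta)/((r - g)*(r + lam - G))

def price_before(t, lam):   # equation (11)
    return math.exp(G * t) * A_fi(lam)

def price_after(t):         # equation (10) evaluated at t = tau*
    return (z - g - delta) / (r - g) * math.exp(G * t)

# the drop at tau* matches its closed form
drop = price_after(tau) - price_before(tau, lam_H)
closed = -math.exp(G*tau) * (z - r - delta)*(G - g) / ((r - g)*(r + lam_H - G))
assert abs(drop - closed) < 1e-9 and drop < 0

# Corollary 1: high effort (lower switch intensity) raises the pre-switch price
assert A_fi(lam_H) > A_fi(lam_L)
```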

By simple substitution in Proposition 2, it is easy to see that (13) holds iff

(14)    z − r − δ > 0,

which is always satisfied (see condition (5)). It is intuitive that without a proper incentive scheme, the CEO will not exert high effort in this environment, even if τ* is observable, because of the cost of effort. In our setting, shareholders cannot solve this incentive problem by simply "selling the firm to the manager":

COROLLARY 2. The manager's personal valuation of the firm before τ* is as in (11), but with β substituted for r. Thus, a manager/owner exerts high effort, e^H, iff

(15)    (λ^L + β − G)/(λ^H + β − G) > (1 + λ^L H^Div)/(1 − c_H + λ^H H^Div),

where

(16)    H^Div = (z − g − δ)/((z − G − δ)(β − g)).

Condition (15) is intuitive. First, when effort does not produce much increase in the expected τ ∗ , that is, when λ H ≈ λ L, then the condition is never satisfied with c H > 0, and therefore the manager does not exert high effort. Second, and less intuitively, even when effort is costless (c H = 0) the manager may still not choose high effort, even if he owns the firm. In this case, condition


(15) is satisfied iff^6

(17)    z − β − δ > 0,

which is similar to (14), but with the manager's discount rate β taking the place of the shareholders' discount rate r. That is, if the manager is impatient and the return on capital is low, so that β > z − δ, then the manager/owner will not exert high effort. Because of this, the manager's personal valuation of the firm is much lower than the shareholders' valuation, a difference that goes beyond the simple difference due to discounting. In line with previous work, stock-based compensation provides the incentive for the manager to exert costly effort. For simplicity, we focus first on the simplest compensation program, in which the manager receives shares at the constant rate η per period. Because of linear preferences, it is always optimal for the manager to sell the shares immediately;^7 therefore his or her effective compensation at time t is

(18)    w_t = η P_t.

PROPOSITION 3. In equilibrium with consistent beliefs, where λ is the current intensity of τ* and the stock price is P_fi,t^before = e^{Gt} A_λ^fi, where A_λ^fi is in (12), the manager exerts high effort, e^H, under stock-based compensation (18) iff

(19)    (λ^L + β − G)/(λ^H + β − G) > (A_λ^fi + λ^L H^Stock)/(A_λ^fi (1 − c_H) + λ^H H^Stock),

where

H^Stock = (z − g − δ)/((r − g)(β − g)).

It follows that a high-effort Nash equilibrium occurs iff (19) is satisfied for A_{λ^H}^fi. A low-effort Nash equilibrium occurs iff (19) is not satisfied for A_{λ^L}^fi.

6. This condition is obtained by substituting c_H = 0 and the value of H^Div into (15) and rearranging terms.
7. This is true in the full information equilibrium, as there is no signaling role of consumption. It is also true in a pooling asymmetric equilibrium, discussed next, as any deviation from this rule would reveal the manager's information and destroy the equilibrium.


We note that if the cost of effort is zero, c H = 0, then condition (19) is always satisfied, as the price multiplier of the stock, largely due to future investment opportunities, is capitalized in the current salary, providing the proper incentive for the manager to exert high effort.
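Conditions (15) and (19) are simple to evaluate numerically. The sketch below uses assumed parameter values, chosen only to satisfy the model's restrictions (they are not the paper's calibration); at these values both the manager/owner condition and the stock-based condition hold, and the stock-based threshold is the looser of the two:

```python
# Numerical check (illustrative, assumed parameters) of the effort conditions:
# (15) for a manager/owner and (19) under stock-based compensation.
z, delta, r, G, g = 0.18, 0.05, 0.08, 0.10, 0.02
beta, c_H = 0.12, 0.10
lam_H, lam_L = 0.05, 0.20

def A_fi(lam):  # equation (12)
    return (z - G - delta)/(r + lam - G) + lam*(z - g - delta)/((r - g)*(r + lam - G))

lhs = (lam_L + beta - G) / (lam_H + beta - G)

H_div = (z - g - delta) / ((z - G - delta) * (beta - g))       # equation (16)
rhs_div = (1 + lam_L * H_div) / (1 - c_H + lam_H * H_div)
owner_exerts = lhs > rhs_div                                    # condition (15)

H_stock = (z - g - delta) / ((r - g) * (beta - g))              # after (19)
A = A_fi(lam_H)
rhs_stock = (A + lam_L * H_stock) / (A * (1 - c_H) + lam_H * H_stock)
stock_induces = lhs > rhs_stock                                 # condition (19)

assert owner_exerts and stock_induces
assert rhs_stock < rhs_div   # at these values, stock pay relaxes the threshold
```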

III. THE (DIS)INCENTIVES OF STOCK-BASED COMPENSATION

The preceding section shows that when τ* is observable, a simple stock-based compensation will resolve the shareholders' incentive problem. Clearly the manager has much more information than the investors regarding the future growth opportunities of the firm, as well as its actual investments. Is this compensation scheme still optimal when τ* is private information of the manager? To build intuition, consider again the random time τ* in the bottom panel of Figure II, and assume now that τ* is private information of the manager. In this case, if the manager disclosed the information, his or her wage would drop from w_t = η P_t^before to w_t = η P_t^after, depicted in the figure by the relatively sharp drop in price. That is, a pure form of stock-based compensation effectively implies that shareholders severely punish the manager for revealing bad news about the growth prospects of the firm. It is important to note that the bad news is only about the growth prospects (which we refer to as growth options, in line with asset-pricing terminology) and not about the return on assets in place, which we assume constant and equal to z. Given an opportunity, the manager will try to conceal this information. However, this conceal strategy is harder to implement than it seems at first, even if shareholders have no information about the firm's investment and capital dynamics. In reality, shareholders have only imprecise signals about the amount of economic capital and investment undertaken by the corporation. This is especially true for industries characterized by high R&D expenditures, intellectual property, or a high degree of opacity in their operations (e.g., financial institutions), or for rapidly growing new industries, as the market does not know how to distinguish between investments and costs.
For tractability reasons, we assume that signals about the level of capital and investments have in fact infinite noise, and thus shareholders form beliefs about the manager’s actions only


by observing realized dividend payouts. Although this assumption is extreme, it mirrors our assumption that the return on capital z is nonstochastic. A more realistic model would have both a stochastic z and informative signals about capital and investments. Such a model is much more challenging to analyze, not only because the dynamics of shareholders' beliefs, which affect prices, are more complicated, but also because the CEO's optimal investment strategy would be extremely complex, as he or she would have to balance the amount of information to reveal against the need to conceal the bad news for as long as possible. We note that although some information about investments and capital would make it easier for shareholders to monitor the manager, a random return on capital z would also make it easier for the manager to conceal bad news longer, as he or she could blame low dividend payouts on temporary negative shocks to z rather than on a permanent decline in investment opportunities. It is thus not obvious that our assumption of nonstochastic z but unobservable K and I makes it easier for the manager to conceal τ* than a more realistic setting would. Shareholders know that at t = 0 the firm has a given capital K_0 and a high growth rate G of investment opportunities. As long as the firm is of type G, they expect a dividend D_t^G, as described in (9).^8 We assume that whenever the dividend deviates from the path of a G firm, shareholders perform an internal investigation in which the whole history of investments is made public. This assumption is realistic, as only major changes in the firm's dividend policy can act as a coordination device across dispersed shareholders, who may then call for a possibly costly internal investigation of the firm. If the drop in dividend payouts does not correspond to the time of the slowdown in investment opportunities (τ*), the CEO is dismissed from the firm.

III.A. Investment under Conceal Strategy

If the CEO decides to conceal the truth, he or she must design an investment strategy that enables the firm to continue paying the high-growth dividend stream D_t^G in (9). Intuitively, such a strategy cannot be sustained forever, as it requires more cash than the firm produces. We denote by T** the time at which the

8. The assumption that dividends can be used to reduce agency costs and monitor managers has been suggested by Easterbrook (1984).


firm experiences a cash shortfall and must disclose the truth to investors. Because the firm's stock price will decline at that time, and the manager will lose his or her job, it is intuitive that the best strategy for the CEO is to design an investment strategy that maximizes T**, as established in the following lemma:

LEMMA 1. Conditional on the decision to conceal the true state at τ*, the manager's optimal investment policy is to maximize the time until the cash shortfall T**.

The next proposition characterizes the investment strategy that maximizes T**:

PROPOSITION 4. Let K_{τ*−} denote the capital accumulated in the firm by time τ*. If the CEO chooses to conceal the decline in growth opportunities at τ*, then
1. He or she employs all the existing capital stock: K_τ* = K_{τ*−}.
2. His or her investment strategy for t > τ* is I_t = z min(K_t, J_t) − (z − G − δ) e^{Gt}.
3. The firm's capital dynamics are characterized as follows: Let h* and h** be the two constants defined in (49) and (50) in the Appendix, with T** = τ* + h**. Then
   a. For t ∈ (τ*, τ* + h*), the firm's capital K_t exceeds its optimal level J_t.
   b. For t ∈ (τ* + h*, T**], the firm's capital K_t is below its optimal level J_t.

Point 1 of Proposition 4 shows that in order to maximize the time of the cash shortfall T**, the manager must commit all of the firm's existing capital to the suboptimal investment strategy. This suboptimal investment strategy, in point 2 of the proposition, ensures that dividends equal the higher growth profile D_t^G = (z − G − δ) e^{Gt} (see (9)) for as long as possible. The extent of the suboptimality of this investment strategy is laid out in point 3 of Proposition 4. In particular, the CEO initially amasses an amount of capital that is above its optimal level J_t (for t < τ* + h*), whereas eventually the capital stock must fall short of J_t (for t ∈ (τ* + h*, T**)). These dynamics are illustrated in Figure III for a parametric example.
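The capital dynamics of Proposition 4 can be traced with a simple Euler scheme. In the sketch below the parameters are assumed (not the paper's Table I), τ* is normalized to 0, and K_{τ*} = J_{τ*} = 1; the loop rolls the conceal-strategy capital forward until it hits the minimum level ξJ_t, which identifies T**, and along the way the capital overshoots J_t, as in point 3a:

```python
import math

# Sketch (assumed parameters) of the conceal-strategy capital path: after tau*
# (normalized to 0) the firm keeps paying the high-growth dividend D_t^G, so
# dK/dt = z*min(K, J) - D_t^G - delta*K, until K falls to xi*J at T**.
z, delta, G, g, xi = 0.18, 0.05, 0.10, 0.02, 0.5

dt = 1e-3
K, t = 1.0, 0.0           # K_{tau*} = J_{tau*} = 1
peaked = False
while K > xi * math.exp(g * t) and t < 200.0:
    J = math.exp(g * t)                        # opportunities now grow at g
    D_G = (z - G - delta) * math.exp(G * t)    # dividend the firm pretends to owe
    K += (z * min(K, J) - D_G - delta * K) * dt
    t += dt
    if K > J:
        peaked = True      # capital overshoots its optimal level J_t (point 3a)...

T_star_star = t
assert peaked and 1.0 < T_star_star < 200.0   # ...before being run down to xi*J_t
```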
The top panel shows that under the conceal strategy the capital stock initially exceeds the upper bound on employable capital, K_t > J_t. This implies that the pretending firm initially invests in negative-NPV projects, as shown in the bottom panel. Indeed, although


[Figure III about here. Top panel: capital path after τ* (Time vs. Capital) under the reveal strategy, the conceal strategy, and the minimum required capital. Bottom panel: the corresponding investment paths (Time vs. Investment). Both panels mark the crisis time T**.]

FIGURE III
The Dynamics of Capital and Investments under Reveal and Conceal Equilibrium after τ* (Normalized to 0 in This Figure)
This figure shows the capital dynamics (top panel) and investment dynamics (bottom panel) for a g firm pretending to be a G firm (dashed line), relative to the revealing strategy (solid line). The vertical dotted line denotes the liquidity crisis time T**. Parameter values are in Table I.

the excess capital stock K_t − J_t has a zero return by assumption, it does depreciate at the rate δ. Intuitively, when investment opportunities slow down, the CEO is supposed to return capital to the shareholders (see Figure II). Instead, if the CEO pretends that nothing has happened, he or she invests this extra cash in negative-NPV projects as a store of value to delay T** as much as possible. The bottom panel of Figure III shows that eventually the pretending firm engages in disinvestment to raise cash for the larger dividends of the growing firm. The firm can do this as


long as its capital K_t is above the minimum capital K̲_t. Indeed, the condition K_{T**} = K̲_{T**} determines T**.^9 In a conceal Nash equilibrium, rational investors anticipate the behavior of the managers, and price the stock accordingly. We derive the pricing function next.

III.B. Pricing Functions under Asymmetric Information

At time T**, the pretending firm experiences a cash shortfall and is not able to pay its dividends D_t^G. At this time, there is full information revelation and thus the valuation of the firm becomes straightforward. The only difference from the symmetric information case is that the firm now does not have sufficient capital to operate at its full potential; thus it needs to recapitalize. Because at T** the firm's capital equals the minimum employable capital level, K_{T**} = K̲_{T**}, whereas the optimal capital should be J_{T**}, the firm must raise J_{T**} − K̲_{T**}. From assumption (6), K̲_{T**} = ξ J_{T**}, which yields the pricing function

(20)    P_ai,T**^L = ∫_{T**}^∞ D_t^g e^{−r(t−T**)} dt − J_{T**}(1 − ξ) = e^{(G−g)τ* + gT**} ( (z − r − δ)/(r − g) + ξ ).
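The closed form in (20) can be cross-checked against a direct quadrature of the dividend integral. In the sketch below both τ* and T** are fixed exogenously for illustration (T** is endogenous in the model), and all parameter values are assumed:

```python
import math

# Check the closed form (20) against a numerical quadrature, with assumed
# parameters and assumed times tau* and T**; J_0 = 1 normalization.
z, delta, r, G, g, xi = 0.18, 0.05, 0.08, 0.10, 0.02, 0.5
tau, T2 = 10.0, 15.0   # tau* and T** (both assumed here)

def D_g(t):   # low-growth dividend, equation (9)
    return (z - g - delta) * math.exp(G * tau + g * (t - tau))

J_T2 = math.exp(G * tau + g * (T2 - tau))   # optimal capital level at T**

# integrate D_t^g * e^{-r(t-T**)} from T** to a far horizon (tail is negligible)
dt_q, total, t = 0.01, 0.0, T2
while t < 600.0:
    total += D_g(t) * math.exp(-r * (t - T2)) * dt_q
    t += dt_q

P_L = total - J_T2 * (1 - xi)
closed = math.exp((G - g) * tau + g * T2) * ((z - r - delta) / (r - g) + xi)
assert abs(P_L - closed) / closed < 1e-2
```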

The pricing formula for t < T** is then

(21)    P_ai,t = E_t[ ∫_t^{T**} e^{−r(s−t)} D_s^G ds + e^{−r(T**−t)} P_ai,T**^L ].

The subscript "ai" in (21) stands for "asymmetric information." Expression (21) can be compared with the analogous pricing formula under full information, (11); the only difference is that the switch time τ* is replaced by the (later) T**, and the price P_fi,τ*^after is replaced with the much lower price P_ai,T**^L. We are able to obtain an analytical solution:

PROPOSITION 5. Let shareholders believe that λ is the current intensity of τ*. Under asymmetric information and conceal

9. Therefore the technical assumption of a minimal capital stock in equation (6) affects the time at which the firm can no longer conceal the decline in its stock of capital.


strategy equilibrium, the value of the stock for t ≥ h** is^10

(22)    P_ai,t = e^{Gt} A_λ^ai,

where

(23)    A_λ^ai = (z − G − δ)/(r + λ − G) + λ e^{−(G−g)h**} (z − r − δ + (r − g)ξ)/((r − g)(r + λ − G)).
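A small check that asymmetric information indeed lowers the pre-switch price multiplier: since ξ < 1 and h** > 0, the second term of (23) is smaller than its counterpart in (12). Numerically, with assumed parameters and an assumed concealment horizon h**:

```python
import math

# The wedge A_fi - A_ai measures the discount rational investors apply when
# they expect concealment. Parameters are illustrative (assumed); h** is
# treated as given here, though the model pins it down via (49)-(50).
z, delta, r, G, g, lam, xi = 0.18, 0.05, 0.08, 0.10, 0.02, 0.05, 0.5
h2 = 5.0   # assumed concealment horizon h**

A_fi = (z - G - delta)/(r + lam - G) + lam*(z - g - delta)/((r - g)*(r + lam - G))
A_ai = (z - G - delta)/(r + lam - G) + lam*math.exp(-(G - g)*h2) * \
       (z - r - delta + (r - g)*xi)/((r - g)*(r + lam - G))

assert A_ai < A_fi   # asymmetric information lowers the pre-switch price
```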

Comparing the pricing formulas under asymmetric and symmetric information, (22) and (11), we observe that the first terms in the constants A_λ^fi and A_λ^ai are identical. However, the second term is smaller in the case of asymmetric information: the reason is that under asymmetric information, rational investors take into account two additional effects. First, even if the switch time τ* has not been declared yet, it may already have taken place, and the true investment opportunities may have been growing at the lower rate g for a while (up to h**). The adjustment e^{−(G−g)h**} < 1 takes this possibility into account. Second, at time T**, the firm must recapitalize to resume operations, which is manifested in the smaller numerator of the second term, compared to the equivalent expression in (11). The top panel of Figure IV illustrates the value loss associated with the conceal strategy. Because the manager's compensation does not come out of the firm's funds, the value loss equals the loss of the shareholders relative to what they would have obtained under the reveal strategy (full information). These costs can be measured by the present value (as of τ*) of the difference in the dividends paid out to the shareholders under the two equilibria. Relative to the reveal strategy, the conceal strategy pays lower dividends for a while, as the manager pretends to actively invest, and then must pay higher dividends, which arise from allegedly high cash flow. These higher dividend payouts come at the expense of investment, and thus are essentially borrowed from future dividends. The lower the minimum employable capital K̲_t (i.e., the lower ξ in (6)), the longer the CEO can keep up the pretense, and thus the larger the recapitalization required when the firm experiences a cash shortfall. This also implies lower dividends forever after T**. How does the information asymmetry affect the price level?
The bottom panel of Figure IV plots price dynamics under the conceal equilibrium and compares them with prices under the 10. The case t < h∗∗ does not yield additional intuition, yet it is much more complex. For this reason, we leave it to equation (51) in the Appendix.

[Figure IV about here. Top panel: dividend paths in the conceal and reveal equilibria (Time vs. Dividend). Bottom panel: the corresponding price paths (Time vs. Price). Both panels mark τ*.]

FIGURE IV
Dividend Dynamics and Price Dynamics in Reveal and Conceal Equilibria
The vertical dotted line denotes time τ* of the growth change from G to g. Parameter values are in Table I.

reveal equilibrium. Rational investors initially reduce prices in the conceal equilibrium, as they correctly anticipate the manager's suboptimal behavior after τ*. The stock price, however, at some point exceeds the full information price, as the firm's cash payouts increase (see top panel). The price finally drops at T**, when the firm experiences a severe cash shortfall and needs to recapitalize. The exact sizes of the underpricing and the price drop depend on parameter values, as further discussed in Section V. The conceal equilibrium discussed in this section also provides CEOs with a motive to "meet analysts' earnings expectations,"


a widespread managerial behavior, as recently documented by Graham, Harvey, and Rajgopal (2005). Indeed, the stock behavior at T** is consistent with the large empirical evidence documenting sharp price reductions following failures to meet earnings expectations, even by a small amount (see, e.g., Skinner and Sloan [2002]).

III.C. Equilibrium Strategy at t = τ*

We now consider the manager's incentives at time τ* to conceal or reveal the true growth rate. Because after τ* there is nothing the manager can do to restore high growth G, the choice is driven solely by the comparison between the present value of the infinite compensation stream under the reveal strategy and the finite stream under the conceal strategy. Recall also that after τ* the manager no longer faces any uncertainty (even T** is known), and thus the two utility levels can be computed exactly. The rational-expectations, pure-strategies Nash equilibrium must take into account investors' beliefs about the manager's strategy at time τ*, because they determine the price function. There are three intertemporal utility levels to be computed at τ*, depending on the equilibrium. In a reveal equilibrium, the manager's utility is determined by P_fi,t^after in equation (10) if at τ* the manager decides to reveal. In contrast, if the manager decides to conceal, his or her utility is determined by the price function P_fi,t^before in equation (11). In a conceal equilibrium, if the manager follows the Nash equilibrium strategy (conceal at τ*), then the price function must be the asymmetric information price function P_ai,t in equation (22). If instead the manager reveals at τ* the true state of the firm, the price function reverts back to the full information price P_fi,t^after in equation (10). These three utility levels are

(24)    U_Stock,τ*^reveal = ∫_{τ*}^∞ e^{−β(t−τ*)} η P_fi,t^after dt = (η e^{Gτ*}/(β − g)) (z − g − δ)/(r − g),

(25)    U_Stock,τ*^conceal,ai = ∫_{τ*}^{T**} e^{−β(t−τ*)} η P_ai,t dt = η A_λ^ai e^{Gτ*} (1 − e^{−(β−G)h**})/(β − G),

(26)    U_Stock,τ*^conceal,fi = ∫_{τ*}^{T**} e^{−β(t−τ*)} η P_fi,t^before dt = η A_λ^fi e^{Gτ*} (1 − e^{−(β−G)h**})/(β − G).


The following proposition provides the equilibrium conditions:

PROPOSITION 6. Let τ* ≥ h**.^11 A necessary and sufficient condition for a conceal equilibrium under stock-based compensation is

(27)    A_λ^ai (1 − e^{−(β−G)h**})/(β − G) > (z − g − δ)/((r − g)(β − g)),

where the constant A_λ^ai is given in equation (23). Similarly, a necessary and sufficient condition for a reveal equilibrium under stock-based compensation is

(28)    A_λ^fi (1 − e^{−(β−G)h**})/(β − G) < (z − g − δ)/((r − g)(β − g)),

where the constant A_λ^fi is given in equation (12). Intuitively, the right-hand sides of both conditions (27) and (28) are the discounted utility under the reveal strategy. Because the compensation is stock-based, the stock multiplier 1/(r − g) enters the formula. The left-hand sides of both conditions are the discounted utility values under the conceal strategy. In particular, the stock multiplier A_λ^ai appears under the conceal equilibrium, whereas the stock multiplier A_λ^fi appears under the reveal equilibrium. Because A_λ^fi > A_λ^ai, conditions (27) and (28) imply that the two equilibria in pure strategies are mutually exclusive, and thus it is not possible to find parameters for which both equilibria exist at the same time. However, it may happen that for some parameter combinations, no pure-strategy Nash equilibrium exists.

III.D. Rational Expectations Equilibrium with the Choice of Effort

We now move back to t < τ* and obtain conditions for a Nash equilibrium that includes the manager's effort choice. The equilibrium depends on the type of compensation and on the equilibrium at time τ*. The expected utility for t < τ* is given by

(29)    U_t = E_t[ ∫_t^{τ*} e^{−β(u−t)} w_u [1 − c(e)] du + e^{−β(τ*−t)} U_τ* ],

∗ (29) Ut = Et e−β(u−t) wu[1 − c(e)] du + e−β(τ −t) Uτ ∗ , t

11. The solution for the case τ ∗ < h∗∗ is cumbersome and thus is relegated to the Appendix.


where U_τ* is the manager's utility at τ*, computed in the preceding section, whose exact specification depends on the equilibrium itself. We now derive the conditions under which stock-based compensation induces high effort. Because "conceal" is the most frequent equilibrium at t = τ* (see Section V), we focus our attention only on this case.

PROPOSITION 7. Let t ≥ h** and let λ^H be such that a conceal equilibrium is obtained at τ*. Then high effort e^H is the equilibrium strategy iff

(30)    (λ^L + β − G)/(λ^H + β − G) > (1 + λ^L H_ai^Stock)/(1 − c_H + λ^H H_ai^Stock),

where

H_ai^Stock ≡ (1 − e^{−(β−G)h**})/(β − G).
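Conditions (27), (28), and (30) can be screened numerically. In the sketch below all parameters and the horizon h** are assumed (h** is determined in the Appendix via (49) and (50)); the code verifies that the two pure-strategy equilibria are mutually exclusive, and that at these values condition (30) holds:

```python
import math

# Screening (27), (28), and (30) at assumed parameter values; the conceal-
# equilibrium premise of Proposition 7 need not hold at these exact values.
z, delta, r, G, g, beta, xi = 0.18, 0.05, 0.08, 0.10, 0.02, 0.12, 0.5
lam_H, lam_L, c_H, h2 = 0.05, 0.20, 0.10, 5.0   # h2 is an assumed h**

def A_fi(lam):   # equation (12)
    return (z - G - delta)/(r + lam - G) + lam*(z - g - delta)/((r - g)*(r + lam - G))

def A_ai(lam):   # equation (23)
    return (z - G - delta)/(r + lam - G) + lam*math.exp(-(G - g)*h2) * \
           (z - r - delta + (r - g)*xi)/((r - g)*(r + lam - G))

H_ai = (1 - math.exp(-(beta - G)*h2)) / (beta - G)   # H_ai^Stock
reveal_side = (z - g - delta) / ((r - g)*(beta - g))

conceal_eq = A_ai(lam_H) * H_ai > reveal_side        # condition (27)
reveal_eq  = A_fi(lam_H) * H_ai < reveal_side        # condition (28)
assert not (conceal_eq and reveal_eq)   # mutually exclusive, since A_fi > A_ai

# condition (30): stock pay induces high effort given a conceal equilibrium
high_effort = (lam_L + beta - G)/(lam_H + beta - G) > \
              (1 + lam_L*H_ai)/(1 - c_H + lam_H*H_ai)
assert high_effort
```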

Condition (30) is similar to condition (19) in the benchmark case, and it has the same intuition (see the discussion after Proposition 3). To summarize, Propositions 6 and 7 show that under conditions (27) and (30), pure stock-based compensation induces a high effort/conceal equilibrium: the CEO exerts high effort to increase the firm's growth options but will not disclose bad news about those growth options when the time comes. Section V shows that these conditions hold for a wide range of parameters.

IV. ALTERNATIVE SIMPLE COMPENSATION SCHEMES

The preceding section focused on a simple, linear stock-based compensation scheme. In this section we consider alternative simple compensation schemes widely used in the industry. The final section also discusses the properties of an optimal contract.

IV.A. Flat Wage

Suppose the manager simply receives a wage w_t that is not contingent on anything. For simplicity, assume w_t = w, a constant. In this case, it is intuitive that at time τ* the manager prefers to reveal the decrease in investment opportunities to shareholders, as he or she would then receive w for a longer period. The drawback, of course, is that the manager has no incentive to exert costly effort,


because he or she would bear the effort cost c_H. Thus, the resulting Nash equilibrium is a low effort/reveal equilibrium, as shareholders expect that the manager will not exert effort, and prices adjust to P_fi,t^before = e^{Gt} A_{λ^L}^fi in (11). From the shareholders' point of view, the interesting question is whether it is better to induce a low effort/reveal equilibrium through a simple flat wage, or a high effort/conceal equilibrium through stock-based compensation. Each of the two equilibria has one positive and one negative feature. The next corollary answers this question, and Section V contains a quantitative assessment of it.

COROLLARY 3. There are thresholds λ̄^H and λ̄^L such that for λ^H < λ̄^H and λ^L > λ̄^L the value of the firm under the high effort/conceal equilibrium is higher than under the low effort/reveal equilibrium. That is, P_ai,0 > P_fi,0^before.

Intuitively, as λ^H → 0, the price under asymmetric information converges to the Gordon growth formula with high growth: P_ai,0 → (z − G − δ)/(r − G). Similarly, as λ^L → ∞, the price under full information converges to the same formula but with the low growth rate g: P_fi,0^before → (z − g − δ)/(r − g). Because in our model z > r + δ, in the limit P_ai,0 > P_fi,0^before. This corollary implies that if the manager's effort strongly affects investment opportunities growth, then shareholders prefer an incentive scheme that induces a conceal strategy as a side effect. They are willing to tolerate the stock price crash at T** and the recapitalization as a delayed cost of providing incentives for longer-term growth. This implies that discovering ex post that managers have not been investing optimally during their tenure is not necessarily in conflict with shareholders' ex ante choice. Given the choice between these two equilibria, ex ante shareholders would be happy to induce high growth at the expense of the later cost of a market crash. We believe that this is a new insight in the literature. Section V below shows that stock-based compensation is ex ante optimal for a wide range of reasonable parameters.

IV.B. Deferred Compensation: Vesting

A popular incentive scheme is to delay the compensation of managers for a few years. Indeed, there is a conventional wisdom


that delayed stock-based compensation provides the incentives both to exert high effort and to reveal any bad news about the company. Unfortunately, this conventional wisdom is not warranted, as we now show. To see the problem with the argument, consider the case in which the firm pays the manager at a rate of η_t shares per period, with the shares vesting after k years. Because of linear preferences, it is optimal for the manager to sell all of the shares as they vest, and consume out of the proceeds.^12 Thus, at time t, the manager's consumption is w_t = η_{t−k} P_t. As in the preceding section, we study the case in which the firm always awards the same number of shares per period, η_t = η, which makes the consumption at time t simply w_t = η P_t. Assume that if the CEO conceals at τ*, he or she loses all the nonvested shares at the time of the cash shortfall T**. It is immediate then that if τ* > k, the intertemporal utilities at τ* under either a reveal or a conceal equilibrium are given by the expressions (24)-(26). Thus, in this case, the incentive problem of the manager is identical to the one examined earlier in Proposition 7. That is, delayed stock-based compensation is completely ineffective in inducing the manager to reveal bad news about growth options. Intuitively, if τ* > k, then at τ* the manager has accumulated enough vesting shares per period that the revelation of bad news about growth options would undermine the value of his or her compensation stream. The manager will then retain the information. What if τ* < k? The only change in the expressions for the intertemporal utilities (24)-(26) is that the integral starts from k instead of τ*, and the expressions have to be modified accordingly.
In this case, indeed, for k long enough the intertemporal utility under the conceal strategy decreases to zero more quickly than the one under the reveal strategy, and thus a large vesting period k would provide the correct incentives if \tau^* < k. However, relying on this event (\tau^* < k) to provide the proper incentives to the CEO is misguided, as the lower utility under stock-based compensation stems only from the fact that the CEO has not had enough time to accumulate shares before \tau^*. This logic is problematic for two reasons: First, shareholders actually want \tau^*

12. See footnote 7.


QUARTERLY JOURNAL OF ECONOMICS

to occur as far in the future as possible, and thus hoping also to have \tau^* < k runs against their desire to push the manager to exert high effort, unless the delay k is unreasonably long. Second, the argument relies on t = 0 being the time at which the CEO starts accumulating shares, which is also problematic. In our model, t = 0 is any time at which there is full disclosure about both the capital K_0 and the company growth rate \tilde{g} = G. For instance, if this time corresponds to an IPO, it is typically the case that the owner selling the firm becomes the CEO of the new company while initially retaining a large fraction of the firm's ownership. Similarly, managers who are promoted from within the firm have already accumulated shares during their time at the firm. Thus, if we assume that at time t = 0 the manager is already endowed with shares becoming eligible for vesting, then \tau^* < k can never happen, and the equilibrium is effectively identical to the one in the preceding section. We finally note that when the manager decides at \tau^* to conceal the bad news about future investment opportunities, he or she does so in the full knowledge that at the later time T^{**} he or she will lose all of the nonvested shares (\eta k), suffering an effective loss at T^{**} equal to \eta k P_{T^{**}}. This amount can be quite substantial, as P_{T^{**}} is very high (see Figure IV), and thus it may appear at first that a CEO who loses a massive amount of wealth at the time of the firm's liquidity crisis (i.e., T^{**}) cannot be responsible for the crisis itself, as he or she is the first to lose. But this conclusion is not warranted, because the price reached the high level P_{T^{**}} precisely because of the (misleading) behavior of the CEO. Had the CEO behaved in the best interest of the shareholders, such a high value of the stock would not have been realized in the first place, as shown by the dashed line in Figure IV.
This analysis therefore cautions against drawing any conclusion about the behavior of CEOs from their personal losses of wealth at the time of the firm's liquidity crisis.

IV.C. Deferred Compensation: Delayed Payments

The main problem with vesting is that the effective compensation at t still depends on the stock price at t. A popular variation of deferred compensation is to delay the payment by k years, so that the consumption at time t is w_t = \eta_{t-k} P_{t-k}


if the manager acts in the best interest of the shareholders, or zero after the cash shortfall T^{**}, if it happens. This compensation scheme is equivalent to placing the cash equivalent of the pure stock-based compensation in an escrow account, and paying it k years later if the CEO has not been caught misbehaving. The next proposition characterizes the equilibrium:

PROPOSITION 8.
a. Let \bar{k} be defined by the equation

(31)   A^{fi}_{\lambda} \frac{z-g-\delta}{(r-g)(\beta-g)} = \frac{1 - e^{-(\beta-G)(h^{**}-\bar{k})}}{\beta-G}.

Then the manager reveals at \tau^* iff k \geq \bar{k}.
b. Let t \geq k = \bar{k} and let \tau^* not have been realized yet.13 Then the manager exerts high effort e^H iff

(32)   c^H \frac{\beta + \lambda^L - G}{\beta + \lambda^H - G} < (\lambda^L - \lambda^H)\, e^{-(\beta-G)h^{**}}.

Thus, there exists a constant \bar{c}^H such that if c^H < \bar{c}^H, then high effort/reveal is a Nash equilibrium.

This proposition shows that the stock-based delayed-payment compensation scheme may effectively achieve the first best, as the CEO exerts high effort (b) and reveals the true state at \tau^* (a). This is good news, and it matches the intuition that the stock component of the compensation still provides the incentive to work hard, while the delayed payment provides the incentive to reveal any bad news about the long term. However, the proposition also highlights two requirements: First, the delay must be sufficiently long (k \geq \bar{k}), and second, the cost of effort must be sufficiently low (c^H < \bar{c}^H). Unfortunately, if either requirement is not satisfied, the equilibrium breaks down. In our basic calibration, we find that the minimum delay needed to induce truth revelation is \bar{k} \approx 11.8 years, which we find unrealistically long. In addition, we find that the threshold on the managerial cost is \bar{c}^H \approx 1.8\%, which is below the parameter range we use in our calibration. The optimal contract in Section IV.G and its implementation through a stock-plus-bonus compensation scheme in Section V.C provide a solution that works across a large range of parameter values.

13. We assume t \geq k to sidestep the issue of shares having already been accumulated by \tau^*, as discussed in the preceding section.
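The threshold delay \bar{k} in (31) has a closed form once A^{fi}_{\lambda} and h^{**} are known. Neither is pinned down numerically in this section, so the sketch below treats them as illustrative placeholders (A^{fi}_{\lambda} = 0.5 and h^{**} = 20 years are assumptions, not calibrated values) and only verifies that the closed-form \bar{k} satisfies the equation:

```python
import math

# Table I parameters (Section V). A_fi (A_lambda^fi) and h2 (h**) are not
# given numerically in this section: the values below are hypothetical.
r, G, g, delta, z, beta = 0.08, 0.07, 0.01, 0.01, 0.16, 0.18
A_fi = 0.5   # hypothetical placeholder for A_lambda^fi
h2 = 20.0    # hypothetical placeholder for h** (years from tau* to T**)

def k_bar(A_fi, h2):
    """Closed-form solution of equation (31) for the minimum delay k_bar."""
    R = A_fi * (z - g - delta) * (beta - G) / ((r - g) * (beta - g))
    assert 0.0 < R < 1.0, "equation (31) has no finite solution for these inputs"
    return h2 + math.log(1.0 - R) / (beta - G)

kb = k_bar(A_fi, h2)
# Residual check: k_bar must satisfy (31).
lhs = A_fi * (z - g - delta) / ((r - g) * (beta - g))
rhs = (1.0 - math.exp(-(beta - G) * (h2 - kb))) / (beta - G)
print(kb, lhs - rhs)
```

With these placeholder inputs the implied delay is on the order of a decade; the point of the sketch is the functional form, not the number itself.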


IV.D. Option-Based Compensation

How does an option-based contract affect the incentive to conceal bad news about growth options? We now show that such a contract amplifies the incentive to conceal. Let \eta_t denote the number of options awarded at time t, and let k be their time to maturity. As is standard practice, we assume that these options are issued at the money, with strike price H_t = P_t. Thus, the consumption of the manager at t is given by w_t = \eta_{t-k} \max(P_t - H_{t-k}, 0) = \eta_{t-k} \max(P_t - P_{t-k}, 0). Consider again time \tau^*. In this case, the intertemporal utilities under the reveal and conceal strategies in the reveal Nash equilibria14 are given by

(33)   U^{Reveal,fi}_{Option,\tau^*} = \int_{\tau^*}^{\tau^*+k} e^{-\beta(t-\tau^*)} \eta_{t-k} \max\left(P^{after}_{fi,t} - P^{before}_{fi,t-k}, 0\right) dt + \int_{\tau^*+k}^{\infty} e^{-\beta(t-\tau^*)} \eta_{t-k} \max\left(P^{after}_{fi,t} - P^{after}_{fi,t-k}, 0\right) dt,

(34)   U^{Conceal,fi}_{Option,\tau^*} = \int_{\tau^*}^{\tau^*+h^{**}} e^{-\beta(t-\tau^*)} \eta_{t-k} \max\left(P^{before}_{fi,t} - P^{before}_{fi,t-k}, 0\right) dt.

The intertemporal utilities under these two strategies in the conceal equilibrium are identical but with the asymmetric information price P_{ai,t} substituted in place of P^{before}_{fi,t} in both (33) and (34). The next proposition shows that the leverage implied by option-like contracts in fact makes the conceal strategy more likely.

PROPOSITION 9.
a. Let k^*_{fi} be defined by

(35)   k^*_{fi} = \frac{1}{G} \log\left(A^{fi}_{\lambda} \frac{r-g}{z-g-\delta}\right).

Then for k < k^*_{fi} a reveal equilibrium at \tau^* holds iff

(36)   \frac{(e^{gk} - 1)}{(1 - e^{-Gk})} \frac{\beta - G}{\beta - g} > e^{Gk^*_{fi}} e^{\beta k} \left(1 - e^{-(\beta-G)h^{**}}\right).

There exists \underline{g} > 0 such that this condition is always violated for g < \underline{g}. Thus, a reveal equilibrium cannot be supported when g is small.

14. See notation and discussion in Section III.C.


b. Let k^*_{ai} be defined as in (35) but with A^{ai}_{\lambda} in place of A^{fi}_{\lambda}, and let \tau^* > h^{**} + k. Then, for k < k^*_{ai}, a conceal Nash equilibrium at \tau^* occurs if

\frac{(e^{gk} - 1)}{(1 - e^{-Gk})} \frac{\beta - G}{\beta - g} < e^{Gk^*_{ai}} e^{\beta k} \left(1 - e^{-(\beta-G)h^{**}}\right).

There exists \underline{g} > 0 such that this condition is always satisfied for g < \underline{g}. Thus, a conceal equilibrium can always be supported when g is small.

The intuition behind this proposition is straightforward. Consider the case in which g = 0. In this case, the pricing formula in (11) shows that when the information is revealed, the price drops to P^{after}_{fi,t}, which is a constant. The value of k^*_{fi} in (35) is the time lag that ensures that the price at revelation, P^{after}_{fi,\tau^*}, equals the price before revelation while it was still increasing, P^{before}_{fi,\tau^*-k^*_{fi}} = P^{after}_{fi,\tau^*}. Clearly, for k < k^*_{fi}, it follows that the price after revelation is always smaller than the strike price H_{\tau^*-k} = P^{before}_{fi,\tau^*-k}, pushing the option out of the money. The case in which g = 0 also implies that the manager cannot expect his future options ever to be in the money. Thus, in this case, by revealing, the CEO gets intertemporal utility equal to zero. By concealing, in contrast, he always receives positive utility. It follows from this argument that option-like payoffs tend to increase the incentive to conceal bad news compared to the case in which the manager has a linear contract. If k > k^*_{fi}, calculations are less straightforward, but because simple stock-based compensation can be considered a special type of option-based contract with strike price H = P_0, it follows that increasing the strike price only decreases the payoff if the manager reveals information, decreasing his incentive to reveal.

IV.E. Cash Flow–Based (Bonus) Compensation

One alternative to stock-based compensation is a profit-based compensation contract. In the spirit of our simple neoclassical model with no frictions, we assume the firm pays out its output net of investments in the form of dividends. From an accounting standpoint, these should be considered the firm's free cash flow,15 which coincides with dividends and earnings in our model, but

15. We thank Ray Ball for pointing this out.


that in reality they are different and subject to different degrees of manipulation. Free cash flows are arguably harder to manipulate, and thus we consider a simple compensation contract defined on cash flows. Let the compensation be given by w_t = \eta^d D_t. In this case, we obtain the following:

PROPOSITION 10. Under cash flow–based compensation, a necessary and sufficient condition for a "reveal" equilibrium at t = \tau^* is

(37)   \frac{z-G-\delta}{\beta-G}\left(1 - e^{-(\beta-G)h^{**}}\right) < \frac{z-g-\delta}{\beta-g}.

In addition, the manager exerts high effort, e^H, iff

(38)   \frac{\lambda^L + \beta - G}{\lambda^H + \beta - G} > \frac{1 + \lambda^L H^{Div}}{1 - c^H + \lambda^H H^{Div}},

where

(39)   H^{Div} = \frac{(z - g - \delta)}{(z - G - \delta)(\beta - g)}.

A Nash equilibrium with high (low) effort obtains iff (38) is (is not) satisfied.

This compensation strategy can achieve the first best under some parameterizations. In particular, we find that condition (37) is satisfied for most parameter configurations. This result is in fact intuitive, and leads us to the optimal compensation discussed later. Referring to the top panel of Figure II, we see that when the manager optimally reveals, he or she has to increase the payout to shareholders, as there are no longer any investment opportunities available. Effectively, by doing so, the manager also increases his or her own compensation. That is, this cash flow–based compensation resembles a "bonus" contract in which the revelation of bad news leads to a higher cash payment and thus higher utility. Of course, a drawback of this compensation scheme is that it provides an incentive to pay out dividends that are too high and thus to sacrifice investment. In fact, because the manager is impatient, \beta > r, he or she prefers to have \tau^* occur as soon as possible, which decreases his or her incentive to exert high effort. Indeed, we find that condition (38) is satisfied only under some extreme parameterizations, in which both the return on capital z and the growth rate G are large. In this case, the higher discount \beta is


compensated by (much) larger cash flows in the future if the manager invests heavily, that is, if he or she exerts high effort. In all other cases, cash flow–based compensation leads to low effort and revelation, thereby generating the same type of conundrum already discussed in Section IV.A.

IV.F. Clawback Clauses

One final popular incentive scheme is to insert clawback clauses into the CEO compensation package. Such clauses establish that the CEO must return part or all of the compensation he or she received over a given period if he or she is found guilty of misconduct. In our model, in a conceal equilibrium the CEO does not disclose all the information to shareholders, which can be considered reasonable cause for shareholders or regulators to proceed against the CEO. Clearly, if the difference between \tau^* and T^{**} is verifiable in court, then by imposing a sufficiently large penalty at T^{**} we can always ensure that the manager discloses. The clawback clause is just such a penalty. Our model, however, suggests that shareholders and regulators have to be careful even with clawback clauses. For instance, suppose that the distinction between \tau^* and T^{**} is observable but not verifiable, meaning that it would be hard to prove in court that the manager has misbehaved. Although in our stylized model it is simple to detect misbehavior by the CEO, in reality the CEO's nonoptimal investment strategy is much harder to detect, let alone prove in court. In this case, one may decide to make the clawback clause contingent on some measure of performance. Suppose, for instance, that shareholders move against the manager to claw back the salary paid whenever the price drops. In this case, it is intuitive that the manager has no incentive to reveal his or her information at time \tau^*, as doing so would induce a price decline. By concealing, the manager can push the price decline further into the future, and thus maximize his or her utility.
Another possibility is to set up a clawback clause contingent on a (large) recapitalization, which is the main difference between \tau^* and T^{**}. This clause would indeed solve the problem within our model,16 although it relies on the fact that in our model the firm never needs to go to the capital markets. In an extension of the

16. The clawback clause must require the CEO to return the actual compensation received. As shown in Section IV.B, losing only the shares \eta k not yet vested at T^{**}, for instance, would not alleviate the incentive to conceal the bad news at \tau^*.


model in which the firm may need to raise more capital for investment purposes, for instance to open a new identical firm with the same technology so as to increase its size, then again there is the risk that by adopting a clawback clause the shareholders would not be inducing the optimal CEO behavior.

IV.G. The Optimal Contract

The preceding sections considered relatively standard simple compensation packages, adapted to our stylized model, and discussed their pros and cons. In this final section we briefly discuss the characteristics of the optimal contract and compare them to the previous contracts. As in our dynamic model all quantities increase at an exponential rate, we restrict our attention to contracts of the form

w_t = \begin{cases} w^b_t = A_b e^{B_b t} & \text{if } t < \tau^*, \\ w^a_t = A_a e^{B_a t + C_a \tau^*} & \text{if } t \geq \tau^*, \end{cases}

where the subscript a stands for "after \tau^*" and the subscript b stands for "before \tau^*." We assume for simplicity that although \tau^* is not ex ante observable by shareholders, they are able to observe whether \tau^* has been realized once the announcement is made. As discussed earlier, the manager may produce convincing evidence that investment opportunities deteriorated at \tau^*, whereas he or she may refrain from producing this information in a conceal equilibrium. This simplifying assumption allows us to make the contract's payoff contingent on the announcement itself.17 All bargaining power is with the firm, but the manager has an outside option, given by U^O_t = A_O e^{B_O t}, which may be growing over time. Because the resources to pay the manager are outside the model (they do not come from dividends themselves), effectively the firm solves

\min E\left[\int_0^{\infty} e^{-rt} w_t\, dt\right]

17. For simplicity, we sidestep here the issue of truthful revelation, that is, the incentive to have the manager announce \tau^* when it actually happens.


conditional on the following incentive compatibility constraints:

(40)   (Reveal at \tau^*)           U^{Reveal}_{\tau^*} \geq U^{Conceal}_{\tau^*}   for all \tau^*,
(41)   (High effort before \tau^*)  U^H_t \geq U^L_t   for all t \leq \tau^*,
(42)   (Outside option)             U_t \geq U^O_t   for all t,

where

U^{Reveal}_{\tau^*} = \int_{\tau^*}^{\infty} e^{-\beta(t-\tau^*)} w^a_t\, dt,   U^{Conceal}_{\tau^*} = \int_{\tau^*}^{T^{**}} e^{-\beta(t-\tau^*)} w^b_t\, dt,   and

U^i_t = E_t\left[\int_t^{\tau^*} e^{-\beta(s-t)} w^b_s \left(1 - c^i\right) ds + e^{-\beta(\tau^*-t)} U^{Reveal}_{\tau^*} \,\Big|\, i\right]   for i = H, L.
Note that we assume that by concealing, the manager loses the outside option; that is, there is a serious penalty for concealing. This is realistic. Adding back the outside option after concealing is possible at the cost of additional complications, but without much change in intuition. This assumption skews the manager against concealing. Because we find that with stock-based compensation concealing is widespread, adding back the outside option even after (being caught) concealing would just make it even more frequent.18 Finally, to ensure that E[\int_0^{\infty} e^{-rt} w_t\, dt] is finite, we must assume that r + \lambda > B_b and r > B_a. That is, the compensation does not grow at a higher rate than the cost of capital.

PROPOSITION 11. The incentive compatibility constraints are satisfied iff the following constraints are met:

(43)   (Reveal at \tau^*)   A_a \geq A_b \frac{\beta - B_a}{\beta - B_b}\left(1 - e^{-(\beta-B_b)h^{**}}\right);   (B_a + C_a) \geq B_b;

(44)   (High effort before \tau^*)   A_a \leq A_b \frac{\beta - B_a}{\beta - B_b}\left(1 - \frac{c^H(\beta + \lambda^L - B_b)}{\lambda^L - \lambda^H}\right);   B_b \geq (C_a + B_a);

(45)   (Outside option)   \frac{1}{\beta + \lambda^H - B_b}\left(A_b(1 - c^H) + \frac{\lambda^H A_a}{\beta - B_a}\right) \geq A_O;   B_b \geq B_O;

(46)   \frac{A_a}{\beta - B_a} \geq A_O;   C_a \geq 0;   B_a \geq B_O.

18. Although we did not mention any outside option in preceding sections, as the compensation was assumed exogenous, it is necessary here to ensure that the manager has a positive lower bound to his or her payment (the firm has all the bargaining power). The analysis here is nonetheless consistent with the previous one so long as B_O is small enough, and the free parameter \eta is set to equalize the manager's expected utility at time 0 with the value of the outside option.

Subject to these constraints, the firm then minimizes

V = E\left[\int_0^{\infty} e^{-rt} w_t\, dt\right] = \left(A_b + \frac{\lambda^H A_a}{r - B_a}\right)\frac{1}{r + \lambda^H - B_b}.

Constraint (43) shows that after \tau^*, the level A_a of the compensation has to be above some value to induce the manager to reveal the bad news. Similarly, constraint (44) shows that the level of compensation cannot be too high after \tau^*; otherwise the manager prefers not to exert effort so as to receive the higher payoff sooner. These two constraints combined imply that B_a + C_a = B_b and

A_b \frac{\beta - B_a}{\beta - B_b}\left(1 - \frac{c^H(\beta + \lambda^L - B_b)}{\lambda^L - \lambda^H}\right) \geq A_a \geq A_b \frac{\beta - B_a}{\beta - B_b}\left(1 - e^{-(\beta-B_b)h^{**}}\right).

This last constraint determines a feasibility region for A_a. The region is nonempty iff the cost of exerting high effort is below a threshold:

c^H \leq \frac{(\lambda^L - \lambda^H)\, e^{-(\beta-B_b)h^{**}}}{\beta + \lambda^L - B_b}.

Because the right-hand side is increasing in B_b, this constraint implies a lower bound on the growth rate of the CEO's compensation before \tau^*. We further discuss the properties and the intuition of the optimal contract in our calibration analysis in Sections V.B and V.C. There we also compare stock-based compensation to the optimal contract and illustrate how the optimal contract can be approximated by an appropriate stock-plus-bonus compensation. It is worth emphasizing immediately, however, that because of the simplicity of the model, we are able here to make the contract contingent only on time t and \tau^*. This is useful for gauging the characteristics of the contract. In its implementation, however, one must use a better proxy than t for the firm's growth. We return to this issue below.
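The feasibility threshold on c^H can be evaluated directly. Because h^{**} is not reported numerically in this section, the sketch below treats it as an assumed input and simply verifies the monotonicity claim: the admissible cost ceiling rises with the pre-\tau^* growth rate B_b of compensation.

```python
import math

# lambda^H and lambda^L from Table I (E[tau*] of 15 and 2 years); h2 (h**)
# is an assumed input, since its calibrated value is not reported here.
beta, lam_H, lam_L = 0.18, 1 / 15.0, 1 / 2.0

def c_H_ceiling(B_b, h2):
    """Largest effort cost c^H for which the feasibility region of A_a
    in the optimal contract is nonempty, as a function of B_b."""
    return (lam_L - lam_H) * math.exp(-(beta - B_b) * h2) / (beta + lam_L - B_b)

# The ceiling is increasing in B_b: feasibility effectively imposes a
# lower bound on the growth rate of compensation before tau*.
for B_b in (0.00, 0.04, 0.08):
    print(B_b, c_H_ceiling(B_b, h2=10.0))
```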


TABLE I
PARAMETER VALUES

Cost of capital          r     8%      Return on capital             z                        16%
High growth rate         G     7%      CEO discount rate             \beta                    18%
Low growth rate          g     1%      Expected \tau^* (high effort) E[\tau^*|e^H] = 1/\lambda^H   15 years
Depreciation rate        \delta 1%     Expected \tau^* (low effort)  E[\tau^*|e^L] = 1/\lambda^L   2 years
Minimal capital level    \xi   90%     CEO cost of effort            c^H                      5%
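As a quick check on Corollary 3, the Table I parameters imply the two limiting Gordon values directly (a minimal sketch; only the parameter values above are used):

```python
# Table I parameters.
r, G, g, delta, z = 0.08, 0.07, 0.01, 0.01, 0.16

# Corollary 3 limits: price under asymmetric information as lambda^H -> 0,
# and price under full information as lambda^L -> infinity.
P_ai_limit = (z - G - delta) / (r - G)
P_fi_limit = (z - g - delta) / (r - g)

# The corollary's premise z > r + delta holds, so P_ai_limit > P_fi_limit.
assert z > r + delta
print(P_ai_limit, P_fi_limit)
```

With these values the limits are 8.0 and 2.0, so in the limit the high effort/conceal price is four times the low effort/reveal price.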

V. QUANTITATIVE IMPLICATIONS

In this section we examine when stock-based compensation generates a high effort/conceal equilibrium. For comparison, and to pave the way for the discussion of the optimal contract in Section V.C, we also consider the equilibrium induced by the cash flow–based (bonus) type of contract discussed in Section IV.E. The base parameter values are in Table I. Figure V shows the partition of the (z, G) parameter space into regions corresponding to the various equilibria.19 In the top right area the manager chooses high effort regardless of the compensation mode. Consequently, in this region, compensating the manager with only a cash-flow (bonus) type of contract achieves the first best, as in this case he or she also reveals the bad news to investors and maximizes firm value. This region consists of firms characterized by a high return on investment, z, and high growth, G, of investment opportunities. Such firms do not have to use stock-based compensation to induce high effort. The region below and to the left of the top right area is where bonus compensation no longer induces high effort, whereas stock-based compensation does, although in a conceal equilibrium. This is the most interesting region, where we observe a trade-off between effort inducement and truth-telling inducement. Firms with reasonably high growth rates and returns on investment are in that region. Finally, the region below and to the left of that one is where we no longer have a pure-strategy equilibrium under stock-based compensation, whereas cash flow–based bonus compensation still induces a low effort equilibrium. This is a region where a stock-compensated manager would prefer to conceal if he or she chose high effort, but would no longer choose high effort if he or she

19. Online Technical Appendix B shows that stock-based compensation indeed induces a conceal equilibrium for most parameter values in the spaces (g, G) and (E[\tau], G) as well. The implications are similar and omitted for brevity.
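The boundary between the bonus-compensation regions of Figure V can be traced with condition (38) of Proposition 10. The sketch below codes the inequality as we read it; the benchmark point is from Table I, while (z, G) = (25%, 13%) is an illustrative point near the top right of the plotted range:

```python
# High-effort condition (38) under cash flow-based (bonus) compensation,
# as we read it, with H^Div from (39). Parameters from Table I.
beta, g, delta, c_H = 0.18, 0.01, 0.01, 0.05
lam_H, lam_L = 1 / 15.0, 1 / 2.0

def high_effort(z, G):
    H_div = (z - g - delta) / ((z - G - delta) * (beta - g))
    lhs = (lam_L + beta - G) / (lam_H + beta - G)
    rhs = (1 + lam_L * H_div) / (1 - c_H + lam_H * H_div)
    return lhs > rhs

print(high_effort(z=0.16, G=0.07))  # Table I benchmark: bonus comp. gives low effort
print(high_effort(z=0.25, G=0.13))  # high z, high G: bonus comp. gives high effort
```

Consistent with the text, the condition fails at the benchmark parameters and holds only when both z and G are large.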

[Figure V here. Axes: return on investment z (vertical, 0.14–0.24 shown) against high growth G (horizontal, 0.03–0.13). Region labels: top right, "Stock comp.: high effort/conceal eq.; Cash-flow (bonus) comp.: high effort/reveal eq."; middle, "Stock comp.: high effort/conceal eq.; Cash-flow (bonus) comp.: low effort/reveal eq."; bottom left, "Stock comp.: no pure strategy eq.; Cash-flow (bonus) comp.: high effort/reveal eq."]
FIGURE V
Equilibrium Areas under Stock-Based Compensation and Cash Flow–Based Compensation
In the (z, G) space, the figure shows the areas in which the following equilibria obtain: (a) the high effort/reveal equilibrium under dividend-based compensation; (b) the low effort/reveal equilibrium under dividend-based compensation; and (c) the high effort/conceal equilibrium under stock-based compensation. For all combinations of parameters, dividend compensation generates a reveal equilibrium. z ranges between 12% and 25%, whereas G ranges between 3% and 14%. The remaining parameters are in Table I.

concealed. Part of that region corresponds to the conceal/low effort equilibrium (the worst possible scenario), whenever it exists; its existence depends on \lambda^L, and it does not exist for high levels of \lambda^L. The remainder of the region corresponds to equilibria in mixed strategies. Solving for these is complicated, as the dynamic updating of investors' beliefs becomes very tedious, and they are not likely to provide new intuitions; thus we ignore them. We find the region above, where the real trade-off takes place, of most interest.

V.A. High Effort or Truthful Revelation?

The preceding section shows a large area of the parameter space in which a high effort/conceal equilibrium and a low effort/reveal equilibrium may coexist. Are shareholders better off with low effort and an optimal investment strategy, or with high effort and a suboptimal investment strategy? Corollary 3 shows that the choice depends on the difference between \lambda^L and \lambda^H. This section provides a quantitative illustration of the trade-off.


[Figure VI here. Top panel: "Dividend path under three equilibria"; bottom panel: "Price path under three equilibria." Each panel plots the stock-based compensation, dividend-based compensation, and high effort–symmetric information paths over 0–50 years, with \tau^* marked under \lambda^H and under \lambda^L.]

FIGURE VI Dividend and Price Paths in Three Equilibria The figure plots hypothetical dividend (top panel) and price (bottom panel) paths in the cases of “stock-based compensation” (solid line); “dividend-based compensation” (dotted line); and the first best benchmark case with symmetric information and optimal investment (dashed line). Parameter values are in Table I.

To illustrate the trade-off, Figure VI plots the hypothetical price and dividend paths under the high effort/conceal and low effort/reveal equilibria. For comparison, it also reports the first best, featuring high effort and optimal investment after \tau^*. As shown in Corollary 3, the low effort/reveal equilibrium induced, for instance, by a flat wage or by cash flow–based (bonus) compensation entails too little effort, and this loss outweighs the benefits of optimal investment behavior at \tau^*. The high effort/conceal equilibrium induced by stock-based compensation, in contrast, gets closer to the first best, yet also leads to


suboptimal investment behavior, which generates the bubble-like pattern in dividend payouts and prices. To gauge the size of the trade-off between the two equilibria under various parameter choices, Table II reports the firm value at time t = 0, P_{ai,0} and P^{before}_{fi,0}, under the two equilibria (columns (2) and (4)), and the average decline in price when the true growth rate of investment is revealed, at T^{**} in the high effort/conceal equilibrium (column (3)) and at \tau^* in the low effort/reveal equilibrium (column (5)). Online Technical Appendix A contains closed-form formulas to compute the average decline (see Corollary A1). The first column reports the parameter that we vary relative to the benchmark case in Table I. The last two columns report the value and the expected decline in the first-best case. Panel A of Table II shows that even when the high growth rate G is relatively low, G = 6%, the high effort/conceal equilibrium achieves a higher firm value (P_{ai,0} = 2.48) than the low effort/reveal equilibrium (P^{before}_{fi,0}(\lambda^L) = 2.10), even though the former equilibrium induces a substantial expected market crash, E[P_{T^{**}}/P_{T^{**}-} - 1] = -51.49%, at T^{**}, against a milder decline of only E[P_{\tau^*}/P_{\tau^*-} - 1] = -4.59% in the latter case. The last two columns show that the first best achieves an even higher firm value, P^{before}_{fi,0}(\lambda^H) = 2.58, although this value is not much higher than the one under asymmetric information. Note that even in the first-best case there is a market decline at revelation (-22.38%), although it is far smaller than in the asymmetric information case. The remainder of Table II (Panels B–F) shows that a similar pattern obtains for a wide range of parameter choices.20 For instance, in the base case, low effort induces an expected time of investment growth E[\tau^*|e^L] = 2 years. However, Panel B shows that even if E[\tau^*|e^L] is as high as eight years, a similar result applies, as P^{before}_{fi,0}(\lambda^L) is always lower than P_{ai,0}. Panel C shows that a higher return on investment z leads to an increase in prices (across equilibria) and a mild decline in the size of the crash at T^{**} in the asymmetric information case. Panel D shows that a higher cost of capital r reduces both the prices across the equilibria and the decline at revelation, although the impact on the asymmetric information case is smaller than that on the

20. In Panels B and F we set the cost c^H = 2% instead of the c^H = 5% assumed throughout to ensure that all three equilibria exist under the parameter choices in column (1).


TABLE II
HIGH EFFORT OR TRUTHFUL REVELATION?

                     High effort/conceal eq.                  Low effort/reveal eq.                       High effort/reveal eq.
(1)                  P_{ai,0}   E[P_{T**}/P_{T**-} - 1]       P^{before}_{fi,0}(\lambda^L)   E[P_{\tau*}/P_{\tau*-} - 1]   P^{before}_{fi,0}(\lambda^H)   E[P_{\tau*}/P_{\tau*-} - 1]

Panel A: Investment opportunities growth
G = 6.00%            2.48       -51.49%                       2.10       -4.59%                           2.58       -22.38%
G = 8.00%            2.85       -66.24%                       2.14       -6.54%                           3.05       -34.42%
G = 10.00%           3.48       -78.99%                       2.19       -8.57%                           3.93       -49.08%

Panel B: Expected \tau^L under low effort
E[\tau^L] = 4.00     2.65       -59.11%                       2.23       -10.34%                          2.78       -28.12%
E[\tau^L] = 6.00     2.65       -59.11%                       2.34       -14.51%                          2.78       -28.12%
E[\tau^L] = 8.00     2.65       -59.11%                       2.44       -18.18%                          2.78       -28.12%

Panel C: Return on investment
z = 16.00%           2.65       -59.11%                       2.12       -5.55%                           2.78       -28.12%
z = 18.00%           3.19       -57.52%                       2.44       -6.21%                           3.29       -30.56%
z = 20.00%           3.71       -56.26%                       2.76       -6.71%                           3.80       -32.35%

Panel D: Shareholders' discount rate
r = 7.00%            3.38       -60.66%                       2.49       -6.42%                           3.53       -33.96%
r = 8.00%            2.65       -59.11%                       2.12       -5.55%                           2.78       -28.12%
r = 9.00%            2.15       -57.89%                       1.84       -4.71%                           2.26       -22.88%

Panel E: Depreciation rate
\delta = 0.00%       2.92       -58.69%                       2.28       -5.90%                           3.04       -29.44%
\delta = 1.00%       2.65       -59.11%                       2.12       -5.55%                           2.78       -28.12%
\delta = 2.00%       2.37       -59.58%                       1.96       -5.15%                           2.53       -26.53%

Panel F: Minimum capital requirement
\xi = 60.00%         2.64       -66.70%                       2.11       -5.55%                           2.78       -28.12%
\xi = 70.00%         2.64       -64.35%                       2.11       -5.55%                           2.78       -28.12%
\xi = 80.00%         2.64       -61.85%                       2.11       -5.55%                           2.78       -28.12%

Notes. Column (1) reports the value of the parameter that is changed from its base value in Table I. Columns (2) and (3) report the firm value at t = 0 and the average stock decline at T^{**}, respectively, under the high effort/conceal equilibrium induced by stock-based compensation. Columns (4) and (5) report the firm value at t = 0 and the average stock decline at \tau^*, respectively, under the low effort/reveal equilibrium induced, for example, by flat wage compensation or cash flow–based (bonus) compensation. The last two columns report the same quantities under the high effort/reveal equilibrium induced by the optimal contract. Panels B and F use c^H = 2% instead of c^H = 5% to ensure that all three types of equilibria exist under the parameters in column (1).
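The ordering that the text emphasizes can be read off Table II mechanically. The sketch below re-enters columns (2), (4), and (6) and confirms that, in every reported row, the low effort/reveal value lies below the high effort/conceal value, which in turn lies below the first best:

```python
# Reported values from Table II, columns (2), (4), and (6): firm value at
# t = 0 under the high effort/conceal, low effort/reveal, and high
# effort/reveal (first best) equilibria, row by row across Panels A-F.
rows = [
    (2.48, 2.10, 2.58), (2.85, 2.14, 3.05), (3.48, 2.19, 3.93),  # Panel A
    (2.65, 2.23, 2.78), (2.65, 2.34, 2.78), (2.65, 2.44, 2.78),  # Panel B
    (2.65, 2.12, 2.78), (3.19, 2.44, 3.29), (3.71, 2.76, 3.80),  # Panel C
    (3.38, 2.49, 3.53), (2.65, 2.12, 2.78), (2.15, 1.84, 2.26),  # Panel D
    (2.92, 2.28, 3.04), (2.65, 2.12, 2.78), (2.37, 1.96, 2.53),  # Panel E
    (2.64, 2.11, 2.78), (2.64, 2.11, 2.78), (2.64, 2.11, 2.78),  # Panel F
]

for p_conceal, p_reveal_low, p_first_best in rows:
    # Low effort/reveal < high effort/conceal < first best in every row.
    assert p_reveal_low < p_conceal < p_first_best

print("ordering holds in all", len(rows), "rows")
```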


symmetric information case. The last two panels show the results for various depreciation rates \delta and minimum employable capital levels \xi. In particular, the smaller the minimum capital requirement \xi, the larger the size of the crash at T^{**}, as the firm can pretend for longer and will need an even larger recapitalization at T^{**}.21 The results in Table II highlight that if the shareholders' choice is between a contract that induces a low effort/reveal equilibrium and one that induces a high effort/conceal equilibrium, they should choose the latter, as firm value is much higher in this case and in fact rather close to the first-best high effort/reveal equilibrium. This finding may also explain why stock-based compensation is so widespread in the real world: If there is any difficulty in implementing an optimal contract that induces the first best, then simple stock-based compensation achieves a second best that is not too far from the first best, as can be seen by comparing the value of the assets in columns (2) and (6) of Table II. Erring on the side of inducing truth revelation but low effort, instead, has a much worse impact on the value of the firm than erring on the side of providing incentives to increase investment opportunities.

V.B. The Optimal Contract

What does the optimal contract look like? Figure VII compares the optimal contract established in Section IV.G to stock-based compensation, for a hypothetical realization of \tau^*, under the assumption that at \tau^* there is full revelation in both cases. This is the relevant scenario for understanding why the CEO is reluctant to reveal information under stock-based compensation. The figure contains six panels: The left-hand panels plot the optimal contract (dashed line) and the stock-based contract. Each of the three panels corresponds to a different level of the cost of effort c^H. In all cases, the payoffs have been scaled to ensure that the CEO receives the same expected utility at time 0. Of course, the optimal contract is cheaper for the firm, as it minimizes the
Of course, the optimal contract is cheaper for the firm, as it minimizes the

21. We note that some parameters do not affect the comparative statics: for instance, the manager's discount rate β and the cost of effort c_H affect only whether a conceal equilibrium or a reveal equilibrium obtains. Because the CEO's strategy conditional on concealing is simply to push the time of the cash shortfall T∗∗ as far into the future as possible, the latter depends only on the technological parameters and not on preferences. Thus, both the value of the firm and the size of the crash at T∗∗ are independent of these preference parameters.

STOCK-BASED COMPENSATION AND CEO (DIS)INCENTIVES

[Figure VII here. Panels: A. Stock comp. versus optimal comp., c_H = 0.02; B. Combined comp. versus optimal comp., c_H = 0.02; C. Stock comp. versus optimal comp., c_H = 0.05; D. Combined comp. versus optimal comp., c_H = 0.05; E. Stock comp. versus optimal comp., c_H = 0.08; F. Combined comp. versus optimal comp., c_H = 0.08. Each panel plots the compensation w_t against time.]

FIGURE VII
Optimal Contract versus Stock-Based Compensation

The figure plots compensation paths under the optimal contract and simple stock-based compensation (left panels) and a combined compensation with both stock and cash-flow (bonus) components (right panels). The vertical dotted bar corresponds to a hypothetical occurrence of τ∗. The figure reports three sets of panels, each pair corresponding to a different effort cost c_H. The remaining parameters are in Table I. The stock-based compensation and combined compensation are normalized to provide the same intertemporal utility to the CEO as the optimal contract at time 0, conditional on full revelation.

discounted expected future cash payouts. Finally, we assume the outside option has A_O = 1 and B_O = 1%, which reflects the fact that in our baseline case in Table I the growth of the firm also shifts to g = 1% at τ∗.


It is convenient to start from the middle panel (Panel C), which contains our base case c_H = 5% (see Table I). We see two noteworthy features of the optimal contract. First, the payoff to the manager is increasing over time, and it jumps up at τ∗ upon revelation. The discrete increase is a bonus that the manager must receive to have the incentive to reveal the bad news about investment opportunities. Because the manager is compensated for telling when τ∗ occurs, the impatient manager has an incentive to work little in order to bring τ∗ (which is ex post observable) forward as much as possible. An increasing payoff gives the manager the incentive to push τ∗ into the future.

These characteristics of the optimal contract, namely an increasing pattern plus a bonus for revealing bad news, contrast strongly with the payoff implicit in stock-based compensation, which is depicted by the solid line. The latter implies a fast-growing payoff, which provides the incentive to work hard and increase investment opportunities. At the same time, however, the payment to the CEO drops substantially at τ∗ if he or she reveals the bad news about growth opportunities. That is, a stock-based contract implicitly punishes the CEO for the good behavior of truth revelation. It follows that this implicit punishment provides an incentive to conceal the bad news, as discussed in Section III.A.

We see an additional interesting characteristic of the optimal contract by comparing Panel C with Panels A and E, which use different effort costs c_H: the higher the effort cost c_H, the stronger must be the increasing pattern in the optimal contract to ensure the manager is willing to exert high effort for longer, and thus the larger must be the bonus when he or she reveals the bad news. Indeed, when c_H is small (top panel), the overall compensation of the manager drops at τ∗.
Still, even in this case, the drop is far smaller than the one implied by stock-based compensation.

V.C. The Stock-Plus-Bonus Contract

This discussion shows that the optimal compensation is in general rising over time, to provide the incentive for high effort, and has a bonus component, to provide the incentive to reveal bad news. Whereas stock-based compensation implies a drop at τ∗, we recall from Section IV.E that a compensation w_t = η_d D_t generates a bonus type of payment at time τ∗. However, we


also recall that this compensation has a bonus that is too generous, and thus the manager has an incentive to work less in order to bring its payment forward. One possibility is to combine stock-based compensation with cash flow–based compensation to mimic the optimal contract, using variables that are observable in the firm. The right-hand-side panels of Figure VII report the contract obtained by combining the stock-based compensation w_t = ηP_t with the cash flow–based compensation w_t = η_d D_t, where we normalize η_d = η/(r − g), a value that ensures that the expected utility under stock-based and cash flow–based compensation is the same. We emphasize that we use the dividend D_t to be consistent with the model, but D_t could also be interpreted as a cash payment over time with the same characteristics as dividends in our model, in particular with a bonus component at revelation. We let ω be the weight on the stock-based compensation, so that the combined compensation is

(47)    w_t = ωηP_t + (1 − ω)η_d D_t.
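As a numerical illustration, the combined pay rule in (47) can be computed directly. The sketch below is ours, not the paper's code; the parameter values η, r, g and the price/dividend inputs are hypothetical, not the Table I calibration.

```python
# Illustrative sketch of the combined compensation rule (47):
#   w_t = omega * eta * P_t + (1 - omega) * eta_d * D_t,
# with the normalization eta_d = eta / (r - g) from the text.
# eta, omega, r, g and the (P_t, D_t) inputs are hypothetical values.

def combined_pay(P_t, D_t, eta=0.01, omega=0.30, r=0.05, g=0.01):
    """Weight omega on the stock component, 1 - omega on the dividend component."""
    eta_d = eta / (r - g)  # eta_d = 0.25 for these illustrative parameters
    return omega * eta * P_t + (1.0 - omega) * eta_d * D_t

# With a hypothetical price P_t = 100 and dividend D_t = 4:
# stock part = 0.30 * 0.01 * 100 = 0.30, dividend part = 0.70 * 0.25 * 4 = 0.70
w = combined_pay(100.0, 4.0)
```

Because D_t carries a bonus component at τ∗ in the model, the second term is what delivers the jump in pay at revelation, while the first term preserves the effort incentive.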

The compensation is linked only to variables in the firm and no longer depends on time t or on τ∗ itself. We must choose ω to ensure that under compensation (47) the manager has an incentive both to exert high effort and to reveal at τ∗. Proposition A1 in Online Technical Appendix A contains the formal conditions. We find that ω = 30% works for all three right-hand-side panels in Figure VII.22

Consider first the base case in Panel D. In this case, the combined compensation is similar to the optimal contract, in that it implies both a rising compensation over time and a bonus at τ∗. When the cost of effort is higher, as with c_H = 8% in Panel F, the match between the optimal contract and the combined stock-plus-bonus contract is very accurate. Instead, the combined compensation is not close to the optimal contract when the cost is small, such as c_H = 2% in Panel B. The stock-plus-bonus contract resembles real contracts that include golden parachutes and generous severance packages as a way of compensating the manager for revealing bad news at τ∗.

An important question pertains to the weight to give to stock in the combined compensation to ensure that the manager exerts effort and reveals bad news (recall that this discussion applies also to deferred compensation; see Section IV.B). Section B.2 in the

22. Again, we rescaled η to ensure that in all cases the expected utility at time 0 from the optimal contract and from the combined compensation is the same.


Online Technical Appendix shows that in our numerical exercises, the weight on stock ω in (47) is always below 40% for reasonable parameter values. In addition, unless the return on capital z is excessively high, we also find that ω is strictly positive, showing that some of the CEO's compensation must be in stock to ensure high effort.

Our analysis abstracts from the costs that different incentive schemes impose on the firm itself. Endogenizing the compensation costs to the firm in our dynamic model, however, is quite hard, as dividend flows have to be adjusted depending on the equilibrium price, and the fixed point that sustains the Nash equilibrium is hard to obtain. Nevertheless, we can approximate the size of these costs in the various equilibria by taking their present value at the firm's cost of capital and comparing the value of the firm net of these costs across the various incentive schemes. Our analysis (see Section B.3 in the Online Technical Appendix), although approximate, shows that the combined compensation plan discussed earlier achieves the first best without imposing too high a burden on the company.

VI. DISCUSSION AND CONCLUSIONS

Our paper contributes to the debate on executive compensation.23 On one hand, advocates of stock-based compensation highlight the importance of aligning managers' objectives with shareholders' and argue that compensating managers with stock achieves this goal. Detractors argue that stock-based compensation instead gives managers the incentive to misreport the true state of the firm and, at times, even to engage in outright fraudulent behavior. This paper sheds new light on the debate by analyzing the ex ante incentive problem of inducing managers to exert costly effort to maximize the firm's investment opportunities, while simultaneously inducing managers to reveal the true outlook for the firm ex post and follow an optimal investment rule.
We show that a combined compensation package that uses both stock-based performance pay and a cash flow–based bonus type of compensation reaches the first best, inducing the manager to

23. See Murphy (1999), Hall and Murphy (2003), Bebchuk and Fried (2004), Gabaix and Landier (2008), and Edmans and Gabaix (2009) for recent discussions.


exert costly effort and to reveal any worsening of the investment opportunities if it happens. Firm value is then maximized. Each component (stock and bonus) of the combined compensation package serves a different purpose, and thus both are necessary "ingredients": the stock-based component increases the manager's effort to expand the growth options of the firm, whereas compensating managers with bonuses when they reveal bad news about long-term growth significantly reduces their incentives to engage in value-destroying activities to support inflated expectations. Thus, our model supports the inclusion of a golden parachute or a generous severance package in the stock-based compensation package of the CEO.

It is crucial to realize, though, that the weight on stock in the combined compensation package is not identical across firms: for instance, high-growth firms should not make much use of stock in their compensation packages, whereas the opposite is true for low-growth firms. That is, there is no fixed rule that works for every type of firm. As a consequence, generalized regulatory decisions that ban stock-based compensation, or board of directors' decisions on CEO compensation that are based on "conventional wisdom," are particularly dangerous, as they do not consider that each firm needs a different incentive scheme.

Interestingly, our calibrated model also shows that if for any reason implementing the first-best contract is not possible, then it is better for shareholders to compensate CEOs with too much stock rather than too little. In fact, for most parameter specifications, firm value under a high effort/conceal equilibrium is much closer to the first best than under a low effort/reveal equilibrium. This finding may explain the widespread use of stock-based compensation in the real world, even in situations where ex post it may appear that it was not optimal for shareholders.
Indeed, our model helps shed light on the incentives and disincentives of CEOs when their compensation is too heavily tilted toward stock. The 1990s high-tech boom and collapse and the 2007–2008 financial crisis offer interesting examples of the mechanism discussed in our model. The 1990s high-tech boom was characterized by expectations of high growth rates and high uncertainty, coupled with high-powered, stock-based executive compensation. Firms with high perceived growth options were priced much higher than firms with similar operating performance but lower perceived growth options. We argue that because of their high-powered incentives,


executives had an incentive to present their firms as high-growth firms, even when the prospects of high future growth faded at the end of the 1990s. Our analysis suggests that high-powered incentives induce the pretense of high growth and eventually lead to a crash of the stock price.

Similarly, the source of the banking crisis of 2007–2008 may also be partially understood through the mechanism discussed in the paper, as banks share some of the key characteristics assumed in the model. In particular, there is a serious lack of transparency about banks' investment behavior (e.g., complicated derivative structures) as well as about the available investment opportunities. Moreover, banks, and especially investment banks, employ high-powered, stock-based incentives, or contracts that are effectively stock-based.

Consider, for instance, the growth in the mortgage market. It is reasonable to argue that banks' CEOs observed a slowdown in the growth rate of the prime mortgage market. When investment opportunities decline, the first-best action is to disclose the news to investors, return some capital to shareholders, and suffer a capital loss on the stock market. However, if a CEO wants to conceal the decline in the growth of investment opportunities, then our model implies that in order to maintain the pretense that nothing has happened, the bank's manager first has to invest in negative-NPV projects, such as possibly subprime mortgages if the mortgage rate charged does not correspond to the riskiness of the loan.24 Moreover, to keep up the pretense for as long as possible, the manager also has to disinvest and pass on positive-NPV projects. According to the model, the outcome of this suboptimal investment program is a market crash of the stock price and the need for a large recapitalization of the firm.
As the debate about optimal CEO compensation evolves, our model shows that too much stock sensitivity is "bad," as it induces this perverse effect on managers' investment ex post. Nevertheless, too little stock sensitivity has a potentially even worse effect, providing the CEO with no incentive to search for good investment opportunities.

24. Laeven and Levine (2009) provide empirical evidence that bank risk taking is positively correlated with ownership concentration. Fahlenbrach and Stulz (2009), however, conclude that bankers' incentives "cannot be blamed for the credit crisis or for the performance of banks during the crisis" (p. 18), basing their conclusion on evidence on CEOs' personal losses during the crisis. On this last point, however, see our discussion at the end of Section IV.B.


APPENDIX

This Appendix contains only sketches of the proofs of the propositions. Details can be found in a separate Online Technical Appendix available on the authors' Web pages.

Proof of Proposition 1. The capital evolution equation is given by

(48)    dK_t/dt = I_t − δK_t.

From (2), the target level of capital J_t is given by J_t = e^{Gt} for t < τ∗ and J_t = e^{Gτ∗ + g(t−τ∗)} for t ≥ τ∗. Imposing K_t = J_t for every t and using (48), the optimal investment policy is given by (8). From (7), the dividend stream is (9). QED

Proof of Proposition 2. For t ≥ τ∗, P^{after}_{fi,t} stems from integration of future dividends. For t < τ∗, the expectation in P^{before}_{fi,t} can be computed by integration by parts. QED

Proof of Corollary 1. P^{before}_{fi,t}(λ^H) > P^{before}_{fi,t}(λ^L) iff A_{fi}^{λ_H} > A_{fi}^{λ_L}. Substituting, this relation holds iff z − r − δ > 0, which is always satisfied. QED
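The policy in the proof of Proposition 1 can be checked numerically: keeping K_t = J_t = e^{Gt} in (48) requires the investment I_t = (G + δ)J_t before τ∗. A minimal Euler-discretization sketch, with illustrative parameter values rather than the paper's calibration:

```python
import math

# Euler discretization of the capital dynamics (48), dK_t/dt = I_t - delta*K_t.
# Before tau*, the target is J_t = e^{G t}; tracking it requires the investment
# I_t = (G + delta) * J_t.  The values of G, delta, T, dt are illustrative.

def simulate_capital(G=0.03, delta=0.10, T=10.0, dt=1e-4):
    K = 1.0                      # K_0 = J_0 = e^0 = 1
    steps = int(round(T / dt))
    for i in range(steps):
        t = i * dt
        J = math.exp(G * t)      # target capital level
        I = (G + delta) * J      # investment policy implied by K_t = J_t
        K += (I - delta * K) * dt
    return K

K_T = simulate_capital()
target = math.exp(0.03 * 10.0)   # J_T = e^{G*T}; K_T should track it closely
```

The simulated K_T stays on the target path because the policy offsets depreciation exactly and adds the growth term GJ_t.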

Proof of Corollary 2. The first part immediately follows from the fact that a manager/owner values the firm as the present value of future dividends discounted at β. The second part follows from the utility of the manager at τ∗ and at t < τ∗. First, after τ∗, there is no benefit from exerting effort, so the manager/owner's utility is

U_{Div,τ∗} = ∫_{τ∗}^{∞} e^{−β(t−τ∗)} D_t^g dt = [(z − g − δ)/(β − g)] e^{Gτ∗}.

Before τ∗, the utility of the manager for given effort e is

U_{Div,t}(e) = E[ ∫_t^{τ∗} e^{−β(u−t)} D_u^G (1 − c(e)) du + e^{−β(τ∗−t)} U_{Div,τ∗} ]
            = e^{Gt} [(z − G − δ)/(β + λ(e) − G)] [ 1 − c(e) + λ(e) (z − g − δ)/((z − G − δ)(β − g)) ].

Given H_{Div} in (16), the condition U_{Div,t}(e^H) > U_{Div,t}(e^L) translates into (38). QED

Proof of Proposition 3. As in Corollary 2, from τ∗ onward the manager will not exert high effort, resulting in a utility level


at τ∗ given by

U_{Stock,τ∗} = ∫_{τ∗}^{∞} e^{−β(s−τ∗)} (η P^{after}_{fi,s}) ds = [η/(r − g)] [(z − g − δ)/(β − g)] e^{Gτ∗}.

Thus, for t < τ∗ we have

U_{Stock,t}(e) = E[ ∫_t^{τ∗} e^{−β(s−t)} (η P^{before}_{fi,s})(1 − c(e)) ds + e^{−β(τ∗−t)} U_{Stock,τ∗} ]
              = [η e^{Gt}/(β + λ(e) − G)] [ A_{fi}^{λ} (1 − c(e)) + λ(e) (z − g − δ)/((r − g)(β − g)) ].

Let e^H be the optimal strategy in equilibrium. The price function is then P^{before}_{fi,t} with A_{fi}^{λ_H}. We then obtain the condition U_{Stock,t}(e^H) > U_{Stock,t}(e^L) iff (19) holds. The Nash equilibrium follows. Similarly, if e^L is the optimal strategy in equilibrium, then the price function is P^{before}_{fi,t} with A_{fi}^{λ_L}. Thus, U_{Stock,t}(e^H) < U_{Stock,t}(e^L) iff (19) does not hold. QED

Proof of Lemma 1. Conditional on the decision to conceal g, the manager must provide a dividend stream D_t^G, as any deviation would make him or her lose his or her job. Because he or she cannot affect the stock price, after τ∗ his or her utility depends only on the length of his or her tenure. Because we normalize the manager's outside options to zero, his or her optimal choice is to maximize T∗∗. QED

Proof of Proposition 4. Point (1). The CEO must mimic D_t^G for as long as possible. From (7), this target determines the investments I_t (in point (2) of the proposition) and thus the evolution of capital dK_t/dt = I_t − δK_t for given initial condition K̂_{τ∗}. From the monotonicity property of differential equations in their initial value and the definition of T∗∗ as the time at which K̂_{T∗∗} = ξ J_{T∗∗}, T∗∗ must be increasing with K̂_{τ∗}. The claim follows.

Point (3). At time τ∗ we have K_{τ∗} = J_{τ∗} = e^{Gτ∗}, which implies dK_t/dt|_{τ∗} = G e^{Gτ∗} > dJ_t/dt|_{τ∗} = g e^{Gτ∗}. Thus, the trajectory of capital at τ∗ is above J_t. By continuity, there is an interval [0, t_1] in which K_t > J_t. Solving explicitly for the capital evolution shows that as t increases, K_t − J_t → −∞. Because K_{τ∗+dt} − J_{τ∗+dt} > 0, there must be a t_1 at which K_{t_1} − J_{t_1} = 0. Define h∗ ≡ t_1 − τ∗; substituting in K_{t_1} − J_{t_1} = 0, h∗ must satisfy

(49)    0 = [(z − g − δ)/(δ + g)] e^{−Gh∗} (e^{gh∗} − e^{−δh∗}) + [(z − G − δ)/(δ + G)] (e^{−(δ+G)h∗} − 1).
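Equation (49) defines h∗ only implicitly, so in practice it is found numerically. The sketch below uses bisection on one plausible reading of the printed equation; both the transcribed functional form and the parameter values (z, G, g, δ) are our assumptions, not the paper's Table I calibration.

```python
import math

# Bisection for the root h* of equation (49).  The function f49 below is one
# reading of the printed equation; its form and the parameters z, G, g, delta
# are illustrative assumptions.

def f49(h, z=0.20, G=0.03, g=0.01, delta=0.10):
    a = (z - g - delta) / (delta + g)
    b = (z - G - delta) / (delta + G)
    return (a * math.exp(-G * h) * (math.exp(g * h) - math.exp(-delta * h))
            + b * (math.exp(-(delta + G) * h) - 1.0))

def solve_h_star(lo=1e-6, hi=200.0, tol=1e-10):
    # For these parameters f49 is positive just above h = 0 (slope ~ G - g > 0)
    # and negative for large h, so [lo, hi] brackets a positive root.
    f_lo, f_hi = f49(lo), f49(hi)
    assert f_lo > 0 > f_hi, "bracket does not straddle a sign change"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f49(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

h_star = solve_h_star()
```

Bisection is used rather than Newton's method because only a sign change, not a derivative, is needed, and the bracket [lo, hi] guarantees convergence to the positive root rather than the trivial root at h = 0.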


For t > t_1, K_t < J_t. Solving explicitly for the capital evolution shows that K_t − J_t diverges to −∞ as t → ∞. From the condition K_{T∗∗} − ξ J_{T∗∗} = 0, and defining h∗∗ ≡ T∗∗ − τ∗, we obtain the equation defining h∗∗:

(50)    0 = [1 + (e^{−(G−g)h∗} − 1) e^{(z−G−δ)(h∗∗−h∗)}] − e^{−(G−g)h∗∗} ξ.  QED

Proof of Proposition 5. Let t > h∗∗. If a cash shortfall has not been observed by t, then a shift cannot have occurred before t − h∗∗. Bayes' formula implies that T∗∗ = τ∗ + h∗∗, conditional on not observing a cash shortfall by time t, has the conditional distribution

F_{T∗∗}(t′ | T∗∗ > t) = Pr(τ∗ < t′ − h∗∗ | τ∗ > t − h∗∗) = [e^{−λ(t−h∗∗)} − e^{−λ(t′−h∗∗)}] / e^{−λ(t−h∗∗)} = 1 − e^{−λ(t′−t)}.

That is, T∗∗ has the exponential distribution f(T∗∗ | no cash shortfall by t) = λ e^{−λ(T∗∗−t)}. The value of P_{ai,t} for t > h∗∗ in (22) then follows from the pricing formula (21).

Let t < h∗∗; then the conditional distribution of T∗∗ is zero in the range [t, h∗∗], as even a shift at 0 would be revealed only at h∗∗. The density is then f(T∗∗) = λ e^{−λ(T∗∗−h∗∗)} 1_{(T∗∗ > h∗∗)}. Using this density to compute the expectation, we find

(51)    P_{ai,t} = (z − G − δ) e^{Gt} [1 − e^{−(r−G)(h∗∗−t)}]/(r − G) + e^{rt} e^{(G−r)h∗∗} A_{ai}^{λ}.  QED
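The conditional distribution in the proof of Proposition 5 is just the memorylessness of the exponential τ∗ shifted by h∗∗, which is easy to verify by simulation. The values of λ, h∗∗, t, and t′ below are arbitrary illustrations (with t > h∗∗, as the proposition requires):

```python
import math
import random

# Monte Carlo check of the conditional law in the proof of Proposition 5:
# with tau* ~ exponential(lam) and T** = tau* + h2 (h2 playing the role of h**),
# conditioning on no cash shortfall by t (i.e., T** > t, with t > h2) gives
#   Pr(T** <= t_prime | T** > t) = 1 - exp(-lam * (t_prime - t)).
# lam, h2, t, t_prime are illustrative values, not model calibrations.

def conditional_cdf_mc(lam=0.5, h2=2.0, t=3.0, t_prime=4.0, n=200_000, seed=0):
    rng = random.Random(seed)
    survived = hit = 0
    for _ in range(n):
        T2 = rng.expovariate(lam) + h2   # T** = tau* + h**
        if T2 > t:
            survived += 1
            if T2 <= t_prime:
                hit += 1
    return hit / survived

estimate = conditional_cdf_mc()
exact = 1.0 - math.exp(-0.5 * (4.0 - 3.0))   # 1 - e^{-lam*(t'-t)}, about 0.3935
```

The simulated conditional frequency matches 1 − e^{−λ(t′−t)} up to Monte Carlo error, illustrating why the shift by the deterministic h∗∗ drops out of the conditional law.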

Proof of Proposition 6. Let τ∗ > h∗∗. There are two equilibria to consider: a reveal equilibrium and a conceal equilibrium. Expressions (24), (25), and (26) contain the expected utilities at τ∗ under conceal and reveal strategies in the two possible equilibria. A reveal equilibrium occurs iff U^{reveal}_{Stock,τ∗} > U^{conceal,fi}_{Stock,τ∗}, and a conceal equilibrium occurs iff U^{conceal,ai}_{Stock,τ∗} > U^{reveal}_{Stock,τ∗}. The conditions in the claim are obtained by simple substitution. Finally, if τ∗ < h∗∗, then U^{conceal,ai}_{Stock,τ∗} depends on the price in (51). Details are left to the Online Technical Appendix. QED

Proof of Proposition 7. Let t ≥ h∗∗. In a conceal equilibrium with high effort, P_{ai,t} in (22) with A_{ai}^{λ_H} determines the wage w_t = η P_{ai,t}. Using expression (29) with U_{τ∗} = U^{conceal,ai}_{Stock,τ∗}, the expected


utility under effort e is

U_{Stock,t}(e) = e^{Gt} [η A_{ai}^{λ_H}/(β + λ(e) − G)] [ 1 − c(e) + λ(e) H_{Stock} ],

where λ(e) and c(e) are the intensity and the cost of effort under effort choice e. The condition in the proposition follows from the maximization condition U_{Stock,t}(e^H) > U_{Stock,t}(e^L). Finally, given e^H chosen by the manager, λ_H indeed applies in equilibrium, a conceal equilibrium occurs at τ∗, and thus the price function is P_{ai,t} in (22), concluding the proof. A similar proof holds for t < h∗∗. The expressions are in the Online Technical Appendix. QED

Proof of Propositions 8, 9, and 10. See the Online Technical Appendix.

Proof of Proposition 11. a. The two utilities under reveal and conceal strategies are

(52)    U^{Reveal}_{τ∗} = [A_a/(β − B_a)] e^{(B_a+C_a)τ∗};    U^{Conceal}_{τ∗} = A_b e^{B_b τ∗} [1 − e^{−(β−B_b)h∗∗}]/(β − B_b).

It follows that U^{Reveal}_{τ∗} ≥ U^{Conceal}_{τ∗} for all τ∗ iff conditions (43) are satisfied.

b. For i = H, L and t < τ∗ we can compute

U_t^i = [A_b e^{B_b t} (1 − c_i)]/(β + λ_i − B_b) + [λ_i A_a e^{(C_a+B_a)t}] / [(β − B_a)(β + λ_i − (C_a + B_a))].

Tedious algebra shows that U_t^H ≥ U_t^L is satisfied for all t iff conditions (44) are satisfied.

c. For all t and τ∗ we must have U_t ≥ U_t^O. For t < τ∗, the constraints obtained above imply that U_t^H is the relevant utility. Imposing B_b = C_a + B_a, U_t^H ≥ A_O e^{B_O t} iff

e^{B_b t} [ A_b(1 − c_H) + λ_H A_a / ((β − B_a)(β + λ_H − B_b)) ] ≥ A_O e^{B_O t}.

This condition is satisfied for all t iff the conditions in (45) are satisfied. Similarly, for t ≥ τ∗ the constraints above imply that U_t = U_t^{Reveal}, given by (52). Thus, the outside option condition is

[A_a/(β − B_a)] e^{C_a τ∗} e^{B_a t} ≥ A_O e^{B_O t},


which is satisfied for all t and τ∗ iff conditions (45) are satisfied. QED

HARVARD UNIVERSITY AND NATIONAL BUREAU OF ECONOMIC RESEARCH
UNIVERSITY OF CHICAGO, CENTER FOR ECONOMIC AND POLICY RESEARCH, AND NATIONAL BUREAU OF ECONOMIC RESEARCH
HEBREW UNIVERSITY AND CENTER FOR ECONOMIC AND POLICY RESEARCH

REFERENCES

Aghion, Philippe, and Jeremy Stein, "Growth vs. Margins: Destabilizing Consequences of Giving the Stock Market What It Wants," Journal of Finance, 63 (2008), 1025–1058.
Bebchuk, Lucian, and Jesse Fried, Pay without Performance: The Unfulfilled Promise of Executive Compensation (Cambridge, MA: Harvard University Press, 2004).
Bebchuk, Lucian, and Lars Stole, "Do Short-Term Objectives Lead to Under- or Overinvestment in Long-Term Projects?" Journal of Finance, 48 (1993), 719–729.
Beneish, Messod, "Incentives and Penalties Related to Earnings Overstatements That Violate GAAP," Accounting Review, 74 (1999), 425–457.
Bergstresser, Daniel, and Thomas Philippon, "CEO Incentives and Earnings Management," Journal of Financial Economics, 80 (2006), 511–529.
Blanchard, Olivier, "Debt, Deficits, and Finite Horizons," Journal of Political Economy, 93 (1985), 223–247.
Bolton, Patrick, Jose Scheinkman, and Wei Xiong, "Executive Compensation and Short-Termist Behavior in Speculative Markets," Review of Economic Studies, 73 (2006), 577–610.
Burns, Natasha, and Simi Kedia, "The Impact of Performance-Based Compensation on Misreporting," Journal of Financial Economics, 79 (2006), 35–67.
Clementi, Gian Luca, and Hugo Hopenhayn, "A Theory of Financing Constraints and Firm Dynamics," Quarterly Journal of Economics, 121 (2006), 229–265.
DeMarzo, Peter, and Michael Fishman, "Agency and Optimal Investment Dynamics," Review of Financial Studies, 20 (2007), 151–188.
Easterbrook, Frank H., "Two Agency-Cost Explanations of Dividends," American Economic Review, 74 (1984), 650–659.
Edmans, Alex, and Xavier Gabaix, "Is CEO Pay Really Inefficient? A Survey of New Optimal Contracting Theories," European Financial Management, 15 (2009), 486–496.
Edmans, Alex, Xavier Gabaix, and Augustin Landier, "A Multiplicative Model of Optimal CEO Incentives in Market Equilibrium," Review of Financial Studies, 22 (2009), 4881–4917.
Eisfeldt, Andrea, and Adriano Rampini, "Managerial Incentives, Capital Reallocation, and the Business Cycle," Journal of Financial Economics, 87 (2008), 177–199.
Fahlenbrach, Rüdiger, and Rene M. Stulz, "Bank CEO Incentives and the Credit Crisis," Charles A. Dice Center for Research in Financial Economics Working Paper No. 2009-13, 2009.
Gabaix, Xavier, and Augustin Landier, "Why Has CEO Pay Increased So Much?" Quarterly Journal of Economics, 123 (2008), 49–100.
Goldman, Eitan, and Steve Slezak, "An Equilibrium Model of Incentive Contracts in the Presence of Information Manipulation," Journal of Financial Economics, 80 (2006), 603–626.
Graham, John, Campbell Harvey, and Shiva Rajgopal, "The Economic Implications of Corporate Financial Reporting," Journal of Accounting and Economics, 40 (2005), 3–73.
Hall, Brian, and Kevin J. Murphy, "The Trouble with Stock Options," Journal of Economic Perspectives, 17 (2003), 49–70.
Healy, Paul, "The Effect of Bonus Schemes on Accounting Decisions," Journal of Accounting and Economics, 7 (1985), 85–107.
Inderst, Roman, and Holger Mueller, "CEO Compensation and Private Information: An Optimal Contracting Perspective," New York University Working Paper, 2006.
Jensen, Michael, "Agency Costs of Overvalued Equity," Financial Management, 34 (2005), 5–19.
Johnson, Shane, Harley E. Ryan, Jr., and Yisong S. Tian, "Managerial Incentives and Corporate Fraud: The Sources of Incentives Matter," Review of Finance, 13 (2009), 115–145.
Ke, Bin,